# Thermometry by correlated dephasing of impurities in a 1D Fermi gas

Sindre Brattegard, Mark T. Mitchison

Published 2023-07-19 (http://arxiv.org/abs/2307.10132v4)
###### Abstract
We theoretically investigate the pure dephasing dynamics of two static impurity qubits embedded within a common environment of ultracold fermionic atoms, which are confined to one spatial dimension. Our goal is to understand how bath-mediated interactions between impurities affect their performance as nonequilibrium quantum thermometers. By solving the dynamics exactly using a functional determinant approach, we show that the impurities become correlated via retarded interactions of the Ruderman-Kittel-Kasuya-Yosida type. Moreover, we demonstrate that these correlations can provide a metrological advantage, enhancing the sensitivity of the two-qubit thermometer beyond that of two independent impurities. This enhancement is most prominent in the limit of low temperature and weak collisional coupling between the impurities and the gas. We show that this precision advantage can be exploited using standard Ramsey interferometry, with no need to prepare correlated initial states nor to individually manipulate or measure the impurities. We also quantitatively assess the impact of ignoring these correlations when constructing a temperature estimate, finding that acceptable precision can still be achieved in some parameter regimes using a simplified model of independent impurities. Our results demonstrate the rich nonequilibrium physics of impurities dephasing in a common Fermi gas, and may help to provide better temperature estimates at ultralow temperatures.
## I Introduction
The last decades have seen impressive developments in experimental techniques for ultracold atomic gases. It is now possible to tune the geometry and dimensionality of the system, the nature and strength of interparticle interactions, and even the exchange statistics of the constituent particles, enabling exploration of a wide range of fascinating physical phenomena [1; 2; 3; 4]. A particularly promising development in this regard is the creation of homogeneous ultracold gases [5; 6; 7; 8; 9; 10; 11]: these are especially useful for the quantum simulation of condensed-matter and high-energy physics models, for which translation invariance is crucial. However, measuring the temperature of these systems is challenging. Standard methods like time-of-flight measurements completely obliterate the system, do not give access to spatially resolved temperature information, and may suffer from a loss of precision at ultra-low temperatures [12].
An interesting alternative is to work with ultracold atomic mixtures comprising more than one species of atom [13; 14]. If the density of one species is much lower than the others, they may be considered as impurities embedded in a fluid of majority atoms. Such impurities exhibit a range of interesting behaviour straddling the interface between open quantum systems and condensed-matter physics. For example, mobile impurities form polarons in bosonic [15; 16] and fermionic [17; 18] environments, while a static impurity exhibits universal dynamics manifesting the Anderson orthogonality catastrophe [19; 20; 21; 22; 23]. By measuring static and dynamic properties of the impurities, numerous experiments have been able to accurately infer the temperature of the host gas [24; 25; 26; 27; 28; 29; 30; 31]. This opens up the tantalising prospect of exploiting the quantum mechanical behaviour of ultracold impurities for improved thermometry, in a way that is also local and minimally destructive, in principle.
The quest to understand the fundamental capabilities and limits of quantum thermometry has inspired a substantial theoretical literature (see Ref. [32] for a review). Seminal early work established the optimum sensitivity and level structure of fully equilibrated probes [33], while numerous proposals have put forward the possibility of using nonequilibrium impurity dynamics for thermometry, especially in the context of ultracold gases [34; 35; 36; 37; 38; 39; 40; 41; 42]. More recently, it has been established how temperature estimation is affected by informational constraints such as limited measurement data [43; 44; 45; 46; 47; 48] or coarse-grained measurements [49; 50]. A particularly important issue of current interest is to understand how relevant physical effects such as strong system-bath correlations [51; 52; 53; 54] may affect thermometry protocols in real impurity systems.
Another important physical effect is the interaction between several probes induced by their mutual interaction with a common thermal environment. This is important because experiments typically operate in a regime with several impurities -- perhaps even hundreds or thousands -- embedded in a single copy of the gas. Naively, increasing the number of impurities is helpful to increase the signal-to-noise ratio, which scales by a factor of \(\sqrt{M}\) for \(M\) independent impurities. Yet independence is generally spoiled by bath-induced interactions, which arise naturally in ultracold mixtures [55; 56; 57; 58; 59; 60; 61; 62; 63; 64] and have been observed in recent experiments [65; 66]. Previous theoretical work has shown that, in some settings, trapped impurities can be configured to suppress bath-mediated interactions [67; 36], but in general these interactions can give rise to classical and quantum correlations between
impurities [68]. It is well known that quantum correlations can yield a metrological advantage in some scenarios [69; 70] and thermometry is no exception [71]. Indeed, recent works have shown that bath-mediated interactions can improve temperature estimation for impurities embedded in a bosonic environment [72; 73; 74].
In this work, we take the first steps towards understanding how bath-induced interactions affect thermometry for impurities embedded in a fermionic environment. Following previous work by one of us [39], we focus on dephasing probes [38; 40; 41; 42], where information about the temperature is imprinted on quantum coherences that can be measured experimentally by Ramsey interferometry [16; 22; 31; 75]. Specifically, we consider a system of two static impurities, each possessing two internal states, which are coupled to a one-dimensional, homogeneous Fermi gas. Homogeneous systems are particularly challenging for thermometry, since absorption imaging of the spatial density provides no information on temperature, while time-of-flight imaging of the momentum distribution yields diminishing sensitivity at low temperature because only a small number of atoms near the Fermi energy yield useful information. In this context, we show that correlations induced by bath-mediated interactions can yield a collective enhancement for thermometry at low temperatures. Remarkably, this advantage survives even when one is limited to local observables that are accessible via Ramsey interferometry.
In the following, we describe our two-impurity setup in detail (Sec. II.1) and explain how to solve the quantum dynamics of the impurities exactly using a functional determinant approach [76; 77; 78] (Sec. II.2). We then analyse the dephasing dynamics in detail (Sec. III), focusing especially on the effect that the impurities have on each other via the bath. Unlike the dipole-dipole interactions that typically arise in bosonic baths, itinerant fermions induce Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions between localised spins [79; 80] with a non-trivial oscillatory spatial dependence whose period is set by the Fermi wavelength. Interestingly, we show that monitoring the two-impurity dynamics allows one to observe the effect of RKKY-like interactions developing in real time.
Next, we describe how temperature can be optimally inferred from measurements on the impurities (Sec. IV.1), and propose a Ramsey protocol that is generally sub-optimal but experimentally feasible (Sec. IV.2). Our approach is based on local temperature estimation theory [81; 32], where the quantum Fisher information sets the ultimate limit for precision. Using these tools, we analyse the temperature sensitivity of the two-impurity system (Sec. IV.3), finding that bath-induced correlations enhance precision even under local measurements, while using entangled initial states yields no advantage.
Finally, we ask under what conditions the impurities can be modelled as independent (Sec. IV.4). This approximation may be useful to simplify the construction of the temperature estimator, especially when scaling to a larger number of impurities \(M>2\). For two impurities at fixed positions, we find that the validity of this approximation depends quite strongly on the impurity separation, but works best at higher temperatures. Apart from their potential relevance for current experiments on multiple impurities, our results open up several interesting future directions for theoretical research that we discuss in Sec. IV.4.
## II Setup
### Description of the system
We consider a system \(S\) of two impurity atoms coupled to an environment \(E\) comprising a gas of spin-polarized fermions with homogeneous density, which is confined to a one-dimensional (1D) box of length \(L\). We are interested in the ultra-low temperature regime where \(s\)-wave scattering is dominant. Because of the anti-symmetry of the fermionic wavefunction there is no \(s\)-wave scattering between identical fermions, so the environment can be treated as a non-interacting gas to a good approximation.
The impurity atoms are modelled as two-level systems with energy eigenstates \(\ket{\uparrow}_{i}\), \(\ket{\downarrow}_{i}\), with \(i=1,2\) labelling the two impurity qubits. We take the impurities to be stationary and strongly localized at fixed positions \(x_{1}\) and \(x_{2}\), which can be achieved using a species-selective optical lattice that only affects the impurities [81; 82; 31]. We work in the pseudopotential approximation, which models \(s\)-wave collisions between the impurities and the surrounding gas atoms as a contact interaction. The total Hamiltonian is then
\[\hat{H}=\hat{H}_{S}+\hat{H}_{E}+\hat{H}_{I},\tag{1}\]
\[\hat{H}_{S}=\sum_{i=1}^{2}\varepsilon_{i}\ket{\uparrow}_{i}\!\bra{\uparrow},\tag{2}\]
\[\hat{H}_{E}=-\frac{\hbar^{2}}{2m}\int_{-L/2}^{L/2}\mathrm{d}x\,\hat{\Psi}^{\dagger}(x)\nabla^{2}\hat{\Psi}(x),\tag{3}\]
\[\hat{H}_{I}=-\frac{\hbar^{2}}{ma}\sum_{i=1}^{2}\ket{\uparrow}_{i}\!\bra{\uparrow}\otimes\hat{\Psi}^{\dagger}(x_{i})\hat{\Psi}(x_{i}),\tag{4}\]
where \(\varepsilon_{i}\) is the local energy splitting between the impurity eigenstates, \(\hat{\Psi}^{\dagger}(x)\) is the field operator that creates a fermion of mass \(m\) at position \(x\), such that
Figure 1: A sketch of the setup we are considering. Two impurities separated by distance \(2x_{0}\) (gray balls) embedded in a 1D Fermi gas confined by a box potential of length \(L\) (blue background).
\(\left\{\hat{\Psi}(x),\hat{\Psi}^{\dagger}(x^{\prime})\right\}=\delta(x-x^{\prime})\), and \(a\) is the scattering length describing impurity-fermion collisions. We have assumed that the states \(\left|\downarrow\right\rangle_{i}\) effectively do not interact with the gas. This can be achieved, for example, by using a spin-dependent Feshbach resonance [84] to tune the corresponding scattering length to a very large value. Note that in 1D the interaction strength is inversely proportional to the scattering length [85].
We assume that the environment is initially in thermal equilibrium at temperature \(T\), with \(\hat{\rho}_{T}\propto e^{-\hat{H}_{E}/k_{B}T}\) the corresponding thermal state. The initial state of the system and environment is taken to be a tensor product
\[\hat{\rho}(0)=\hat{\rho}_{S}(0)\otimes\hat{\rho}_{T}, \tag{5}\]
which can be prepared since the \(\left|\downarrow\right\rangle_{i}\) states do not perturb the gas. A specific experimental protocol to realise this is discussed in Sec. IV.2.
We want to infer the temperature of the gas by observing the dynamics of the probes. The latter are described by their reduced density matrix
\[\hat{\rho}_{S}(t)=\mathrm{tr}_{E}\left[e^{-i\hat{H}t/\hbar}\hat{\rho}(0)e^{i \hat{H}t/\hbar}\right], \tag{6}\]
obtained by tracing over the environment. Since \([\hat{H}_{I},\hat{H}_{S}]=0\), the system Hamiltonian merely generates trivial phase factors that are irrelevant for the system-environment dynamics. From here on we remove these by working in a rotating frame via the transformation \(\hat{\rho}_{S}(t)\to e^{i\hat{H}_{S}t/\hbar}\hat{\rho}_{S}(t)e^{-i\hat{H}_{S}t /\hbar}\), which is tantamount to setting \(\varepsilon_{i}=0\).
Let \(\sigma\) label the different internal states of \(S\), taking the values \(\sigma\in\left\{\uparrow\uparrow,\uparrow\downarrow,\downarrow\uparrow, \downarrow\downarrow\right\}\). It is straightforward to show that
\[\langle\sigma|\hat{\rho}_{S}(t)|\sigma^{\prime}\rangle=\nu_{\sigma,\sigma^{ \prime}}(t)\,\langle\sigma|\hat{\rho}_{S}(0)|\sigma^{\prime}\rangle\,, \tag{7}\]
where we have defined the complex functions
\[\nu_{\sigma,\sigma^{\prime}}(t)=\mathrm{tr}_{E}\left[e^{i\hat{H}_{\sigma}t/ \hbar}e^{-i\hat{H}_{\sigma^{\prime}}t/\hbar}\hat{\rho}_{T}\right], \tag{8}\]
with \(\hat{H}_{\sigma}\) denoting the Hamiltonian of the environment conditioned on the internal state of the system:
\[\hat{H}_{\sigma}=\,\langle\sigma|\hat{H}_{E}+\hat{H}_{I}|\sigma\rangle\,. \tag{9}\]
The diagonal elements of \(\hat{\rho}_{S}(t)\), with \(\sigma=\sigma^{\prime}\), are constant because the energy of the probes is conserved. The off-diagonal terms with \(\sigma\neq\sigma^{\prime}\) will evolve and generally decay in time according to \(\nu_{\sigma,\sigma^{\prime}}(t)\), which we refer to as the decoherence functions of our system.
### Calculation of the decoherence functions
To calculate the decoherence functions in Eq. (8), we employ the Levitov formula or functional determinant approach (FDA) [76; 77; 78], which has been widely used to study nonequilibrium impurity systems [86]. The FDA is a numerically exact method that maps a many-body expectation value into a determinant in single-particle space:
\[\nu_{\sigma,\sigma^{\prime}}(t)=\det\Bigl{[}1-\hat{n}+\hat{n}e^{i\hat{h}_{ \sigma^{\prime}}t/\hbar}e^{-i\hat{h}_{\sigma}t/\hbar}\Bigr{]}, \tag{10}\]
This equation holds because Eq. (8) involves only exponentials of quadratic fermionic operators. Here, \(\hat{h}_{\sigma}\) is the single-particle equivalent of \(\hat{H}_{\sigma}\), i.e. the Hamiltonian of a single particle in the gas conditioned on the impurities being in state \(\sigma\). Meanwhile, \(\hat{n}\) is an operator describing the initial Fermi-Dirac distribution of the gas atoms
\[\hat{n}=\Bigl{[}1+e^{(\hat{h}_{\downarrow\downarrow}-\mu)/k_{B}T}\Bigr{]}^{-1}. \tag{11}\]
We work in the grand canonical ensemble and use the chemical potential \(\mu\) to fix the total number of atoms \(N=\bar{n}L\), where \(\bar{n}\) is the number density of the atoms.
The box is modelled as an infinite square well with hard-wall boundary conditions at \(x=\pm L/2\). The single-particle Hamiltonians \(\hat{h}_{\sigma}\) are then differential operators of the form
\[\hat{h}_{\sigma}=-\frac{\hbar^{2}\nabla^{2}}{2m}+V_{\sigma}(x), \tag{12}\]
defined on the interval \(x\in[-L/2,L/2]\). Here, \(V_{\sigma}(x)\) is the effective potential felt by the fermions when the impurities are in state \(\sigma\). In the pseudopotential approximation, we have
\[V_{\downarrow\downarrow}(x) =0, \tag{13}\] \[V_{\uparrow\downarrow}(x) =-\frac{\hbar^{2}}{ma}\delta(x-x_{1})\] (14) \[V_{\downarrow\uparrow}(x) =-\frac{\hbar^{2}}{ma}\delta(x-x_{2}),\] (15) \[V_{\uparrow\uparrow}(x) =V_{\uparrow\downarrow}(x)+V_{\downarrow\uparrow}(x). \tag{16}\]
From here on, we take the impurities to be placed symmetrically around the centre of the box, \(x_{1}=-x_{0}\) and \(x_{2}=+x_{0}\). This assumption can be made without loss of generality because we always take \(L\) to be large enough to avoid boundary effects, so that only the impurity separation \(\Delta x=x_{2}-x_{1}=2x_{0}\) is relevant. Under this assumption, there are four independent decoherence functions that we need to fully describe the dynamics of the system. Further details on how we evaluate Eq. (10) numerically can be found in Appendix B.
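As an illustration of how Eqs. (10)-(16) can be evaluated in practice, the following minimal Python sketch builds the single-particle Hamiltonians \(\hat{h}_{\sigma}\) in a truncated sine basis of the box and evaluates a decoherence function as a determinant. This is only a schematic re-implementation under simplifying assumptions (natural units \(\hbar=m=k_{B}=1\), the delta potentials represented approximately in a finite basis, and direct matrix exponentiation instead of the overlap bookkeeping of Appendix B); all names and parameter values are our own illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq
from scipy.special import expit

# Illustrative parameters only; hbar = m = k_B = 1 throughout.
L_box = 100.0                      # box length
N_atoms = 50                       # fermion number, density nbar = N_atoms / L_box
N_basis = 200                      # sine-basis cutoff
k_F = np.pi * N_atoms / L_box
a_scat = -1.0 / k_F                # chosen so that k_F a = -1
lam = -1.0 / a_scat                # contact coupling lambda = -hbar^2 / (m a)

n = np.arange(1, N_basis + 1)
k_n = n * np.pi / L_box
E_n = 0.5 * k_n ** 2               # unperturbed box spectrum

def psi(x):
    """Unperturbed box eigenfunctions psi_n(x), as a vector over n."""
    return np.sqrt(2.0 / L_box) * np.sin(k_n * (x + L_box / 2))

def h_sigma(positions):
    """h_sigma in the sine basis: free spectrum plus lambda * delta(x - x_i)
    for every impurity position x_i that is in the interacting state."""
    h = np.diag(E_n).astype(complex)
    for x_i in positions:
        p = psi(x_i)
        h += lam * np.outer(p, p)  # V_nm = lambda * psi_n(x_i) * psi_m(x_i)
    return h

def occupations(T):
    """Fermi-Dirac occupations of the unperturbed levels, with mu fixing N_atoms."""
    def excess(mu):
        return np.sum(expit(-(E_n - mu) / T)) - N_atoms
    mu = brentq(excess, E_n[0] - 50 * T, E_n[-1])
    return expit(-(E_n - mu) / T)

def decoherence(t, sigma, sigma_prime, T, x0=5.0):
    """nu_{sigma,sigma'}(t) = det[1 - n + n e^{i h_{sigma'} t} e^{-i h_sigma t}], Eq. (10)."""
    pos = {"dd": [], "ud": [-x0], "du": [+x0], "uu": [-x0, +x0]}
    nF = np.diag(occupations(T)).astype(complex)
    U = expm(1j * h_sigma(pos[sigma_prime]) * t) @ expm(-1j * h_sigma(pos[sigma]) * t)
    return np.linalg.det(np.eye(N_basis) - nF + nF @ U)

# Example: single-impurity decoherence |nu_{ud,dd}(t)| at T = 0.05 T_F.
T = 0.05 * 0.5 * k_F ** 2
for t in (1.0, 5.0, 10.0):
    print(t, abs(decoherence(t, "ud", "dd", T)))
```

Looped over time and temperature, this kind of routine reproduces the qualitative features discussed below, although convergence with the basis cutoff is much slower than for the exact eigenbasis method of Appendix B.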
## III Dynamics of two impurities dephasing in a 1D Fermi gas
In this section we systematically investigate the dephasing dynamics of two impurities in a 1D Fermi gas. The physical scales of the gas are fully determined by the fermion number density, \(\bar{n}=N/L\), and are given by
the Fermi wavevector \(k_{F}=\bar{n}\pi\), the Fermi energy \(E_{F}=\hbar^{2}k_{F}^{2}/2m\), the Fermi time \(\tau_{F}=\hbar/E_{F}\), the Fermi velocity \(v_{F}=\hbar k_{F}/m\), and the Fermi temperature \(T_{F}=E_{F}/k_{B}\). The interaction between the impurities and the gas is parameterized by the dimensionless parameter \(k_{F}a\). We also define the interaction time between the impurities \(\tau_{i}=\Delta x/v_{F}\). This can be understood as the time for excitations to travel from one impurity to the other, i.e. the time after which bath-mediated interactions begin to play a role.
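For concreteness, these characteristic scales can be collected in a small helper; the sketch below is our own illustration in natural units \(\hbar=m=k_{B}=1\), with arbitrary example values.

```python
import numpy as np

# Fermi scales of a homogeneous 1D gas of density nbar (hbar = m = k_B = 1).
def fermi_scales(nbar):
    k_F = np.pi * nbar                      # Fermi wavevector
    E_F = 0.5 * k_F ** 2                    # Fermi energy
    return {"k_F": k_F, "E_F": E_F, "tau_F": 1.0 / E_F, "v_F": k_F, "T_F": E_F}

def interaction_time(nbar, dx):
    """tau_i = dx / v_F: time for excitations to cross the impurity separation dx."""
    return dx / fermi_scales(nbar)["v_F"]

print(fermi_scales(0.5), interaction_time(0.5, dx=12.0))
```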
In Fig. 2 we plot all four independent decoherence functions for both \(k_{F}a=-0.1\) and \(k_{F}a=-1\), which correspond to relatively strong and weak system-environment interaction strengths, respectively. For strong system-environment coupling we also plot the decoherence for temperatures \(T/T_{F}=0.1\), \(0.05\) and \(0.0001\). Here we focus exclusively on negative scattering lengths, in order to avoid the additional complication of the bound state that appears for \(a>0\). While in general all four decoherence functions contribute to the dynamics, each one can be tied to a distinct initial state \(\hat{\rho}_{S}(0)\), which is helpful to interpret their features.
We begin by examining the decoherence function \(\nu_{\uparrow\downarrow,\downarrow\downarrow}\) in Fig. 2(a). This describes the situation where only a single impurity interacts with the gas. For example, given the initial condition \(\hat{\rho}_{S}(0)=\ket{+}_{1}\!\bra{+}\otimes\ket{\downarrow}_{2}\!\bra{\downarrow}\), with
\[\ket{+}_{i}=\frac{1}{\sqrt{2}}\left(\ket{\uparrow}_{i}+\ket{\downarrow}_{i} \right), \tag{17}\]
the evolution of \(\hat{\rho}_{S}(t)\) is determined purely by \(\nu_{\uparrow\downarrow,\downarrow\downarrow}(t)\) and all other matrix elements are constant or zero. The dynamics of a single heavy impurity qubit interacting with a Fermi gas has been extensively studied in the literature, e.g. see [23] for a comprehensive review of the three-dimensional case and the Supplemental Material of Ref. [39] for analysis of the 1D case.
Fig. 2(a) reproduces the behaviour expected from these previous studies. For intermediate times between the Fermi time and the thermal timescale, \(\tau_{F}\ll t\ll\hbar\beta\), the decoherence function has a universal power-law decay, with an exponent that depends on the interaction between the impurity and Fermi gas. For later times, thermal effects dominate and the decoherence function decays exponentially in time with a rate proportional to temperature. In order to elucidate this behaviour, in Appendix C we derive an expression for the decoherence functions valid in the weak-coupling and low-temperature regime, using a cumulant expansion method adapted from Ref. [39]. The result is that
\[|\nu_{\uparrow\downarrow,\downarrow\downarrow}(t)|\sim\begin{cases}t^{-\alpha}&(\tau_{F}\ll t\ll\hbar\beta)\\ e^{-\alpha t/\hbar\beta}&(\hbar\beta\ll t),\end{cases} \tag{18}\]
where the exponents are determined by the dimensionless coupling strength
\[\alpha=(\pi k_{F}a)^{-2}\,. \tag{19}\]
The decoherence functions \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) and \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\) are shown in Fig. 2(b) and (c). These are the relevant decoherence functions for initial Bell state preparations \(\hat{\rho}_{S}(0)=|\Psi^{\pm}\rangle\!\langle\Psi^{\pm}|\) and \(\hat{\rho}_{S}(0)=|\Phi^{\pm}\rangle\!\langle\Phi^{\pm}|\), respectively, where
\[\begin{split}|\Psi^{\pm}\rangle&=\frac{1}{\sqrt{2}} \left(\ket{\uparrow_{1}\downarrow_{2}}\pm\ket{\downarrow_{1}\uparrow_{2}} \right)\\ |\Phi^{\pm}\rangle&=\frac{1}{\sqrt{2}}\left(\ket{ \uparrow_{1}\uparrow_{2}}\pm\ket{\downarrow_{1}\downarrow_{2}}\right).\end{split} \tag{20}\]
For short times, \(t\ll\tau_{i}\), both decoherence functions have the same power-law behavior as two impurities in independent baths, i.e. \(\nu_{\sigma,\sigma^{\prime}}\sim t^{-2\alpha}\). This reflects the fact that, for times much less than the bath-mediated interaction time, the two impurities should evolve independently. This implies that, for \(t<\tau_{i}\), we have
\[\hat{\rho}_{S}(t)=\mathcal{E}\hat{\rho}_{S}(0)\approx(\mathcal{E}_{1}\otimes \mathcal{E}_{1})\,\hat{\rho}_{S}(0), \tag{21}\]
where \(\mathcal{E}\) is the quantum channel describing the exact two-impurity evolution and \(\mathcal{E}_{1}\) is the channel describing a single impurity immersed in a Fermi gas. Eq. (21) immediately implies that \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\approx|\nu_{\uparrow\downarrow,\downarrow\downarrow}|^{2}\) and \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\approx\nu_{\uparrow\downarrow,\downarrow\downarrow}^{2}\). Therefore, the phase of \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) vanishes while the phase of \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\) is twice that of a single impurity, and their magnitudes are identical. However, around the interaction time \(\tau_{i}\) these two decoherence functions begin to show drastically different behaviour: the decay of \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) slows down completely, while \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\) begins to show marked oscillations with approximate period \(\tau_{i}\). These oscillations represent a non-Markovian effect, where excitations from one impurity travel through the gas and hit the other one, representing backflow of information from the environment into the system [87; 88].
These dynamical features for \(t\lesssim\tau_{i}\) are qualitatively captured by the cumulant expansion derived in Appendix C. For short times \(t\ll\tau_{i}\), our analytical
Figure 2: (a-d) The full dynamics of two impurity qubits with separation \(k_{F}\Delta x/2\pi=3\) coupled to a 1D fermionic bath at temperature \(T=0.0001T_{F}\) (solid), \(T=0.05T_{F}\) (dashdot) and \(T=0.1T_{F}\) (dotted) with coupling strength \(k_{F}a=-1\) (blue) and \(k_{F}a=-0.1\) (black). The gray dotted line denotes the interaction time \(\tau_{i}=\Delta x/v_{F}\). (e-f) The power-law behaviour of \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\) and \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) for \(T=0.0001T_{F}\) with an interaction strength of \(k_{F}a=-5\). The dotted lines are the power laws discussed in the main text, valid for low temperature and weak coupling.
theory predicts that both \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) and \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\) behave as \(\nu_{\sigma,\sigma^{\prime}}(t)\sim t^{-2\alpha}\), as expected for independent impurities. After the impurity interaction time \(t=\tau_{i}\), we find that the algebraic decay changes exponent as
\[\alpha\rightarrow\alpha\left(1\pm\cos^{2}(2k_{F}\Delta x)\right), \tag{22}\]
where the plus and minus signs correspond to \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\) and \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\), respectively. A comparison between this analytical prediction and the exact numerics can be seen in Fig. 2 (e) and (f), demonstrating excellent agreement within the weak-coupling and low-temperature regime where the cumulant expansion is valid.
The change of exponent at \(t=\tau_{i}\) manifests the well-known phenomenon of super- and sub-decoherence [89; 90; 91; 92]. In Appendix C we provide an explanation for this effect: the Bell states \(\left|\Phi^{\pm}\right\rangle\) and \(\left|\Psi^{\pm}\right\rangle\) couple to density fluctuations at the impurity positions that are in phase or anti-phase, respectively, which have very different low-frequency properties. Moreover, in Eq. (22) we recognise the sinusoidal spatial dependence that typically arises in perturbed Fermi gases, with the same period of \(\pi/k_{F}\) that characterises both Friedel oscillations and the RKKY interaction. Therefore, the transition from normal to sub- or super-decoherent dynamics manifests the time-retarded effect of RKKY-like correlations developing in real time. Note that the sub- and super-decoherent behaviour persists only up to the thermal time, \(t\simeq\hbar\beta\), after which both decoherence functions decay exponentially as \(\sim e^{-2\alpha t/\hbar\beta}\). This is the decay expected for two independent impurities, meaning that thermal noise washes out the dynamical effect of RKKY interactions on the decoherence signal at long times. We note that RKKY-like interactions have been observed in ultracold atomic gases recently [65].
Finally, Fig. 2(d) shows the decoherence function \(\left|\nu_{\uparrow\uparrow,\uparrow\downarrow}\right|\). This is the relevant decoherence function for the initial state \(\hat{\rho}_{S}(0)=\ket{\uparrow}_{1}\!\bra{\uparrow}\otimes\ket{+}_{2}\!\bra{+}\), which describes a situation where the pseudospin of the first impurity is flipped at the same time that the second impurity is prepared in a superposition. Thus, for short times, \(\nu_{\uparrow\uparrow,\uparrow\downarrow}\) is indistinguishable from \(\nu_{\uparrow\downarrow,\downarrow\downarrow}\). For times \(t\gtrsim\tau_{i}\) the perturbations from the first impurity hit the second one, causing the decoherence to slow down. The cumulant expansion at second order fails to capture this behaviour, indicating that this is a non-perturbative effect.
Aside from the temperature dependence of the long-time exponential decay, the interaction effects discussed above depend strongly on both temperature and coupling strength. In particular, non-Markovian effects due to exchange of excitations are most prominent at low temperatures, and disappear as the temperature is increased. Observing these features thus yields useful information on temperature, as we show in the next section.
## IV Thermometry with correlated dephasing probes
### Local temperature estimation theory
Since the impurity dynamics depends on the temperature of the gas, we can infer the temperature from measurements of many identical preparations of the probes. We first consider a general "prepare and measure" scenario: a given probe state \(\hat{\rho}_{S}(0)\) is prepared as in Eq. (5), the system decoheres for a time \(t\), and then a measurement in a given basis is made on the resulting state \(\hat{\rho}_{S}(t)\). Repeating this procedure for \(M\) identical preparations yields a sequence of measurement outcomes \(\mathbf{x}=\{x_{1},x_{2},\ldots,x_{M}\}\). In the most general case, the measurement may be a positive operator-valued measure (POVM): a collection of positive operators \(\hat{\Pi}(x)>0\) that are normalised as \(\sum_{x}\hat{\Pi}(x)=1\).
Using our knowledge of the temperature-dependence of \(\hat{\rho}_{S}(t)\), a temperature estimate \(\hat{T}(\mathbf{x})\) can be constructed from the measurement data \(\mathbf{x}\), e.g. using maximum-likelihood estimation. Any such estimate will carry uncertainty due to the randomness inherent to quantum measurements and the finite number of samples \(M\). The achievable precision depends not only on the probe state \(\hat{\rho}_{S}(T)\) but also the specific choice of POVM \(\{\hat{\Pi}(x)\}\).
To quantify the precision attainable in our setup, we use the theory of local quantum parameter estimation [81]. Here, the central quantity is the quantum Fisher information (QFI), \(\mathcal{F}_{T}\), which provides a lower bound on the variance of any unbiased estimator. In particular, an unbiased estimator obeys \(\mathbb{E}[\hat{T}(\mathbf{x})]=T\), while its variance obeys the (quantum) Cramer-Rao bound [93]
\[\mathbb{E}\left[\left(\hat{T}(\mathbf{x})-T\right)^{2}\right]\geq\frac{1}{MF_ {T}}\geq\frac{1}{M\mathcal{F}_{T}}. \tag{23}\]
Here, \(F_{T}\) is the Fisher information for the measurement \(\{\hat{\Pi}(x)\}\),
\[F_{T}\left(\hat{\rho}_{S}(t),\{\hat{\Pi}(x)\}\right)=\sum_{x}p(x)\left(\frac{ \partial\ln p(x)}{\partial T}\right)^{2}, \tag{24}\]
where \(p(x)=\mathrm{tr}\left[\hat{\rho}_{S}(t)\hat{\Pi}(x)\right]\) is the probability of obtaining outcome \(x\). The QFI is defined as the maximum of the Fisher information over all POVMs [93],
\[\mathcal{F}_{T}\left(\hat{\rho}_{S}(t)\right)=\max_{\{\Pi(x)\}}F_{T}\left(\hat {\rho}_{S}(t),\{\hat{\Pi}(x)\}\right). \tag{25}\]
Therefore, the QFI quantifies the maximum information on temperature that can be obtained from repeated measurements on the state \(\hat{\rho}_{S}(t)\).
The maximum in Eq. (25) is achieved by projective measurements of the symmetric logarithmic derivative (SLD), \(\hat{\Lambda}_{T}\). Writing the probe state in its eigenbasis as \(\hat{\rho}_{S}(t)=\sum_{n}r_{n}\left|r_{n}\right\rangle\left\langle r_{n}\right|\), the QFI and SLD are given explicitly by [81]
\[\mathcal{F}_{T} =2\sum_{m,n}\frac{|\langle r_{m}|\,\partial_{T}\hat{\rho}_{S}\,|r_{n} \rangle|^{2}}{r_{m}+r_{n}}, \tag{26}\] \[\hat{\Lambda}_{T} =2\sum_{m,n}\frac{\langle r_{m}|\,\partial_{T}\hat{\rho}_{S}\,|r_{ n}\rangle}{r_{m}+r_{n}}\,|r_{m}\rangle\,\langle r_{n}|\,. \tag{27}\]
Note that the SLD is explicitly dependent on temperature, and the quantum Cramer-Rao bound (23) can only be saturated in the asymptotic limit of \(M\gg 1\). Therefore, local estimation theory is most relevant when some coarse information on temperature is already known and a large quantity of measurement data is available. Alternative approaches to temperature estimation based on Bayesian statistics [43; 44; 45; 46; 47; 48] and single-shot information theory [94] have recently been developed, allowing thermometry even with few measurements and no prior information. Nevertheless, the QFI is very useful as a quantitative benchmark for our impurity thermometer in different parameter regimes.
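For reference, Eq. (26) translates directly into a few lines of numerics once \(\hat{\rho}_{S}(t)\) and its temperature derivative are available. The sketch below is our own illustration, not the authors' code; in practice \(\partial_{T}\hat{\rho}_{S}\) can be approximated by a finite difference of the state evaluated at two nearby temperatures.

```python
import numpy as np

def qfi(rho, drho, eps=1e-12):
    """Quantum Fisher information of Eq. (26), given the probe state rho = rho_S(t)
    and its temperature derivative drho = d rho_S / dT (both Hermitian matrices)."""
    r, V = np.linalg.eigh(rho)                 # rho = sum_n r_n |r_n><r_n|
    d = V.conj().T @ drho @ V                  # matrix of <r_m| dT rho |r_n>
    F = 0.0
    for m in range(len(r)):
        for nn in range(len(r)):
            if r[m] + r[nn] > eps:             # skip the kernel of rho
                F += 2.0 * abs(d[m, nn]) ** 2 / (r[m] + r[nn])
    return F

# drho can be obtained by a central finite difference, e.g.
#   drho = (rho_at_temperature(T + dT) - rho_at_temperature(T - dT)) / (2 * dT),
# where rho_at_temperature stands for whatever routine produces rho_S(t) at a
# given T (a hypothetical placeholder name, not a function defined in this paper).
```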
### Ramsey interferometry protocol
Since the impurities become correlated through their interaction with the gas, the SLD is generally a non-local observable that may be difficult to measure. We therefore consider the following thermometry protocol based on Ramsey interferometry, which is feasible in cold-atom systems [22; 31]. To generate the initial product state (5), both of the impurities start in the non-interacting state \(\ket{\downarrow}_{i}\) while the gas is prepared in a thermal state. The impurities may either be left _in situ_ during this preparation phase [22], or transported into the Fermi gas by a moving trap potential [31; 83]. At \(t=0\), a \(\pi/2\) pulse is applied to the impurities, generating the initial state
\[\hat{\rho}_{S}(0)=\ket{+}_{1}\bra{+}\otimes\ket{+}_{2}\bra{+}. \tag{28}\]
The system is then left to decohere in contact with the gas for a time \(t\), giving rise to a state \(\hat{\rho}_{S}(t)\) which is a function of all four decoherence functions discussed in Sec. III. Finally, another \(\pi/2\) pulse is applied with a phase \(\phi\) relative to the first pulse, and then the population of the qubit energy eigenstates is immediately measured projectively, e.g. by using laser pumping to induce state-dependent fluorescence.
In experiments with multiple impurities embedded in ultracold atomic gases, it is typically not possible to address the individual impurities during the measurement. Instead, only the total measurement signal is accessible, e.g. in Ref. [31] the total number of impurities left in the ground state after the final \(\pi/2\)-pulse is counted. This is equivalent to measuring the expectation value of the observable
\[\hat{O}(\phi)=\sum_{i=1}^{2}\left(\hat{\sigma}_{i}^{\parallel}(\phi)+\hat{ \sigma}_{i}^{\perp}(\phi)\right), \tag{29}\]
with \(\hat{\sigma}_{i}^{\parallel}=\cos\phi\hat{\sigma}_{i}^{x}+\sin\phi\hat{ \sigma}_{i}^{y}\) and \(\hat{\sigma}_{i}^{\perp}=\sin\phi\hat{\sigma}_{i}^{x}-\cos\phi\hat{\sigma}_{ i}^{y}\). The temperature uncertainty \(\Delta T\) expected for \(M\gg 1\) measurements of this observable can be quantified by the error propagation formula [95]
\[\Delta T=\frac{\Delta\hat{O}}{\sqrt{M}\partial_{T}\langle\hat{O}\rangle}, \tag{30}\]
where \(\Delta\hat{O}=\sqrt{\langle\hat{O}^{2}\rangle-\langle\hat{O}\rangle^{2}}\) is the standard deviation of the observable and \(\langle\hat{O}\rangle\) is the expectation value in the state \(\hat{\rho}_{S}(t)\). The signal-to-noise ratio is then
\[\frac{T}{\Delta T}=\sqrt{M}\frac{T\partial_{T}\langle\hat{O}\rangle}{\Delta \hat{O}}\equiv\sqrt{M}\mathcal{S}_{T}(\hat{O}) \tag{31}\]
which defines the effective temperature sensitivity \(\mathcal{S}_{T}(\hat{O})\) for measurements of the observable \(\hat{O}\).
The sensitivity \(\mathcal{S}_{T}\) will be our metric of performance in the following. It can be shown that the sensitivity is maximised by measurements of the SLD because [96]
\[\mathcal{S}_{T}(\hat{\Lambda}_{T})=T\sqrt{\mathcal{F}_{T}}, \tag{32}\]
in which case, according to Eq. (31), the temperature error \(\Delta T\) asymptotically saturates the quantum Cramer-Rao bound (23). For local observables of the form in Eq. (29), the sensitivity is generally smaller; however, for any state \(\hat{\rho}_{S}(t)\) we can find the optimal operator \(\hat{O}(\phi)\) to measure by maximising \(\mathcal{S}_{T}\) over \(\phi\). Like the SLD, the optimal \(\phi\) depends on \(T\) and thus some prior knowledge about the temperature is needed for this to work in practice.
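The sensitivity of the Ramsey observable in Eq. (29) can likewise be evaluated numerically from Eqs. (30)-(31), scanning the pulse phase \(\phi\) to find the most informative local measurement. The following sketch assumes that the two-qubit state \(\hat{\rho}_{S}(t)\) and its temperature derivative are supplied externally (e.g. by finite differences); all function names are our own illustrative choices.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def ramsey_observable(phi):
    """O(phi) of Eq. (29) for two qubits."""
    s_par = np.cos(phi) * sx + np.sin(phi) * sy
    s_perp = np.sin(phi) * sx - np.cos(phi) * sy
    local = s_par + s_perp
    return np.kron(local, I2) + np.kron(I2, local)

def sensitivity_ratio(rho, drho, phi):
    """dT<O> / Delta O for the state rho and its temperature derivative drho;
    multiplying by T gives the dimensionless sensitivity S_T of Eq. (31)."""
    O = ramsey_observable(phi)
    mean = np.trace(rho @ O).real
    var = max(np.trace(rho @ O @ O).real - mean ** 2, 1e-30)
    return np.trace(drho @ O).real / np.sqrt(var)

def best_phi(rho, drho, n_grid=400):
    """Brute-force scan of the pulse phase, as a stand-in for the numerical
    optimisation over phi described in the text."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_grid)
    vals = [abs(sensitivity_ratio(rho, drho, p)) for p in phis]
    return phis[int(np.argmax(vals))]
```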
### Thermometric performance and collective advantages
We now discuss how the dynamical features explored in Sec. III affect the performance of our two-qubit thermometer. The blue curves in Fig. 3 show the optimal sensitivity, \(\mathcal{S}_{T}(\hat{\Lambda}_{T})=T\sqrt{\mathcal{F}_{T}}\), as a function of time in various different scenarios for temperature \(T=0.05T_{F}\) and two interaction strengths: \(k_{F}a=-1\) [Fig. 3(a)] and \(k_{F}a=-0.1\) [Fig. 3(b)]. The sensitivity peaks at a particular moment in time, which defines the optimal time to perform the measurement. We see that the peak sensitivity is generally larger at weak system-environment coupling but also occurs at a later time. A similar trade-off between sensitivity and measurement time was found in previous work on thermometry with single impurities [39]. At stronger coupling, the optimal sensitivity exhibits complicated oscillations. These are a consequence of non-Markovian effects induced by the impurities exchanging excitations via the gas, as discussed in Sec. III.
An interesting question is whether bath-induced interactions improve the precision of our two-impurity thermometer. In order to assess this, we compare to the case of two independent impurities with \(\hat{\rho}_{S}(t)=[\hat{\rho}_{1}(t)]^{\otimes 2}\)
where \(\hat{\rho}_{1}(t)\) is the state of a single impurity dephasing in a 1D Fermi gas. That is, the diagonal elements of \(\hat{\rho}_{1}(t)\) are constant while the off-diagonal elements are proportional to \(\nu_{\uparrow\downarrow,\downarrow\downarrow}(t)\). This is equivalent to the situation where the impurity separation is very large, so that \(\tau_{i}\gg\hbar\beta\) and thermal decoherence kicks in before interactions can play any role. The corresponding QFI is additive, \(\mathcal{F}_{T}([\hat{\rho}_{1}(t)]^{\otimes 2})=2\mathcal{F}_{T}(\hat{\rho}_{1})\). Fig. 3(a) shows that the optimal sensitivity of two impurities in a common bath can be markedly larger than the corresponding value for two independent impurities (orange curves in Fig. 3). This effect, which only arises for sufficiently weak coupling and low temperature, indicates that bath-induced interactions yield an advantage for thermometry.
Remarkably, this advantage persists even when restricted to more realistic local observables as in Eq. (29). To show this, we numerically optimize the angle \(\phi\) to find the observable that gives the largest sensitivity \(\mathcal{S}_{T}\) at each point in time. The results, shown by the green curve in Fig. 3, demonstrate that optimal local measurements may yield almost as much temperature information as measurements of the SLD. This is true for a wide range of parameters. Similar effects have been observed previously in the context of equilibrium thermometry [97].
To further elucidate the role of bath-induced interactions, we quantify the correlations that develop between the two impurity probes using the quantum mutual information
\[I(1:2)=S(\hat{\rho}_{1})+S(\hat{\rho}_{2})-S(\hat{\rho}_{S}), \tag{33}\]
where \(\hat{\rho}_{i}\) is the reduced density matrix for impurity \(i\), e.g. \(\hat{\rho}_{1}=\mathrm{tr}_{2}(\hat{\rho}_{S})\), and \(S(\hat{\rho})\) is the von Neumann entropy
\[S(\hat{\rho})=-\mathrm{tr}(\hat{\rho}\ln\hat{\rho}). \tag{34}\]
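Equations (33)-(34) amount to three entropy evaluations of the two-qubit state. A minimal sketch, using our own helper functions and assuming the standard qubit-1 \(\otimes\) qubit-2 ordering of the \(4\times 4\) density matrix, is:

```python
import numpy as np

def von_neumann_entropy(rho, eps=1e-12):
    """S(rho) = -tr(rho ln rho), Eq. (34)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > eps]
    return float(-np.sum(evals * np.log(evals)))

def reduced_state(rho, keep):
    """Reduced density matrix of one qubit from a 4x4 two-qubit state
    (ordering qubit 1 (x) qubit 2): keep = 1 gives rho_1, keep = 2 gives rho_2."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 1 else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho):
    """I(1:2) = S(rho_1) + S(rho_2) - S(rho_S), Eq. (33)."""
    return (von_neumann_entropy(reduced_state(rho, 1))
            + von_neumann_entropy(reduced_state(rho, 2))
            - von_neumann_entropy(rho))
```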
Fig. 4 shows the mutual information of the two probes as a function of time. We see that correlations are strongest in the weak coupling and low temperature regime, which
Figure 3: The signal-to-noise ratio of different measurements for the two impurity qubits separated by \(k_{F}\Delta x/2\pi=5\), temperature \(T=0.05T_{F}\), and interaction strength \(k_{F}a=-1\) (a) and \(k_{F}a=-0.1\) (b). Measurement of the SLD including bath-induced interactions (blue), the SLD of two independent impurities (orange), and the operator of Eq. (29) with the angle \(\phi\) that gives the maximum information (green). We also investigated the effect of starting in one of the Bell states \(|\Psi^{+}\rangle\!\langle\Psi^{+}|\) or \(|\Phi^{+}\rangle\!\langle\Phi^{+}|\) and measuring their respective SLDs (red and purple).
is precisely where the collective advantage to thermometry is greatest. Conversely, for high temperature or strong coupling the correlations quickly vanish.
However, correlations alone are not sufficient to obtain a precision advantage, as the following example demonstrates. Consider preparing the impurities in the maximally entangled Bell states \(\hat{\rho}_{S}(0)=|\Phi^{+}\rangle\!\langle\Phi^{+}|\) or \(\hat{\rho}_{S}(0)=|\Psi^{+}\rangle\!\langle\Psi^{+}|\). Entangled initial states are known to give a precision boost in many metrological settings, e.g. for phase estimation in cold atom systems [98, 99]. For our system, however, this is not the case, as shown by the purple and red curves in Fig. 3. For the \(|\Psi^{+}\rangle\) state, this can be understood because the relevant decoherence function is almost purely real, while much of the temperature information at weak coupling is known to be contained in bath-induced phase shifts [39]. For the initial \(|\Phi^{+}\rangle\) state, phase information is amplified but the super-decoherence effect discussed in Sec. III causes the matrix elements of \(\hat{\rho}_{S}\) to decay too quickly to take advantage of this effect.
In summary, correlations between the impurities are generated by bath-mediated interactions, and these correlations can increase the temperature sensitivity. This effect is strongest at weak coupling and low temperature and, most interestingly, can be exploited using only product-state preparations and local measurements, e.g. via Ramsey interferometry. However, since these bath-mediated interactions vary rapidly with the distance between the impurities, the impurities' positions must be controlled with a precision comparable to the Fermi wavelength [c.f. Eq. (22)] to take advantage of these collective effects. Moreover, finding the optimal measurement and constructing the estimator requires solving the dynamical problem for two impurities, which is significantly more complicated than the single-impurity case. In the following section, therefore, we consider whether one may ignore bath-induced correlations and still obtain a reasonable temperature estimate.
### When can we assume uncorrelated probes?
In principle, our functional determinant approach can be scaled to describe the dynamics of an arbitrary number of probes, \(M\). In practice, however, this becomes impractical due to the large number of different decoherence functions that must be evaluated, as well as numerical instabilities and convergence issues that must be carefully addressed (see Appendix B). This motivates us to ask: under what conditions can the bath-induced correlations be ignored without sacrificing the quality of the temperature estimate? Ignoring them would enable one to treat the impurities as independent, so that only the dynamics of a single impurity need be solved, which is far simpler.
To investigate the error arising in the temperature estimate by assuming the two impurity probes are uncorrelated, we adopt the following procedure:
1. Numerically find the operator \(\hat{O}_{\text{max}}(\phi)\) that gives the most information on temperature as well as the optimal time \(t_{\text{max}}\) to perform the measurement, assuming the impurities are evolving in independent Fermi gases of temperature \(T\). This yields the optimum measurement that an experimenter who is unaware of bath-induced interactions would perform.
2. Evaluate the reduced density matrix for two impurities in a shared bath of temperature \(T\) and evaluate the expectation value \(\langle\hat{O}_{\text{max}}\rangle=\text{tr}\left(\hat{\rho}_{S}(t_{\text{ max}})\hat{O}_{\text{max}}\right)\). This yields the expected result that the unaware experimentalist would measure.
3. Find \(T_{\text{est}}\), the temperature that the unaware experimenter would infer from the result \(\langle\hat{O}_{\text{max}}\rangle\) (assuming \(M\rightarrow\infty\) measurements), using the model of independent impurities.
4. Compare \(T_{\text{est}}\) with \(T\) to infer the error that would be incurred.
The error incurred by assuming uncorrelated probes depends crucially on the dynamics of the system at the measurement time \(t_{\text{max}}\). If \(t_{\text{max}}\ll\tau_{i}\), the impurities remain uncorrelated and \(T_{\text{est}}=T\) to an excellent approximation. Therefore, we instead focus on impurities with a separation such that the interaction time is less than or comparable to \(t_{\text{max}}\), where the effect of bath-induced interactions on the temperature estimate is most significant.
Fig. 5 shows the error in assuming uncorrelated probes for different temperatures for two separations of the impurities. From Section III, we know the effect of the bath-induced impurity interaction is quantified by \(\cos^{2}(2k_{F}\Delta x)\). We consider the two extremal cases \(2k_{F}\Delta x=n\pi\) and \(2k_{F}\Delta x=n\pi/2\) (\(n\in\mathbb{Z}\)), where this interaction is maximal or minimal, respectively. We find that in both cases, and for most temperatures considered, approximating the impurities as independent yields a relative error \(|T-T_{\text{est}}|/T\lesssim 10\%\). However, the error depends in a complicated way on the temperature,
Figure 4: The mutual information between the two impurity probes as a function of time, with \(\Delta x=0.2L\).
because this determines the optimal measurement time \(t_{\text{max}}\). For example, at temperature \(T=0.175T_{F}\) the error for the maximally interacting probes is large because \(t_{\text{max}}\) is close to an integer multiple of \(\tau_{i}\).
These results suggest that, at least for two fixed impurities, achieving the best precision requires an estimator that explicitly accounts for bath-induced correlations. However, one may hope that, when averaging over the signal from many impurities at widely varying positions, the effect of correlations averages out. This presumably depends on the impurities' spatial distribution, but we leave a careful analysis of this problem for \(M>2\) to future work.
## V Discussion & Conclusions
In this work, we have proposed and analysed a thermometry protocol based on the dynamics of two qubit impurities dephasing in a 1D gas of ultracold fermions. We have solved the quantum evolution of the two impurities exactly, including the effect of bath-induced interactions. We have also gained valuable physical intuition into the observed behaviour at different timescales by means of a perturbative cumulant expansion. We have found that certain impurity decoherence functions manifest a retarded RKKY-like interaction, giving rise to sub- and super-decoherent behaviour depending on the initial state, which persists for intermediate times \(\tau_{i}<t<\hbar\beta\) and is eventually washed away by thermal fluctuations.
To understand how the bath-mediated interactions affect the achievable precision, we have compared the thermal sensitivity of our two-qubit thermometer to a pair of impurities interacting with independent environments. We have found that, at low temperatures and weak coupling, bath-induced correlations between the impurities can enhance precision. These results reinforce other recent work showing that bath-mediated interactions can be helpful in the context of low-temperature thermometry [72; 73; 74]. This conclusion is by no means obvious, since correlations between the impurities could also arise from redundant encoding of temperature information, thus reducing the signal-to-noise ratio relative to truly independent measurements [67; 100].
In order to exploit these correlations to the full, one would need to measure the non-local SLD observable. However, we have shown that a simple Ramsey protocol can approach the optimal precision, without the need to individually address the impurities or perform entangling operations. Moreover, we have quantified the systematic error incurred by neglecting correlations altogether. Our results show that this error depends strongly on the impurity separation, but remains on the order of a few percent for most system parameters at not-too-low temperatures. This suggests that it may be an acceptable approximation to neglect bath-induced correlations in most situations, potentially simplifying future thermometry experiments with many impurities embedded in a single copy of the gas.
It is possible to generalize the techniques presented in this paper to investigate other systems, for example higher-dimensional systems. It has recently been suggested that the precision one is able to obtain depends crucially on the spatial dimensionality of the system [101] and that thermometry in higher dimensions could be more effective. However, we also expect bath-induced interactions to be less prominent in higher dimensions because the excitations from the impurity-bath interactions will spread out more. It would be interesting to investigate how the dynamics, as well as the thermometric performance, would change for several impurities decohering in a higher-dimensional Fermi gas. Our approach could also be adapted to explore thermometry in the presence of different confining potentials [21], charged impurities [40], and more exotic environments such as Fermi superfluids [102; 103]. Other properties of these systems beyond temperature, such as transport coefficients [104], could also be extracted using correlated impurities. Moreover, it has been suggested that using multi-dimensional spectroscopy instead of the Ramsey protocol considered in this text could yield extra information [105; 106], and the procedure presented here could be generalised to such scenarios as well.
Finally, it would be interesting to see how thermometric performance scales with the number of impurities \(M\), especially for \(M\gg 1\). A closely related question is how disturbing such a temperature measurement would be in terms of heat absorbed by the environment [107]. Intuitively, one may expect that improved precision comes at the cost of increased measurement backaction, especially if the number of impurities scales extensively with system size. This could be quantified using the recently developed thermodynamic description of decoherence [108; 109].
Figure 5: The relative error when assuming uncorrelated impurity probes as a function of temperature for coupling strength \(k_{F}a=-1\). The blue and orange points are for separations such that \(2k_{F}\Delta x=n\pi\) and \(2k_{F}\Delta x=n\pi/2\), respectively.
###### Acknowledgements.
We are grateful to S. Campbell, G. Mihailescu, and A. K. Mitchell for insightful discussions, and we also thank S. Campbell, R. Onofrio, and A. Purkayastha for useful comments on the manuscript. We acknowledge financial support from a Royal Society-Science Foundation Ireland University Research Fellowship (URF\(\backslash\)R1\(\backslash\)221571).
## Appendix A Solutions to the single-particle Schrödinger equations
In this section, we solve the single-particle Schrödinger equation for a particle in a box, with no, one, or two delta potentials present. We will solve the equations
\[\hat{h}_{\sigma}\psi_{n}^{(\sigma)}=E_{n}^{(\sigma)}\psi_{n}^{(\sigma)}. \tag{10}\]
for each \(\sigma\).
For \(\hat{h}_{\downarrow\downarrow}\), the solution is simply given by
\[\psi_{n}^{(\downarrow\downarrow)}\equiv\psi_{n}=\sqrt{\frac{2}{L}}\sin\left(k_ {n}(x+L/2)\right), \tag{11}\]
and \(E_{n}=\hbar^{2}k_{n}^{2}/2m\), with \(k_{n}=n\pi/L\).
We next consider the case of \(\sigma=\downarrow\uparrow\). The eigenfunctions in this case are given by
\[\psi_{n}^{(\downarrow\uparrow)}=\begin{cases}A_{n}\sin\left(k_{n}^{\prime}(x+ L/2)\right),\;-L/2<x<x_{0}\\ B_{n}\sin\left(k_{n}^{\prime}(x-L/2)\right),\quad L/2>x>x_{0}\end{cases} \tag{12}\]
with energy
\[E_{n}^{(\downarrow\uparrow)}\equiv E_{n}^{\prime}=\frac{\hbar^{2}k_{n}^{ \prime 2}}{2m} \tag{13}\]
and \(k_{n}^{\prime}\) is given by the quantization condition
\[\cot\left(k_{n}^{\prime}(x_{0}+L/2)\right)-\cot\left(k_{n}^{\prime}(x_{0}-L/2)\right)=\frac{1}{k_{n}^{\prime}a}. \tag{14}\]
The coefficients \(A_{n}\) and \(B_{n}\) can be found by the normalization requirement.
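Numerically, the quantization condition for \(k_{n}^{\prime}\) is conveniently solved after multiplying through by \(\sin\left(k^{\prime}(x_{0}+L/2)\right)\sin\left(k^{\prime}(x_{0}-L/2)\right)\), which removes the poles of the cotangents (one instance of the rewriting mentioned at the end of this appendix). The sketch below is our own illustration of such a root search, with arbitrary example parameters.

```python
import numpy as np
from scipy.optimize import brentq

def perturbed_momenta(L, x0, a, k_max, n_grid=200000):
    """Roots k' of the single-delta quantization condition, written in the
    pole-free form  f(k) = k a sin(k L) + sin(k (x0 + L/2)) sin(k (x0 - L/2)) = 0.
    Brute-force sign-change scan refined by brentq (our own illustrative solver)."""
    def f(k):
        return k * a * np.sin(k * L) + np.sin(k * (x0 + L / 2)) * np.sin(k * (x0 - L / 2))
    ks = np.linspace(1e-9, k_max, n_grid)
    vals = f(ks)
    crossings = np.flatnonzero(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)
    return np.array([brentq(f, ks[i], ks[i + 1]) for i in crossings])

# Example (arbitrary parameters): compare the lowest perturbed momenta
# with the unperturbed k_n = n pi / L.
L, x0, a = 100.0, 6.0, -2.0 / np.pi      # k_F a = -1 for nbar = 0.5
kp = perturbed_momenta(L, x0, a, k_max=1.0)
print(kp[:5])
print(np.pi / L * np.arange(1, 6))
```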
The solution for \(\sigma=\uparrow\downarrow\) is similar to the one above. We can note immediately that the eigenfunctions should be mirrored versions of Eq. (12) around the center of the box. Indeed we find that
\[\psi_{n}^{(\uparrow\downarrow)}=\begin{cases}(-1)^{-n}B_{n}\sin\left(k_{n}^{ \prime}(x+L/2)\right),\;-L/2<x<-x_{0}\\ (-1)^{-n}A_{n}\sin\left(k_{n}^{\prime}(x-L/2)\right),\quad L/2>x>-x_{0}\end{cases} \tag{15}\]
with the same eigenenergies as above.
For \(\sigma=\uparrow\uparrow\) we treat the even and odd eigenfunctions separately. For \(n\) even we find that the solution can be written
\[\psi_{n}^{(\uparrow\uparrow)}=\begin{cases}C_{n}\sin\left(k_{n}^{\prime\prime }x-\delta_{n}\right),\;-L/2<x<-x_{0}\\ D_{n}\sin\left(k_{n}^{\prime\prime}x\right),\quad-x_{0}<x<x_{0}\\ C_{n}\sin\left(k_{n}^{\prime\prime}x+\delta_{n}\right),\;L/2>x>x_{0}\end{cases} \tag{16}\]
where \(k_{n}^{\prime\prime}=k_{n}-2\delta_{n}/L\) and the quantization condition takes the form
\[\cot(k_{n}^{\prime\prime}x_{0})-\cot(k_{n}^{\prime\prime}x_{0}+\delta_{n})= \frac{2}{k_{n}^{\prime\prime}a}. \tag{17}\]
For \(n\) odd, the eigenfunctions are given by
\[\psi_{n}^{(\uparrow\uparrow)}=\begin{cases}C_{n}\cos\left(k_{n}^{\prime\prime }x-\delta_{n}\right),&-L/2<x<-x_{0}\\ D_{n}\cos\left(k_{n}^{\prime\prime}x\right),&-x_{0}<x<x_{0}\\ C_{n}\cos\left(k_{n}^{\prime\prime}x+\delta_{n}\right),&L/2>x>x_{0}.\end{cases} \tag{18}\]
with \(k_{n}^{\prime\prime}\) the same as above. The quantization condition this time takes the form
\[\tan(k_{n}^{\prime\prime}x_{0}+\delta_{n})-\tan(k_{n}^{\prime\prime}x_{0})= \frac{2}{k_{n}^{\prime\prime}a}. \tag{19}\]
The eigenenergy is in both cases given by
\[E_{n}^{(\uparrow\uparrow)}\equiv E_{n}^{\prime\prime}=\frac{\hbar^{2}k_{n}^{ \prime\prime 2}}{2m}, \tag{20}\]
and the coefficients \(C_{n}\) and \(D_{n}\) can be found by requiring normalized states. It is worth noting that one may run into numerical instabilities when solving for \(k_{n}^{\prime\prime}\) and \(\delta_{n}\). We implement some safety procedures to check the validity of our solutions. The phase shift should never be higher than \(\pi\), and the magnitude of \(\delta_{n}\) can oscillate but should be contained within a decaying envelope function going to zero as \(n\) increases. If these conditions are not satisfied for any given pair \(\delta_{n}\), \(k_{n}^{\prime\prime}\), we find that rewriting the quantization conditions may help in solving the equations.
## Appendix B Computational details for the decoherence functions
In this section, we give some computational details for evaluating eq. (10).
For each temperature \(T\), we need to determine the chemical potential \(\mu\). We do this by solving the equation
\[\operatorname{tr}\hat{n}=N_{s}, \tag{21}\]
with \(N_{s}\) the number of particles in our gas. The trace is computed over a large basis set. In our calculations, we used \(\sim 10^{4}\) basis states.
We go to the thermodynamic limit by increasing the number of particles in the gas \(N_{s}\) while keeping the density \(\bar{n}=N_{s}/L\) fixed until convergence is reached. For the timescales we are considering, we find that using \(N_{s}\) in the range of \(500-1000\) gives good convergence avoiding finite size effects in the time scale we are interested in.
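A sketch of this chemical-potential determination and thermodynamic-limit check (our own illustration, in natural units \(\hbar=m=k_{B}=1\) and with arbitrary example parameters) is:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def chemical_potential(E_levels, N_s, T):
    """Solve tr n = N_s for mu over a large single-particle basis (hbar = m = k_B = 1)."""
    def excess(mu):
        return np.sum(expit(-(E_levels - mu) / T)) - N_s
    return brentq(excess, E_levels[0] - 100 * T, E_levels[-1] + 100 * T)

# Thermodynamic-limit check: increase N_s at fixed density nbar = N_s / L
# and watch mu converge towards its bulk value.
nbar, T_over_TF = 0.5, 0.05
E_F = 0.5 * (np.pi * nbar) ** 2
for N_s in (250, 500, 1000):
    L = N_s / nbar
    k = np.arange(1, 10 * N_s + 1) * np.pi / L        # ~10^4 basis states for the largest N_s
    mu = chemical_potential(0.5 * k ** 2, N_s, T_over_TF * E_F)
    print(N_s, mu / E_F)
```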
We need to calculate the matrix elements of operators like
\[\hat{A}=1-\hat{n}+\hat{n}e^{i\hat{h}_{\uparrow\downarrow}t/\hbar}e^{-i\hat{h}_{\downarrow\uparrow}t/\hbar}. \tag{22}\]
This matrix is in principle infinite-dimensional, but we can get a good approximation by using a finite basis set of size \(N\). We determine \(N\) by requiring
\[\big{|}\sum_{i=1}^{N}f(E_{i})-N_{S}\big{|}<\epsilon, \tag{10}\]
with \(f(E)\) the Fermi-Dirac distribution of the unperturbed gas. We find that using \(\epsilon=10^{-3}\) gives good convergence. In Appendix A we calculated the eigenfunctions and energies of the single-particle operators. We insert the resolution of identity and get the matrix elements of the \(N\times N\) matrix
\[A_{nm}=(1-f(E_{n}))\delta_{nm}+f(E_{n})\sum_{i=1}^{N^{\prime}}\sum_{j=1}^{N^{ \prime}}\sum_{k=1}^{N^{\prime\prime}}e^{i(E_{i}^{\prime}-E_{j}^{\prime})t/ \hbar}\bra{\psi_{n}}\ket{\psi_{i}^{(\uparrow\downarrow)}}\bra{\psi_{i}^{( \uparrow\downarrow)}}\ket{\psi_{k}}\bra{\psi_{k}}\ket{\psi_{j}^{(\downarrow \uparrow)}}\bra{\psi_{j}^{(\downarrow\uparrow)}}\ket{\psi_{m}}. \tag{11}\]
We have to fix the size of the perturbed basis set \(N^{\prime}\). We do this by the requirement of unitarity
\[\sum_{n=1}^{N^{\prime}}\big{|}\bra{\psi_{m}}\ket{\psi_{n}^{(\uparrow\downarrow )}}\big{|}^{2}>1-\epsilon, \tag{12}\]
for any \(m\leq N\). We then determine the size \(N^{\prime\prime}\) of the unperturbed basis set that we also insert by requiring
\[\sum_{n=1}^{N^{\prime\prime}}\big{|}\bra{\psi_{m}^{(\uparrow\downarrow)}}\ket{ \psi_{n}}\big{|}^{2}>1-\epsilon, \tag{13}\]
for any \(m\leq N^{\prime}\). We find that using \(\epsilon\sim 10^{-4}\) yields excellent convergence. In a completely analogous way, we can calculate the other three decoherence functions.
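The truncation sizes \(N^{\prime}\) and \(N^{\prime\prime}\) can be chosen automatically from the overlap matrices using the completeness criteria above; a minimal helper (our own illustration, not the authors' code) is:

```python
import numpy as np

def truncation_size(overlaps, eps=1e-4):
    """Smallest N' such that sum_{n <= N'} |<psi_m|psi_n^(sigma)>|^2 > 1 - eps for
    every retained state m. `overlaps[m, n]` holds <psi_m|psi_n^(sigma)> (or the
    analogous perturbed/unperturbed overlaps when fixing N'')."""
    weights = np.cumsum(np.abs(overlaps) ** 2, axis=1)   # running completeness per row m
    complete = np.all(weights > 1.0 - eps, axis=0)       # columns where every row passes
    idx = np.flatnonzero(complete)
    if idx.size == 0:
        raise ValueError("overlap matrix too small: completeness never reaches 1 - eps")
    return int(idx[0]) + 1
```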
## Appendix C Cumulant expansion for weak coupling and low temperature
In this section, we derive the behavior of the decoherence functions, both in the power-law and thermal regimes. We explicitly go through the derivation for \(\nu_{\uparrow\downarrow,\downarrow\uparrow}(t)\) and state the results for the other three; they can be found in a completely analogous way. Our results are valid in the weak-coupling (\(\alpha\ll 1\), i.e. \(|k_{F}a|\gg 1\)) and low-temperature (\(T/T_{F}\ll 1\)) regime. This calculation closely follows the one discussed in the Supplemental Material of Ref. [39].
The starting point of our derivation is the many-body expectation value of eq. (8) of the main text which for the decoherence function we consider reads
\[\nu_{\uparrow\downarrow,\downarrow\uparrow}(t)=\Big{\langle}e^{i\hat{H}_{ \uparrow\downarrow}t/\hbar}e^{-i\hat{H}_{\downarrow\uparrow}t/\hbar}\Big{\rangle}. \tag{14}\]
We write the Hamiltonians appearing here as \(\hat{H}_{\sigma}=\hat{H}_{0}+\hat{V}_{\sigma}\), with
\[\hat{H}_{0}=\sum_{k}E_{k}c_{k}^{\dagger}c_{k} \tag{15}\]
\[\hat{V}_{\sigma}=\sum_{n,m}V_{nm}^{(\sigma)}c_{n}^{\dagger}c_{m}, \tag{16}\]
where \(c_{n}^{\dagger}\) creates a fermion in the state \(\psi_{n}\) given in eq. (15). The interaction matrix \(V_{nm}^{(\sigma)}\) has elements given by
\[\begin{split} V_{nm}^{(\sigma)}&=\int_{-L/2}^{L/2}\mathrm{d}x\,\psi_{n}(x)\lambda\delta(x\pm x_{0})\psi_{m}(x)\\ &=\frac{2\lambda}{L}\sin\left(k_{n}x_{0}\pm\frac{n\pi}{2}\right)\sin\left(k_{m}x_{0}\pm\frac{m\pi}{2}\right),\end{split} \tag{17}\]
where \(\lambda=-\hbar^{2}/ma\), \(k_{n}=n\pi/L\), and the plus (minus) sign is valid for \(\sigma=\uparrow\downarrow\) (\(\downarrow\uparrow\)).
To simplify notation we will from now on denote \(\nu_{\uparrow\downarrow,\downarrow\uparrow}=|\nu|e^{i\phi}\). We can write this as a time-ordered exponential, which can be expanded in terms of time-ordered cumulants [110]
\[\begin{split}|\nu|e^{i\phi}&=\left\langle\mathcal{T }\exp\bigg{[}\int_{\gamma}\mathrm{d}t^{\prime}\frac{\hat{V}(t^{\prime})}{i \hbar}\bigg{]}\right\rangle\\ &\approx\exp\Bigg{[}\bigg{\langle}\int_{\gamma}\mathrm{d}t^{ \prime}\frac{\hat{V}(t^{\prime})}{i\hbar}\bigg{\rangle}_{c}+\frac{\mathcal{T}}{ 2}\bigg{\langle}\bigg{(}\int_{\gamma}\mathrm{d}t^{\prime}\frac{\hat{V}(t^{ \prime})}{i\hbar}\bigg{)}^{2}\bigg{\rangle}_{c}\Bigg{]},\end{split} \tag{18}\]
where the symbol \(\mathcal{T}\) denotes time ordering on the curve \(\gamma\) shown in Fig. 6, such that operators on the branch \(t_{+}\) occur at a later time than operators on the \(t_{-}\) branch. The integration is performed over this same curve. \(\langle\bullet\rangle\) denotes the thermal expectation value with respect to the initial thermal state, and \(\langle\bullet\rangle_{c}\) is the corresponding
Figure 6: The contour \(\gamma\) on which the time-ordering needs to be performed. In the time-ordering, operators on the lower branch are later than operators on the upper branch.
cumulant. The operator \(\hat{V}(t)\) in the equation above takes the form
\[\hat{V}(t)=\begin{cases}\hat{V}_{\uparrow\downarrow}(t),&t\in t_{-}\\ \hat{V}_{\downarrow\uparrow}(t),&t\in t_{+}\end{cases} \tag{101}\]
on the two branches of the curve \(\gamma\). The operators are written in the interaction picture such that \(\hat{V}_{\sigma}(t)=e^{i\hat{H}_{0}t/\hbar}\hat{V}_{\sigma}e^{-i\hat{H}_{0}t/\hbar}\). In the second line of Eq. (100) we have neglected terms of order \(\mathcal{O}(\hat{V}^{3})\).
By using the thermal expectation value \(\left\langle c_{n}^{\dagger}c_{m}\right\rangle=f(E_{n})\delta_{nm}\), with \(f(E)\) being the Fermi-Dirac distribution, we find that the first cumulant is
\[\begin{split}&\int_{\gamma}\mathrm{d}t^{\prime}\frac{\left\langle\hat{V}(t^{\prime})\right\rangle_{c}}{i\hbar}=\int_{0}^{t}\mathrm{d}t^{\prime}\frac{\left\langle\hat{V}_{\uparrow\downarrow}(t^{\prime})\right\rangle_{c}}{i\hbar}+\int_{t}^{0}\mathrm{d}t^{\prime}\frac{\left\langle\hat{V}_{\downarrow\uparrow}(t^{\prime})\right\rangle_{c}}{i\hbar}\\ &=\frac{t}{i\hbar}\sum_{n}f(E_{n})\left(V_{nn}^{(\uparrow\downarrow)}-V_{nn}^{(\downarrow\uparrow)}\right)=0.\end{split} \tag{102}\]
To capture the decay of the decoherence function we calculate the second cumulant. We use the standard relation for Gaussian fermionic states \(\left\langle c_{n}^{\dagger}c_{m}c_{k}^{\dagger}c_{l}\right\rangle_{c}=f(E_{ n})[1-f(E_{m})]\delta_{nl}\delta_{mk}\) to get
\[\begin{split}&\left\langle\hat{V}_{\sigma}(t)\hat{V}_{\sigma^{ \prime}}(t^{\prime})\right\rangle_{c}\\ &=\sum_{n,m}V_{nm}^{(\sigma)}V_{mn}^{(\sigma^{\prime})}e^{i(E_{n} -E_{m})(t-t^{\prime})/\hbar}f(E_{n})[1-f(E_{m})].\end{split} \tag{103}\]
The terms with \(n=m\) contribute a term proportional to \(t^{2}\); however, this contribution vanishes as \(L^{-1}\) and can be neglected in the thermodynamic limit. We then use this result to compute the second cumulant. After integrating over the contour \(\gamma\), being careful with the time ordering, we find that
\[\begin{split}&-\Gamma(t)\equiv-\frac{\mathcal{T}}{2}\bigg{\langle} \bigg{(}\int_{\gamma}\mathrm{d}t^{\prime}\frac{\hat{V}(t^{\prime})}{i\hbar} \bigg{)}^{2}\bigg{\rangle}_{c}\\ &=\sum_{n\neq m}f(E_{n})[1-f(E_{m})]\frac{1-\cos[(E_{n}-E_{m})t /\hbar]}{(E_{n}-E_{m})^{2}}V_{nm}^{2},\end{split} \tag{104}\]
where we have defined \(V_{nm}=V_{nm}^{(\uparrow\downarrow)}-V_{nm}^{(\downarrow\uparrow)}\). Note that Eq. (104) is a real number, so the decoherence function is real at least to second order in \(\hat{V}\). This is unique to \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\), and will not hold true for the other decoherence functions.
To simplify Eq. (104), we introduce the spectral density
\[\begin{split} J(\omega)&=\frac{1}{\hbar}\sum_{n,m }V_{nm}^{2}f(E_{n})[1-f(E_{m})]\delta(\hbar\omega+E_{n}-E_{m})\\ &=\frac{1}{\hbar}\int_{0}^{\infty}\mathrm{d}Ef(E)[1-f(E+\hbar \omega)]\\ &\times\sum_{n\neq m}V_{nm}^{2}\delta(E-E_{n})\delta(\hbar\omega +E-E_{m})\end{split} \tag{105}\]
representing the coupling strength to particle-hole excitations weighted by the finite temperature density of states. The decoherence rate then takes the form
\[\Gamma(t)=-\fint_{-\infty}^{\infty}\mathrm{d}\omega\frac{1-\cos(\omega t)}{ \omega^{2}}J(\omega), \tag{106}\]
where \(\fint\) denotes the principal value integral excluding \(\omega=0\). To make further progress we take the continuum limit, introducing the s-wave density of states
\[D_{s}(E)=\frac{1}{L}\sum_{n}\delta(E-E_{n})=\frac{1}{\pi\hbar}\sqrt{\frac{m}{2E}} \tag{107}\]
valid when \(L\to\infty\). We replace \(V_{nm}\) by \(V(E)\) by letting
\[k_{n}\to k(E)=\sqrt{\frac{2mE}{\hbar^{2}}}. \tag{108}\]
Doing this simplifies the spectral density to take the form
\[\begin{split} J(\omega)=\frac{2\lambda^{2}}{\hbar}\int_{0}^{ \infty}&\mathrm{d}Ef(E)[1-f(E+\hbar\omega)]\\ &\times D_{s}(E)D_{s}(E+\hbar\omega)g(E,\omega).\end{split} \tag{109}\]
where we have introduced the function \(g\) capturing the effect of the impurities on the gas. We can write it as
\[g(E,\omega)=1-\cos(2k(E)x_{0})\cos(2k(E+\hbar\omega)x_{0}). \tag{110}\]
We are interested in low temperatures such that \(T\ll T_{F}\) and \(\mu\approx E_{F}\). For \(\hbar\omega\leq-E_{F}\), \(J(\omega)\) is exponentially suppressed. If we momentarily ignore \(g(E,\omega)\) in Eq. (109), we see that \(J(\omega)\sim\sqrt{\omega\tau_{F}}\) for \(\hbar\omega\geq E_{F}\). Going back to Eq. (106), such contributions can be neglected in \(\Gamma(t)\). Including \(g(E,\omega)\) will only make \(J(\omega)\) grow more slowly as a function of \(\omega\), and thus we can restrict ourselves to low frequencies \(\hbar|\omega|\ll E_{F}\). In this regime, the function \(f(E)[1-f(E+\hbar\omega)]\) is sharply peaked around \(E=E_{F}\), and we may replace \(\sqrt{E(E+\hbar\omega)}\to\sqrt{E_{F}(E_{F}+\hbar\omega)}\approx E_{F}\). In \(g(E,\omega)\) we make the replacement \(E\to E_{F}\) and perform a series expansion in \(\hbar\omega/E_{F}\) in \(k(E+\hbar\omega)\). We also introduce the interaction time \(\tau_{i}=2x_{0}/v_{F}\), as discussed in the main text, to get the expression
\[g(\omega)=1-\cos(2\tau_{i}/\tau_{F})\cos(2\tau_{i}/\tau_{F}+\tau_{i}\omega). \tag{111}\]
The remaining integral in Eq. (109) can then be computed to yield
\[J(\omega)=\frac{1}{2}\alpha\omega g(\omega)\left[1+\coth(\hbar\beta\omega/2) \right], \tag{112}\]
which is valid for \(|\omega|<\Lambda\), where \(\Lambda\sim E_{F}/\hbar\) is a cutoff frequency. Here, we have introduced the dimensionless coupling strength \(\alpha=(\pi k_{F}a)^{-2}\).
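Before proceeding analytically, the expression above can be checked numerically. The sketch below evaluates the integral for \(\Gamma(t)\) on a frequency grid using the low-frequency form of \(J(\omega)\); the grid size and the sharp cutoff at \(|\omega|=\Lambda\) are illustrative numerical choices of this sketch.

```python
import numpy as np
from scipy.integrate import trapezoid

def gamma_of_t(t, alpha, beta, tau_i, tau_F, Lam, hbar=1.0, n_grid=20_000):
    """Evaluate Gamma(t) numerically from the low-frequency spectral density J(w)."""
    # An even number of points keeps w = 0 off the grid; the integrand is
    # regular there, but this avoids a 0/0 in floating-point arithmetic.
    w = np.linspace(-Lam, Lam, n_grid)
    g = 1.0 - np.cos(2 * tau_i / tau_F) * np.cos(2 * tau_i / tau_F + tau_i * w)
    J = 0.5 * alpha * w * g * (1.0 + 1.0 / np.tanh(hbar * beta * w / 2.0))
    integrand = (1.0 - np.cos(w * t)) / w**2 * J
    return -trapezoid(integrand, w)
```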
Going back to Eq. (106), we have to calculate the following integral
\[\begin{split}\Gamma(t)=&-\frac{\alpha}{2}\fint_{- \Lambda}^{\Lambda}\mathrm{d}\omega\frac{1-\cos\omega t}{\omega}g(\omega)[1+\coth( \hbar\beta\omega/2)]\\ =&-\frac{\alpha}{2}\fint_{-\infty}^{\infty}\mathrm{d} \omega\frac{1-\cos\omega t}{\omega}g(\omega)[1+\coth(\hbar\beta\omega/2)]\\ &+\alpha\int_{\Lambda}^{\infty}\mathrm{d}\omega\frac{1-\cos \omega t}{\omega}g(\omega),\end{split} \tag{103}\]
where in the third line we have used that \(\coth(\hbar\beta\omega/2)\approx\mathrm{sign}(\omega)\) for \(|\omega|>\Lambda\), which is valid as long as \(\hbar\beta\Lambda\gg 1\). The two integrals in the second equality of Eq. (103) can be evaluated by writing the cosines as complex exponentials and we end up with 9 terms of the form
\[-\frac{\alpha}{2}\fint_{-\infty}^{\infty}\mathrm{d}\omega\, e^{iy\omega}[1+\coth(\hbar\beta\omega/2)]/\omega+\alpha\int_{\Lambda}^{\infty}\mathrm{d}\omega\, e^{iy\omega}/\omega. \tag{104}\]
For the first of these integrals, we extend \(\omega\) to the complex plane, and depending on the sign of \(y\) we close the contour in either the upper or lower half plane, being careful going around the origin. We then employ the residue theorem, and end up with a geometric series over the residues at the poles \(\omega=i\omega_{n}\), with \(\omega_{n}=2n\pi/\hbar\beta\) the bosonic Matsubara frequencies. Here \(n\) runs over \(n=\pm 1,\pm 2,\dots\), where the + (-) sign is valid if \(y>0\) (\(y<0\)), such that the integrand vanishes on the edge of the contour.
The second integral in Eq. (104) can be explicitly evaluated by adding a small imaginary part to \(y\to y+i\epsilon\) and taking the limit \(\epsilon\to 0\). The integral will be proportional to the incomplete gamma function \(\Gamma(0,iy)\). Combining this we obtain the following expression for the decoherence rate
\[\begin{split}&\Gamma(t)=-2\alpha\left[\ln\left(\frac{\hbar\Lambda\beta}{\pi}\sinh\left(\frac{\pi t}{\hbar\beta}\right)\right)-\mathrm{Ci}(\Lambda t)+\gamma_{E}\right]\\ &-\alpha\cos^{2}\frac{2\tau_{i}}{\tau_{F}}\ln\frac{\left(1-e^{-\frac{2\pi\tau_{i}}{\hbar\beta}}\right)^{2}}{\left(1-e^{-\frac{2\pi(t+\tau_{i})}{\hbar\beta}}\right)\left(1-e^{-\frac{2\pi|t-\tau_{i}|}{\hbar\beta}}\right)}\\ &+\alpha\cos^{2}\frac{2\tau_{i}}{\tau_{F}}\big{[}2\mathrm{Ci}(\tau_{i}\Lambda)-\mathrm{Ci}((t+\tau_{i})\Lambda)-\mathrm{Ci}(|t-\tau_{i}|\Lambda)\big{]}\\ &+\alpha\cos\frac{2\tau_{i}}{\tau_{F}}\sin\frac{2\tau_{i}}{\tau_{F}}\big{[}2\mathrm{Si}(\tau_{i}\Lambda)-\mathrm{Si}((t+\tau_{i})\Lambda)-\mathrm{Si}(|t-\tau_{i}|\Lambda)\big{]},\end{split} \tag{105}\]
where \(\mathrm{Ci}\) and \(\mathrm{Si}\) denote the cosine and sine integrals, \(\gamma_{E}\) is the Euler constant and we have used the relation
\[\begin{split}& e^{i\theta}\Gamma(0,ix)+e^{-i\theta}\Gamma(0,-ix) \\ &=-2\cos\theta\mathrm{Ci}(x)-2\sin\theta(\mathrm{Si}(x)-\pi/2). \end{split} \tag{106}\]
We are interested in how the decoherence behaves in time, so all constant terms will be neglected. The terms proportional to \(\mathrm{Ci}(\Lambda t)\), \(\mathrm{Ci}(\Lambda(t+\tau_{i}))\) and \(\mathrm{Si}(\Lambda(t+\tau_{i}))\) are oscillating, but will quickly become negligible compared to the other terms. The terms proportional to \(\mathrm{Ci}(|t-\tau_{i}|\Lambda)\) and \(\mathrm{Si}(|t-\tau_{i}|\Lambda)\) regulate the solution at \(t=\tau_{i}\). If these terms did not appear, the decoherence function would go to zero at this point. We are left with a decoherence function of the following form
\[\begin{split}&\nu_{\uparrow\downarrow,\downarrow\uparrow}(t)\propto\left[\frac{\hbar\beta}{\pi\tau_{F}}\sinh\frac{\pi t}{\hbar\beta}\right]^{-2\alpha}\\ &\times\left[\frac{\left(1-e^{-\frac{2\pi\tau_{i}}{\hbar\beta}}\right)^{2}}{\left(1-e^{-\frac{2\pi(t+\tau_{i})}{\hbar\beta}}\right)\left(1-e^{-\frac{2\pi|t-\tau_{i}|}{\hbar\beta}}\right)}\right]^{-\alpha\cos^{2}\frac{2\tau_{i}}{\tau_{F}}}.\end{split} \tag{107}\]
Here, we got rid of the cut-off dependence by using that \((\hbar\Lambda/E_{F})^{-2\alpha}\sim 1\).
We find three regimes of the decoherence function. Firstly, if \(t\ll\tau_{i}\ll\hbar\beta\), we have algebraic decoherence with exponent \(2\alpha\), i.e. \(\nu_{\uparrow\downarrow,\downarrow\uparrow}(t)\propto t^{-2\alpha}\). This is the same as for two independently decohering impurities. If instead \(\tau_{i}\ll t\ll\hbar\beta\), the exponent changes to \(2\alpha(1-\cos(4k_{F}x_{0}))\). Thus the bath-induced interaction leads to suppressed decoherence. This effect is known as sub-decoherence [89; 90], and will be discussed more in the next paragraph. As the impurities are only able to produce low-energy, long-wavelength excitations, the decoherence slows down. Finally, for \(t\gg\hbar\beta\) the decoherence is exponential, \(\nu_{\uparrow\downarrow,\downarrow\uparrow}(t)\propto e^{-\gamma t}\) with \(\gamma=\frac{2\alpha\pi}{\hbar\beta}\).
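The sketch below evaluates this closed-form expression (up to the overall constant) and can be used to visualize the three regimes. Note that, as discussed above, the formula is unregulated at \(t=\tau_{i}\) once the cosine- and sine-integral terms are dropped, so that point should be avoided on a time grid.

```python
import numpy as np

def nu_updown_downup(t, alpha, beta, tau_i, tau_F, hbar=1.0):
    """Low-T, weak-coupling estimate of nu_{ud,du}(t), up to a normalization constant."""
    thermal = (hbar * beta / (np.pi * tau_F)
               * np.sinh(np.pi * t / (hbar * beta))) ** (-2 * alpha)
    num = (1.0 - np.exp(-2 * np.pi * tau_i / (hbar * beta))) ** 2
    den = ((1.0 - np.exp(-2 * np.pi * (t + tau_i) / (hbar * beta)))
           * (1.0 - np.exp(-2 * np.pi * np.abs(t - tau_i) / (hbar * beta))))
    retarded = (num / den) ** (-alpha * np.cos(2 * tau_i / tau_F) ** 2)
    return thermal * retarded
```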
To see why the sub-decoherence occurs for \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) it is useful to rewrite the interaction Hamiltonian in the Bell-basis. It can easily be checked that the interaction Hamiltonian can be written as
\[\begin{split}\frac{\hat{H}_{I}}{\lambda}&=\frac{1}{2 }\hat{I}\otimes(\hat{n}_{1}+\hat{n}_{2})\\ &+\frac{1}{2}\left(\big{|}\Phi^{+}\big{\rangle}\!\big{\langle}\Phi ^{-}\big{|}+h.c.\right)\otimes(\hat{n}_{1}+\hat{n}_{2})\\ &+\frac{1}{2}\left(\big{|}\Psi^{+}\big{\rangle}\!\big{\langle}\Psi ^{-}\big{|}+h.c.\right)\otimes(\hat{n}_{1}-\hat{n}_{2}),\end{split} \tag{108}\]
where \(\hat{n}_{1}\) and \(\hat{n}_{2}\) are density operators for the gas at positions \(\pm x_{0}\). It is clear that decoherence functions like \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) couple to density differences in the gas. At long times, only low-frequency excitations are relevant for the decoherence function: in particular, at time \(t\) only excitations with frequencies \(\omega\lesssim 1/t\) contribute significantly. Low frequency generally entails long wavelength, and density excitations with wavelengths much greater than the impurity separation are not able to create an appreciable density difference at the positions of the impurities. In particular, excitations close to the Fermi surface have a wavelength \(\lambda\sim 2\pi v_{F}/\omega\), and therefore the decoherence signal slows down significantly when \(t\gtrsim\tau_{i}\) after which time all relevant wavelengths exceed \(\Delta x\).
Following the approach outlined in this section, we can also compute the behavior of the other decoherence functions. Here, we just state the results. For the decoherence
function \(\nu_{\uparrow\downarrow,\downarrow\downarrow}\), the behavior is that of a single qubit in a thermal bath. For short times \(t\ll\hbar\beta\) we have algebraic decay with exponent \(\alpha\), while for long times \(t\gg\hbar\beta\) we have exponential decay \(\sim e^{-\alpha\pi t/\hbar\beta}\).
For the decoherence function \(\nu_{\uparrow\uparrow,\downarrow\downarrow}\), we have the same behavior as \(\nu_{\uparrow\downarrow,\downarrow\uparrow}\) for short times, that is, algebraic decay with exponent \(2\alpha\), but in the intermediate regime \(\tau_{i}\ll t\ll\hbar\beta\) the exponent changes to \(2\alpha(1+\cos(4k_{F}x_{0}))\). As can be seen from the Bell-basis decomposition of the interaction Hamiltonian above, this decoherence function couples to the sum of the gas densities at the two impurity positions, so that even low-frequency, long-wavelength excitations contribute, leading to the phenomenon of super-decoherence. Finally, in the limit \(t\gg\hbar\beta\) the decoherence is exponential \(\sim e^{-2\alpha\pi t/\hbar\beta}\).
Finally, for \(\nu_{\uparrow\uparrow,\uparrow\downarrow}\), the low-\(T\) weak coupling expansion breaks down. The derivation of the cumulant expansion of \(\nu_{\uparrow\uparrow,\uparrow\downarrow}\) will be almost identical to that given above, the only change being that \(\hat{V}^{(\downarrow\uparrow)}\rightarrow\hat{V}^{(\uparrow\uparrow)}\), where
\[\hat{V}^{(\uparrow\uparrow)}_{mn}=\hat{V}^{(\uparrow\downarrow)}_{mn}+\hat{ V}^{(\downarrow\uparrow)}_{mn} \tag{102}\]
are the matrix elements of the interaction Hamiltonian when both impurities are in the state \(\ket{\uparrow}\). Looking at Eq. (100), this means that the \(V_{nm}\) that appears will change into \(V_{nm}=V^{(\uparrow\downarrow)}_{nm}-V^{(\uparrow\uparrow)}_{nm}=-V^{( \downarrow\uparrow)}_{mn}\). This in turn means that the effect of the first impurity interacting with the gas cancels out, and we end up with essentially the same behavior for \(\nu_{\uparrow\uparrow,\uparrow\downarrow}\) as for \(\nu_{\uparrow\downarrow,\downarrow\downarrow}\), i.e. a single impurity interacting with a bath of fermions. In order to capture the slowing down of the decoherence observed in Fig. 2 of the main text, we need to consider higher-order cumulants.
|
2305.06129 | Do code refactorings influence the merge effort? | In collaborative software development, multiple contributors frequently
change the source code in parallel to implement new features, fix bugs,
refactor existing code, and make other changes. These simultaneous changes need
to be merged into the same version of the source code. However, the merge
operation can fail, and developer intervention is required to resolve the
conflicts. Studies in the literature show that 10 to 20 percent of all merge
attempts result in conflicts, which require the manual developer's intervention
to complete the process. In this paper, we concern about a specific type of
change that affects the structure of the source code and has the potential to
increase the merge effort: code refactorings. We analyze the relationship
between the occurrence of refactorings and the merge effort. To do so, we
applied a data mining technique called association rule extraction to find
patterns of behavior that allow us to analyze the influence of refactorings on
the merge effort. Our experiments extracted association rules from 40,248 merge
commits that occurred in 28 popular open-source projects. The results indicate
that: (i) the occurrence of refactorings increases the chances of having merge
effort; (ii) the more refactorings, the greater the chances of effort; (iii)
the more refactorings, the greater the effort; and (iv) parallel refactorings
increase even more the chances of having effort, as well as the intensity of
it. The results obtained may suggest behavioral changes in the way refactorings
are implemented by developer teams. In addition, they can indicate possible
ways to improve tools that support code merging and those that recommend
refactorings, considering the number of refactorings and merge effort
attributes. | Andre Oliveira, Vania Neves, Alexandre Plastino, Ana Carla Bibiano, Alessandro Garcia, Leonardo Murta | 2023-05-10T13:24:59Z | http://arxiv.org/abs/2305.06129v1 | # Do code refactorings influence the merge effort?
###### Abstract
In collaborative software development, multiple contributors frequently change the source code in parallel to implement new features, fix bugs, refactor existing code, and make other changes. These simultaneous changes need to be merged into the same version of the source code. However, the merge operation can fail, and developer intervention is required to resolve the conflicts. Studies in the literature show that 10 to 20 percent of all merge attempts result in conflicts, which require manual intervention by developers to complete the process. In this paper, we focus on a specific type of change that affects the structure of the source code and has the potential to increase the merge effort: code refactorings. We analyze the relationship between the occurrence of refactorings and the merge effort. To do so, we applied a data mining technique called association rule extraction to find patterns of behavior that allow us to analyze the influence of refactorings on the merge effort. Our experiments extracted association rules from 40,248 merge commits that occurred in 28 popular open-source projects. The results indicate that: (i) the occurrence of refactorings increases the chances of having merge effort; (ii) the more refactorings, the greater the chances of effort; (iii) the more refactorings, the greater the effort; and (iv) parallel refactorings further increase the chances of having effort, as well as its intensity. The results obtained may suggest behavioral changes in the way refactorings are implemented by developer teams. In addition, they can indicate possible ways to improve tools that support code merging and those that recommend refactorings, considering the number of refactorings and merge effort attributes.
Software Merge, Merge Effort, Refactoring, Association Rules, Data Mining.
## I Introduction
Developers frequently change the same source code in parallel during the software development process due to time-to-market pressure. Eventually, these parallel changes need to be merged. Previous work reported that 10 to 20 percent of all merges fail [1, 2], with some projects experiencing rates of almost 50 percent [1, 3]. The effort for merging parallel changes might be high due to various factors, such as the need to resolve conflicts. Over the years, many merge conflict resolution techniques have been developed, such as those described by Mens [4] and Apel et al. [5]. These techniques differ considerably in how they compare two artifact versions and how they resolve merge conflicts. There are many proposals for approaches that seek to resolve these conflicts in an automated or semi-automated way [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. Despite this, developers often need to intervene in conflicts that cannot be resolved automatically, demanding manual effort.
Code refactoring is a widely used practice to improve software modularity and, as such, facilitate parallel code changes [17]. Refactoring is a code change that modifies the software's internal structure, and the resulting code is expected to bring various benefits in the long term [18]. However, one should understand the short-term effort implications of applying code refactorings in collaborative software development. Otherwise, developers may blindly apply code refactorings that increase merge effort; for instance, they might perform refactorings that lead to conflicts later, potentially increasing merge effort.
Some pieces of work in the literature [19, 20, 21, 22, 23] already presented studies analyzing the effects of refactorings and code merging. Some of them [19, 20, 22] have proposed tools capable of identifying a small subset of refactorings before performing the code merge. The purpose of these studies is to identify the most common refactorings, such as renaming and moving code, to assist developers in making decisions before performing the code merge. Mahmoudi and Nadi [21] investigated and reported the most common types of refactorings that occur in practice and analyzed the possibility of having automated support for merging them but they have not developed corresponding tools. Mahmoudi et al. [23] carried out an empirical study to assess the relationship between 15 types of refactorings and the occurrence of merge conflicts. As a result, they found that 22% of merge conflicts involve refactorings and concluded that these conflicts are more complex than those without refactorings. Moreover, they concluded that 11% of conflicting regions have at least one refactoring involved.
Nevertheless, these studies have not investigated the relationship between the occurrence of refactorings and the practical effort to perform the merge operation. Moreover, studies that quantify the intensity of this relationship have not been carried out either. Another aspect that has not been analyzed concerns where refactorings are implemented in the branches of a merge commit, for example, whether their simultaneous occurrence in both branches generates more or less merge effort. In addition, they only considered a limited subset of refactoring types. Fowler's catalog [18] describes an expressive set of different kinds of refactorings.
Our research analyzes, from a different perspective, the relationship between i) the occurrence of refactorings in the branches of a merge commit and ii) the effort to merge the branches. We focus on evaluating the effort required for performing the merge operation instead of analyzing the areas of conflict involving refactorings and their size, as proposed by Mahmoudi et al. [23]. We adopted a descriptive data mining technique called association rule extraction to understand and quantify this relationship. We applied this technique to analyze how much the presence of refactorings influences the chances and the intensity of effort during the merge.
Our experiments were carried out with data collected from 28 open-source projects hosted on GitHub and considering 33 different types of refactoring. More specifically, we seek to answer the following research questions, detailed in Section III:
**RQ1**: Does the occurrence of refactorings in the branches increase the chances of merge effort?
**RQ2**: Does the amount of refactorings in the branches increase the chances of merge effort?
**RQ3**: Does the amount of refactorings in the branches increase the intensity of merge effort?
When evaluating the merge of branches, most papers in the literature only indicate whether the merge fails or, at most, count the number of conflicting chunks. However, the main issue regarding merging is not whether they fail or have many conflicts but how hard it is to fix them. In this paper, we consider the effort performed during merge resolution. To do so, we adopted code churn as a surrogate for the merge effort. According to the literature [24], code churn may be a reasonable alternative for code maintenance effort, with a significant moderate to strong Spearman correlation of 0.59 to 0.66 between both metrics for corrective and evolutive maintenance, respectively.
During the research, we identified that 7.1% of the merge commits failed; that is, it took some effort to resolve them. We emphasize that this percentage is comparable with the literature (from 10% to 20%, as previously mentioned). This range may appear small at first glance, but merge is a frequent operation (e.g., in our dataset, 1 in every 10.6 commits is a merge commit), and in some projects of our sample, merge fails multiple times every month. Moreover, some of these merge failures are cumbersome for the developers, negatively impacting their productivity and potentially affecting the software quality [25, 26].
We found that the occurrence of refactorings can increase by 24% the chances of merge effort. Moreover, as the number of refactorings in the branches of a merge commit increases, the chances of merge effort also increase. For instance, for merges with hundreds or more refactorings, the chances of having effort increase by 143%. In addition, we identified that the number of refactorings in the branches of a merge also influences the intensity of merge effort. We noticed that, for merges with hundreds or more refactorings, the chances of having hundreds or more lines of code changed during the merge increase by 232%. Furthermore, these percentages tend to be even higher when refactorings co-occur in both branches of the merge commit. In this case, the chances of having effort increase by 114%. Likewise, when there are many refactorings (hundreds or more) in both branches, this percentage of having effort is even higher: 308%. In this scenario with many refactorings, we also observed a 751% increase in the chances of the effort intensity being high.
Our results may point system managers and developers toward alternative ways of using branches to implement refactorings, aiming to avoid or minimize the implementation of refactorings in parallel. From the tool builders' perspective, our findings may indicate the need for new merge strategies, considering the number of refactorings implemented in both branches and the predicted merge effort level. This approach could signal the most appropriate moment to perform the merge based on a specific threshold of refactorings that may decrease the chances of high merge effort. Furthermore, our results may encourage more specific studies on which types of refactorings tend to generate more merge effort. We will discuss these possible implications in more detail in Section III-D.
The remainder of this paper is organized as follows. Section II presents the research process adopted in our work, the dataset used in the experiments, and an overview of the employed techniques. Then, Section III shows the obtained results and discusses their implications, answering the research questions. Section IV discusses the threats to the validity of this study. In Section V, we present a discussion about related pieces of work. Finally, Section VI concludes this work by highlighting our main contributions and presenting perspectives for future work.
## II Materials and Methods
The experimental process of our work can be divided into three phases, as shown in Figure 1. In the first phase, we defined the criteria for selecting the projects analyzed in our experiments, as detailed in Section II-A. The second phase aims to collect information about merge branches and refactorings, and compute the merge effort. This phase is described in Section II-B. In the last phase, presented in Section II-C, we employed a data mining technique named association rules extraction to answer the research questions raised in Section I. Applying this technique, we can discover hidden information in the analyzed data that could have been ignored if a manual exploratory analysis had been conducted. The complete experimental package, containing the data and scripts used in this research, and the reproducibility instructions, is available at [https://github.com/gems-uff/refactoring-merge](https://github.com/gems-uff/refactoring-merge).
### **Project Corpus**
When selecting the corpus, we aimed at mature and relevant open-source projects hosted on GitHub. We first used the GitHub GraphQL API (v4)1 to search for all public repositories that were not forks of other repositories, had at least
5,000 stars, were not archived, and received at least one push in the last three months. According to Kalliamvakou et al. [27], avoiding forks is essential to guarantee that the corpus contains only one repository per project. However, we are aware of forks that are much more successful than the original forked projects. To ensure that we were not excluding one of these projects, we checked for forked projects that met our selection criteria and found none. Moreover, restricting the number of stars to at least 5,000 guarantees that our corpus contains just relevant and popular repositories [28]. Finally, avoiding archived repositories or repositories that did not receive pushes in the last three months ensures a certain degree of activity in all repositories of our corpus. This search was performed on September 20, 2021, and returned 3,201 repositories.
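For illustration, a minimal Python sketch of this repository search is given below. The endpoint, search qualifiers, and field names follow standard GitHub GraphQL API usage, but the exact query string, date, and pagination strategy are assumptions of this sketch; in practice, the search must also be partitioned (e.g., by star ranges) because the API caps the number of results returned per search.

```python
import requests

GITHUB_GRAPHQL = "https://api.github.com/graphql"
TOKEN = "<personal-access-token>"   # placeholder

QUERY = """
query ($cursor: String) {
  search(query: "stars:>=5000 fork:false archived:false pushed:>=2021-06-20",
         type: REPOSITORY, first: 100, after: $cursor) {
    repositoryCount
    pageInfo { endCursor hasNextPage }
    nodes { ... on Repository { nameWithOwner stargazerCount } }
  }
}
"""

def search_repositories():
    """Page through the repository search results (illustrative criteria, see text)."""
    repos, cursor = [], None
    while True:
        resp = requests.post(
            GITHUB_GRAPHQL,
            json={"query": QUERY, "variables": {"cursor": cursor}},
            headers={"Authorization": f"bearer {TOKEN}"},
        )
        resp.raise_for_status()
        result = resp.json()["data"]["search"]
        repos.extend(node["nameWithOwner"] for node in result["nodes"])
        if not result["pageInfo"]["hasNextPage"]:
            return repos
        cursor = result["pageInfo"]["endCursor"]
```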
Afterward, we analyzed the metadata of these 3,201 repositories to perform additional filters on the number of contributors (10 or more) and the number of commits (5,000 or more) in the default branch. Filtering out repositories with less than ten contributors aims at avoiding personal or coursework projects in our corpus [27]. Moreover, restricting the number of commits in the default branch to 5,000 or more is an attempt to remove immature or short-term projects from our corpus. After applying the filter for the number of contributors, 2,941 projects remained, and of these, only 477 repositories had at least 5,000 commits.
From these 477 repositories, we applied another filter, considering only projects whose primary language reported on GitHub is Java. The Java language was chosen considering the following criteria: i) the need to focus on a specific language, as refactoring analysis is language-dependent; ii) its popularity, according to TIOBE ranking [29] and StackOverflow Survey [30]; iii) and the fact that the tool that identifies refactorings (RefactoringMiner [31]), used in this work, can analyze only code written in Java. After applying this filter, 42 projects remained [32].
We then conducted a manual inspection of the remaining 42 repositories, examining the GitHub repository and the web page of each project. This analysis aims to eliminate those not containing a software project and those not documented in the English language. The first criterion refined the automatic filter for the primary programming language. Even though GitHub was able to detect a primary language for all repositories in our corpus at this point, one of them stored only documentation (e.g., books, software documentation, main pages, etc.), eventually having source code used as examples (this explains the GitHub classification of primary language). Only one repository did not meet this criterion, resulting in 41. We only selected project artifacts in English to guarantee that we would be able to understand the documentation of the projects in the corpus. At this point, two projects were removed because their documentation was written in Chinese, lasting 39 projects. Even though automatic translators are available, we decided not to include those repositories in the corpus due to the poor quality of the translation for some repositories. Thus, including them would be a threat to our analysis.
The last filter considered the number of valid merge commits. In our study, a valid merge commit cannot be of the no-fast-forward type, as our goal is to evaluate the merge effort. A fast-forward merge can be performed when there is a direct linear path from the source branch to the destination branch. In a fast-forward merge, Git moves the target branch pointer to the same location as the source branch pointer without creating an extra merge commit. The _git merge --no-ff_ command merges the specified branch into the current branch, creating a new merge commit and keeping the structure and history of the branches intact, even if a fast-forward merge is possible. We computed the distribution of valid merge commits for each project, as shown in Figure 2. Considering the lower and upper limits and the quartile values of this boxplot, calculated using Tukey's fences formula _Q3 + 1.5 \(\times\) IQR_[33], we defined some other criteria to select the projects.
At first, we discarded all projects with valid merge commits above the maximum threshold (4,504 merge commits). This action was taken so that the experiments could be conducted on a more homogeneous dataset, avoiding, for example, a situation where a single large project dominates the overall results. After applying this filter, four projects were discarded (graal, spring-boot, neo4j, and intellij-community), resulting in 35 projects. We also removed projects with a number of valid merge commits smaller than the limit of the first quartile of the boxplot (173 merge commits). Likewise, we discarded these projects as they would have little representation in the general context of building our dataset. Seven projects did not meet this criterion (CoreNLP, bazel, buck, guava, litho, presto, and selenium), and finally, our project corpus was composed of 28 projects.
Table I shows the characteristics of the projects selected for analysis, presenting the number of commits (NC), the number of merge commits (NMC), the number of merge commits using the _--no-ff_ flag (NMC-nff), and the number of valid merge commits (NVMC). The final version of the dataset
Fig. 1: The phases of the experiments.
discarded about 26.88% of the merges because they were generated by the Git command _--no-ff_, resulting in 40,248 merge commits.
### **Refactorings and Merge Effort**
As shown in Figure 1, the second phase was performed in 3 steps: (i) identify the merge commits that have code refactorings, (ii) collect and store the branch structure of the merge commits, indicating the commits containing refactorings in each branch, and (iii) calculate the merge effort.
We have considered in our study 33 different types of refactorings, 26 of which are described in Fowler's catalog [18]: Change Return Type, Extract Attribute, Extract Class, Extract Interface, Extract Method, Extract Subclass, Extract Superclass, Extract Variable, Inline Method, Inline Variable, Merge Attribute, Move Attribute, Move Class, Move Method, Pull Up Attribute, Pull Up Method, Push Down Attribute, Push Down Method, Rename Attribute, Rename Class, Rename Method, Rename Parameter, Rename Variable, Split Attribute, Split Parameter, and Split Variable. The other seven types of refactorings were defined by Tsantalis et al. [31]: Change Parameter Type, Change Variable Type, Merge Parameter, Merge Variable, Parameterize Variable, Replace Attribute, and Replace Variable with Attribute. We consider these subsets of refactorings, as a recent study revealed that developers often applied them in practice [34].
To investigate the effects of refactorings on the merge effort, we adopted a metric defined by Prudencio et al. [35] and implemented by Moura and Murta [36]. A more technical explanation of how we compute the merge effort can be found in Moura and Murta [36], but we summarize it as follows. First, we identified the code churn of each branch by performing a _diff_ between the base version (i.e., the common ancestor) and the tip of the branch. The two sets of actions (lines of code added and removed in the branches) are combined, producing a multiset [37] with all actions performed in the branches.
Then, we identified the code churn of the merge by performing a _diff_ between the base version and the merge version. This produces a multiset with all actions that were committed in the merge. Finally, we computed the merge effort by subtracting the former multiset from the latter. The produced multiset contains just the lines of code added or removed during the merge operation, and the merge effort is the total number of actions in this multiset. For instance, a merge that combines two independent methods added in separate files would lead to zero merge effort, since the VCS would perform it automatically. Similarly, if these two independent methods are added to the same file, but in different regions, the merge effort would also be zero, since no additional actions would be needed to conciliate the branches. However, integrating a new feature implemented in parallel to an extensive refactoring would lead to a significant merge effort to adjust the feature to the new code organization imposed by the refactoring.
The following operations are performed over commits with two parents to extract the metrics. Since merges with more than two parents, called octopus, cannot have conflicts and manual edits by definition, these cases would necessarily have zero effort and are ignored. From a merge commit named \(commit\_merge\), its parent commits are obtained, and, from them, the commit from which they were derived, called \(commit\_base\), is identified. Given this information, three diff operations are executed to obtain the actions performed in the merge commit. The first diff is performed between \(commit\_base\) and \(commit\_merge\), thus obtaining the actions incorporated in the merge. The authors formally define merge actions as described in the formula: \(actions_{\textit{merge}}\) = \(diff(commit\_base\), \(commit\_merge)\).
Then, the diff is executed between \(commit\_base\) and the two parent commits of the merge commit to obtain the actions performed in the two branches. Therefore, the actions performed on branches 1 and 2 are formally defined, respectively, in the formulas: \(actions_{\textit{branch1}}\) = \(diff(commit\_base\), \(commit\_parent1)\) and \(actions_{\textit{branch2}}\) = \(diff(commit\_base\), \(commit\_parent2)\).
These actions make it possible to identify the extra work implemented in a merge commit. For this, it is first necessary
Fig. 2: Boxplot with the distribution of valid merge commits across projects
to identify the actions that were performed in either of the branches, which are calculated as the sum of the actions performed in branches 1 and 2, as shown in the formula: \(actions_{branches}\) = \(actions_{branch1}\) + \(actions_{branch2}\).
Finally, to determine the extra work, we use the relative complement of the branch actions in the merge actions. The merge effort considered in the experiments of this work is the size of this multiset, obtained by taking the cardinality of \(actions_{extra}\), according to the formulas: \(actions_{extra}\) = \(actions_{merge}\) - \(actions_{branches}\) and \(effort\) = \(|actions_{extra}|\).
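A minimal sketch of this computation using Python multisets (collections.Counter) is shown below. How the individual diff actions are obtained (e.g., with pygit2 or git diff) is left abstract, and the action encoding is an illustrative assumption.

```python
from collections import Counter

def merge_effort(actions_base_to_merge, actions_base_to_p1, actions_base_to_p2):
    """Merge effort as the size of the multiset difference described above.

    Each argument is an iterable of diff actions, e.g. tuples like
    ('+', 'path/File.java', 'line text') or ('-', ...); the exact encoding of
    an action is an illustrative choice of this sketch.
    """
    actions_merge = Counter(actions_base_to_merge)
    actions_branches = Counter(actions_base_to_p1) + Counter(actions_base_to_p2)
    extra = actions_merge - actions_branches   # multiset relative complement
    return sum(extra.values())                 # |actions_extra|
```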
To collect versioning information from projects, we used Python's pygit2 library, and to identify refactorings in commits, we used version 2.1 of the RefactoringMiner tool, released in March 2021, which can identify 62 different types of refactoring operations. In studies performed by RefactoringMiner's authors, the tool is reported to achieve 98% of precision and 93% of recall [38, 39], which makes it the current state-of-the-art tool for automated refactoring detection. When using RefactoringMiner, some commits took a long time to process, sometimes causing the process to hang. Therefore, we applied a timeout of 5 minutes. If RefactoringMiner does not finish processing a commit within 5 minutes, we terminate the process and skip to the next commit. We adopted 5 minutes in alignment with the literature [23]. This timeout occurred in only 0.14% of the commits of the analyzed projects.
Footnote 2: [https://www.pygit2.org/](https://www.pygit2.org/)
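The timeout procedure can be sketched as follows. The RefactoringMiner invocation shown is only a placeholder (the tool can also be driven through its Java API); the relevant part is the timeout-and-skip logic described above.

```python
import subprocess

TIMEOUT_SECONDS = 300  # 5 minutes, as described in the text

def mine_refactorings(repo_path, commit_sha):
    """Run RefactoringMiner on one commit, skipping the commit on timeout."""
    # Placeholder command line; adapt to however RefactoringMiner is installed.
    cmd = ["./RefactoringMiner", "-c", repo_path, commit_sha]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=TIMEOUT_SECONDS, check=True)
        return out.stdout
    except subprocess.TimeoutExpired:
        return None   # commit skipped (0.14% of the commits in the corpus)
```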
### _Association Rules_
The extraction of association rules is an important task in data mining, whose goal is to find relationships among the attributes in a database [40]. An association rule represents a relational pattern between data items in the application domain that happens with a specific frequency. The extraction of association rules is a technique in data mining that allows the identification of meaningful patterns from the data.
In an attempt to identify whether the occurrence of refactorings influences the merge effort, we use some attributes that quantify the refactorings involved in each branch of a merge commit, the total number of refactorings that took place in both branches, and the merge effort (code churn). These mined attributes are presented in Table II.
The method proposed in this work uses the concept of multidimensional association rules [40]. Given a relation (or table) \(D\), a multidimensional association rule \(X\to Y\), defined on \(D\), is an implication of the form: \(X_{1}\wedge X_{2}\wedge\cdots\wedge X_{n}\to Y_{1}\wedge Y_{2}\wedge\cdots \wedge Y_{m}\), where \(n\geq 1,m\geq 1,\) and \(X_{i}(1\leq i\leq n)\) as well as \(Y_{j}(1\leq j\leq m)\) are conditions defined in terms of the distinct attributes of _D_[40, 41].
The rule \(X\to Y\) indicates, with a certain degree of assurance, that the occurrence of the antecedent \(X\) implies the occurrence of the consequent \(Y\). The relevance of an association rule is evaluated by three main measures of interest: _Support_, _Confidence_, and _Lift_[41]. The _Support_ metric is defined by the percentage of instances in \(D\) that satisfy the conditions of the antecedent and the conditions of the consequent. It is computed as follows: \(Sup_{(X\to Y)}=T_{X\cup Y}/T\), where \(T_{X\cup Y}\) represents the number of records in \(D\) that satisfy the conditions in \(X\) and the conditions in \(Y\), and \(T\) is the number of records in \(D\). On the other hand, _Confidence_ represents the probability of occurrence of the consequent, given the occurrence of the antecedent. It is obtained in the following manner: \(Conf_{(X\to Y)}=T_{X\cup Y}/T_{X}\), where \(T_{X}\) represents the number of records in \(D\) that satisfy the conditions of the antecedent \(X\). _Support_ and _Confidence_ are used as a filter in the process of mining association rules, that is, only the rules characterized by having a minimum _Support_ and a minimum _Confidence_ (defined as input parameters) are extracted.
To better illustrate the calculation of the _Support_ and _Confidence_ measures, let \(D\) be the relation shown in Table III, which includes entries about refactorings and merge effort. The amounts of refactorings in each branch have been discretized into four ranges of values: zero ("0"), units ("u"), dozens ("d"), and hundreds or more ("\(\geq 100\)"). Considering the rule R: _b1 = "u" \(\wedge\) b2 = "d" \(\rightarrow\) effort = "true"_, we can find four records in \(D\) that satisfy the three conditions of R, which are rows 1, 4, 6, and 8. Thus, \(T_{(X\cup Y)}=4\) and, since \(D\) has 8 entries (_T_ = 8), we can conclude that _Sup(R)_ = 50% (4/8). In this case, the _Confidence_ of the rule is 66.6% (4/6), since \(T_{(X\cup Y)}=4\) and the conditions in the antecedent of the rule (_b1 = "u" \(\wedge\) b2 = "d"_) are satisfied in 6 entries (\(T_{X}=6\)), as we can see in rows 1, 3, 4, 6, 7, and 8.
Another measure of interest considered in this work is the _Lift_ of a rule \(X\to Y\), which indicates how much more frequently the conditions in \(Y\) occur given that the conditions in \(X\) occur. _Lift_ is obtained by the quotient of the _Confidence_ of the rule and the _Support_ of its consequent, i.e., \(Lift_{(X\to Y)}=Conf_{(X\to Y)}/Sup_{(Y)}\), where \(Sup_{(Y)}\) represents the fraction of records in the relation that satisfy the conditions in \(Y\). When \(Lift=1\) there is a conditional independence between \(X\) and \(Y\), that is, the antecedent does not interfere in the occurrence of the consequent. On the other hand, \(Lift>1\) indicates a positive dependence between the antecedent and the consequent, meaning that the occurrence of \(X\) increases the chances of the occurrence of \(Y\). Conversely, when \(Lift<1\) there is a negative dependence between the antecedent and the consequent, which indicates that the occurrence of \(X\) decreases the chances of the occurrence of \(Y\).
Taking into account the rule R used to exemplify the _support_ and _confidence_ measures, _b1 = "u" \(\wedge\) b2 = "d" \(\rightarrow\) effort =
_"true"_, the _Support_ of the consequent (_Sup(effort = "true"_)) of the rule in \(D\) is equal to 50%, that is, the percentage of entries that satisfy the condition _effort_ = _"true"_. Thus, the _Lift_ obtained for the rule R is 1.33, since _Lift(R) = 66.6/50 = 1.33_, where 66.6% is the _confidence_ of the rule. In this case, the result indicates that, when there are few (units) of refactorings in branch 1 and some (dozens) in branch 2, the chances of having a merge effort increase by 33%. In other words, we observe that the probability of the occurrence of _effort_ = _"true"_ in \(D\), which is 50%, increases by a factor of 1.33 (becoming 66.6%) given the occurrence of the antecedent _b1 = "u"_ \(\wedge\)_b2 = "\(d\)".
We have used _Lift_ values to avoid random implications; however, _Lift_ is a symmetric metric. Consequently, as \(Lift_{(X\to Y)}=Lift_{(Y\to X)}\), we cannot conclude whether X implies Y or vice-versa. Thus, we performed a confidence analysis to perceive the direction of the implication, especially when the difference between the confidence of both rules is significant. For example, when the confidence value of the rule in the direction \(X\to Y\) is significantly higher than the one in the direction \(Y\to X\), we say that \(X\) influences \(Y\) and not the other way around.
We used the well-known Apriori [42] algorithm for the extraction of association rules, which is available in a Python library3. We mined rules with minimum support of 0.05%, given the large number of instances in our dataset, which represents a total of 20 merge commits.
Footnote 3: [http://rasbt.github.io/mlxtend/user_guide/frequent_patterns/apriori/](http://rasbt.github.io/mlxtend/user_guide/frequent_patterns/apriori/)
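A minimal sketch of the rule extraction with this library is shown below. The one-hot boolean columns are illustrative stand-ins for the discretized attributes of Table II, and only the minimum support value comes from the text.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per merge commit; boolean columns are the discretized attributes
# (illustrative column names in the spirit of Table II).
df = pd.DataFrame({
    "refactorings=true": [True, False, True, True, False],
    "b1=true":           [True, False, False, True, False],
    "b2=true":           [True, False, True, True, False],
    "effort=true":       [True, False, False, True, False],
})

itemsets = apriori(df, min_support=0.0005, use_colnames=True)  # 0.05% support
rules = association_rules(itemsets, metric="lift", min_threshold=1.0)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```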
## III Results and Discussion
In this section, we report and discuss the obtained results and present the answers to the research questions defined in Section II. In Section III-A, we analyze the influence of refactoring occurrences on the chances of having merge effort. Section III-B focuses on the relationship between the number of refactorings and the merge effort. In Section III-C, we discuss the association between the number of refactorings and the merge effort intensity. Finally, in Section III-D we present a discussion of the possible implications of our findings.
### _RQ 1 - Does the occurrence of refactorings in the branches increase the chances of merge effort?_
In an attempt to answer this research question, we initially discretized the dataset attributes to a binary domain, considering only a subset of attributes: number of refactorings in branch 1, number of refactorings in branch 2, the total number of refactorings, and merge effort. Therefore, we discretized each attribute as "true" or "false", indicating the occurrence (or not) of refactorings and merge effort. The graph in Figure 3 summarizes the results. On the x-axis, we present three groups of association rules extracted from the discretized dataset: when no refactorings occurred in the branches (refactorings = "false"); when some refactoring has been implemented in at least one of the branches (refactorings = "true"); and when refactorings occurred in parallel in both branches (_b1="true"_ \(\wedge\)_b2="true"_). Light gray bars refer to merge commits that did not require any merge effort and dark gray bars refer to merge commits that required some effort. The y-axis represents the _Lift_ value, calculated for each group of extracted rules, which indicates the strength of the relationship between the antecedent (i.e., the occurrence of refactorings) and the consequent (i.e., the occurrence of merge effort) of the rules.
At first, we decided to analyze the first two groups of rules, which only indicate the presence of refactorings, regardless of the branch in which they occurred. The _Lift_ values of the rules when there is no merge effort (light gray bars - rules: _refactorings_=_"false"_ \(\rightarrow\)_effort_=_"false"_ and _refactorings_=_"true"_ \(\rightarrow\)_effort_=_"false"_) demonstrate that the presence or not of refactorings does not influence the non-occurrence of effort since these values are very close to 1. However, in the two rules where there is an effort (dark gray bars - rules: _refactorings_=_"false"_ \(\rightarrow\)_effort_=_"true"_ and _refactorings_=_"true"_ \(\rightarrow\)_effort_=_"true"_), we can observe that **the occurrence of refactorings increases the chances of merge effort by 24%** ( _Lift = 1.24_ ). Conversely, **having no refactorings decreases the chances of merge effort by 27%** ( _Lift = 0.73_).
Note that this discrepancy of almost no effect on the absence of effort and some effect on the occurrence of effort is a consequence of the frequency of merges with effort in our dataset. From a total of 40,248 valid merges, just 2,861 (7.1% of the total) demanded some effort. Thus, a small fluctuation in the size of the light gray bars (from 1.02 to 0.98), which represent the bigger population of merges with no effort, leads to a big fluctuation in the size of the dark gray bars (from 0.73 to 1.24), which represent the small population of merges with some effort.
This increase of effort in the face of refactorings motivated us to check whether the occurrence of refactorings in parallel would be even more relevant, considering that some refactorings may be incompatible and lead to conflicts. Thus, we extracted the rule shown in the third group to assess whether the strength of this relationship would increase when refactorings were implemented simultaneously in both branches (rule: _b1="true"_ \(\wedge\)_b2="true"_ \(\rightarrow\)_effort_=_"true"_). This rule has a _Lift_ of 2.14, which indicates that **refactorings in both branches further increase the chances of merge effort by 114%**.
### _RQ 2 - Does the amount of refactorings in the branches increase the chances of merge effort?_
To answer this research question, we kept the binary discretization of the effort attribute in the original dataset, that is, "true" or "false". However, the number of refactorings was discretized into four ranges of values: "0" (zero), "u" (units = 1 to 9), "d" (dozens = 10 to 99) and "\(\geq 100\)" (hundreds or more). Thus, the antecedent of the rules extracted now can contain one of these four values. Considering the attribute number of refactorings (_number_of_refactorings_) as the antecedent of the rules and the merge effort (_effort_) as the consequent, we would have 8 possible rules. A rule in this case would have the following format: _number_of_refactorings_=_"0" / "u" / "d" / "\(\geq 100\)" \(\rightarrow\) effort="true" / "false". We chose not to subdivide the last range ("\(\geq 100\)") after analyzing that this range of values was the least frequent in all attributes of the dataset. In this way, we aim to minimize the imbalance of the intervals of values considered in this study.
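A minimal sketch of this discretization, assuming the bin edges implied by the ranges above, is:

```python
import pandas as pd

def discretize_counts(counts):
    """Map refactoring (or churn) counts to the labels used in the text."""
    bins = [-0.5, 0.5, 9.5, 99.5, float("inf")]
    labels = ["0", "u", "d", ">=100"]
    return pd.cut(pd.Series(counts), bins=bins, labels=labels)

# Example: discretize_counts([0, 3, 42, 517]) -> ['0', 'u', 'd', '>=100']
```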
The graph in Figure 4 was constructed similarly to the graph in Figure 3. Nevertheless, the x-axis now represents the four ranges of values of the proposed discretization for the attribute _number_of_refactorings_: "0" (zero), "u" (units), "d" (dozens) and "\(\geq 100\)" (hundreds or more). The _Lift_ values for the extracted rules demonstrate that the absence or the occurrence of few (units) refactorings tends to decrease the chances of merge effort. This behavior can be observed in the first two bars in dark gray color, with _Lift_ of 0.73 and 0.64. These values indicate that the complete absence of refactorings ("0") decreases the chances of merge effort by 27%, while refactorings in the units ("u") range decrease the chances of merge effort by 36%. As the number of refactorings increases, the possibility of a merge effort also increases. This statement can be verified in the dark gray bars with the number of refactorings in dozens ("d") and hundreds or more ("\(\geq 100\)"). **The occurrence of dozens of refactorings increases the chances of merge effort by 35% (_Lift_ = 1.35). With a hundred or more refactorings, this increase is even greater: 143%** (_Lift_ = 2.43). Analyzing the light gray bars, when there is no merge effort, we observe values very close to 1, showing no relevant relationship between the antecedent and the consequent of these association rules. Only in the last bar does the _Lift_ value drop to 0.89, indicating that many refactorings decrease by 11% the possibility of having no merge effort.
We observed some merge commits to assess whether the application of many refactorings was really necessary. In the project Netty, at the merge commit 81a6fb4, the developer applied three refactorings in the class RuleBasedIPFilter. These refactorings were applied to support code modifications of other refactorings in the classes ChannelHandlerContext and InetSocketAddress changed in the branches. In the merge commit, the developer applied two Extract Methods to create a new constructor, extracting code from the old constructor and the method accept, and a Change Attribute Type on the attribute rules. The method accept uses variables of the classes ChannelHandlerContext and InetSocketAddress. We observed that the old constructor has a single line of code after these refactorings, and it calls the new constructor, adding a default boolean value as a parameter. Although it is common in Object-Oriented languages such as Java to use method overloading for implementing default parameters, here this strategy seems inappropriate because many code modifications were applied and developers may get confused about whether the new constructor should be used or not. This effort could be minimized if the developer applies an Extract Method to the old constructor that checks the boolean value, keeping the design simpler with just one constructor. Thus, the application of two Extract Methods was unnecessary in this case - using just one Extract Method would lead to a better design and also decrease the merge effort. This example motivates us to suggest the creation of tools or guidelines for developers to avoid the application of unnecessary refactorings that can increase the merge effort. In this case, the tool could alert the developer about the need to update the calls to the new constructor when necessary, or suggest another refactoring to minimize the effort.
Footnote 4: [https://github.com/netty/netty/commit/81a6fb](https://github.com/netty/netty/commit/81a6fb)
As we did in RQ1, we decided to evaluate whether increasing the number of refactorings in both merge branches increases the chances of merge effort. In this analysis, we do not just assess whether refactorings have taken place in the two branches, but we consider the number of refactorings according to the four discretization ranges. This would lead us to 32 possible rules, but we focused only on 6 of them
Fig. 4: Influence of the number of refactorings on the occurrence of the merge effort.
Fig. 3: Influence of refactorings on the occurrence of the merge effort.
The graph in Figure 5 presents on the x-axis three groups of rules whose antecedent considers the number of refactorings that co-occurred in both branches: the first one with refactorings in the range of the units, the second one in the range of dozens, and the third one in the range of hundreds or more. The occurrence of few refactorings in the two branches (_b1_ = "u" \(\wedge\) _b2_ = "u") increases by 13% (_Lift_ = 1.13) the chances of having effort. This increase is even greater as the number of refactorings grows: **for dozens (_b1_ = "d" \(\wedge\) _b2_ = "d") of refactorings in both branches this percentage is 167% (_Lift_ = 2.67), and with a hundred or more (_b1_ = "\(\geq 100\)" \(\wedge\) _b2_ = "\(\geq 100\)") refactorings it rises to an impressive 308% (_Lift_ = 4.08)**. By analyzing the light gray bars, we verify the opposite behavior, although less intense. Note that the occurrence of many ("\(\geq 100\)") refactorings in both branches reduces by 24% (_Lift_ = 0.76) the chances of having no effort.
### _RQ 3 - Does the amount of refactorings in the branches increase the intensity of merge effort?_
To answer the third research question, we performed a new form of discretization of the original dataset. In addition to discretizing the antecedent into four ranges of values ("0", "u", "d" and "\(\geq 100\)"), we did the same with the consequent, representing the merge effort. We discretized the consequent in this way to analyze to what extent the merge effort can be affected as we increase the number of refactorings in the branches of a merge commit.
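As an illustration of this discretization, the following sketch maps raw counts to the four ranges; the exact boundaries (1-9 for units, 10-99 for dozens) are our assumption for the example, since only the range labels are defined above.

```python
def discretize(count: int) -> str:
    """Map a raw count (of refactorings or of merge-effort churn) to the
    four ranges used in the association rules: zero, units, dozens,
    hundreds or more."""
    if count == 0:
        return "0"
    if count < 10:
        return "u"       # units (assumed 1-9)
    if count < 100:
        return "d"       # dozens (assumed 10-99)
    return ">=100"       # hundreds or more

# Both the antecedent (number of refactorings in the branches) and the
# consequent (merge effort) can be discretized with the same function.
print([discretize(c) for c in [0, 3, 42, 250]])  # ['0', 'u', 'd', '>=100']
```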
As mentioned, the focus of this research question is to evaluate the intensity of the merge effort given the number of refactorings that occurred in the branches of a merge commit. Thus, the graph in Figure 6 presents four scales to represent the level of effort according to the proposed discretization for the consequent of the extracted association rules. These four scales are displayed in the graph legend in grayscale, where the lightest gray represents the absence of effort, and the darkest gray represents the greatest possible effort, i.e., hundreds or more ("\(\geq 100\)").
The results obtained for the first two blocks of rules, represented on the x-axis by zero ("0") and units ("u"), demonstrate that fewer refactorings tend to generate less merge effort. This is particularly noticeable in the darkest gray bar, where, **for zero refactorings, the chances of having merge effort in the range of hundreds or more ("\(\geq 100\)") decrease by 36% (_Lift_ = 0.64), and for units ("u") this percentage is even higher, at 55% (_Lift_ = 0.45)**. Still in this part of the graph (refactorings = "u"), it is possible to notice a decrease in the value of _Lift_, reinforcing that the number of refactorings has a great influence on the merge effort. Analyzing the rules of the third group (refactorings = "d"), we can notice that dozens of refactorings increase the chances of having merge effort in the intensity of "u", "d" and "\(\geq 100\)" almost in the same proportion, by 34% (_Lift_ = 1.34), 39% (_Lift_ = 1.39), and 34% (_Lift_ = 1.34), respectively. This behavior is profoundly accentuated when we have many refactorings, as shown in the last rule block (refactorings = "\(\geq 100\)"). The _Lift_ values show that **many refactorings decrease the chances of having no merge effort by 11% (_Lift_ = 0.89), and significantly increase the chances of having merge effort in the units ("u"), dozens ("d"), and hundreds or more ("\(\geq 100\)") ranges, presenting respectively the following percentages: 92% (_Lift_ = 1.92), 215% (_Lift_ = 3.15) and 232% (_Lift_ = 3.32)**.
In the same way as in RQ1 and RQ2, we analyzed the situation when refactorings co-occur in both branches. In this RQ, we aimed to assess whether, as the number of parallel refactorings increases in both branches, the intensity of the merge effort also increases. The results shown in the graph of Figure 7 demonstrate that many refactorings occurring in parallel in both branches increase the chances of the merge effort being large. **For dozens of refactorings in the two branches (_b1_ = "d" \(\wedge\) _b2_ = "d"), for example, this increase is quite noticeable, increasing by 102% (_Lift_ = 2.02) the chances of the effort being in the units range, 199% (_Lift_ = 2.99) of being in the dozens range, and 416% (_Lift_ = 5.16) of being in the hundreds or more range**. The chances of no effort in this situation are reduced by 13% (_Lift_ = 0.87). This behavior was even more evident **when hundreds or more (_b1_ = "\(\geq 100\)" \(\wedge\) _b2_ = "\(\geq 100\)") refactorings occurred in parallel in both branches**. In this case, **the chances of having effort in the units range increased by 111% (_Lift_ = 2.11), of being in the dozens range by 544% (_Lift_ = 6.44), and of being in the hundreds or more range by 751% (_Lift_ = 8.51)**. The chances of having no effort when there are hundreds or more refactorings in parallel in both branches are reduced even more, by 24% (_Lift_ = 0.76).
Fig. 5: Influence of the number of refactorings in parallel in both branches on the occurrence of merge effort.
Fig. 6: Influence of the number of refactorings on the merge effort intensity.
The first block of bars in the graph, when few refactorings occurred in parallel in both branches (_b1_ = "u" \(\wedge\) _b2_ = "u"), also shows a slight increase in the chances of having effort in the units and dozens ranges, with 15% in both cases. However, not enough merge cases meeting the minimum support were identified for the effort in the hundreds or more range in this situation. The results of these analyses demonstrate that the number of refactorings that occurred in parallel in both branches influences not only the occurrence of merge effort but also its intensity.
### **Discussion**
In this section, we will discuss some implications based on the results of our experiments.
**Leveraging Collaborative Refactoring-Aware Changes**. From the point of view of systems development managers or even of developers (when there is no clearly defined manager role), our results may suggest some guidelines on how to conduct the implementation of refactorings in a project: (i) analyze the feasibility of avoiding parallel branches to implement refactorings to minimize the chances of having high-intensity merge effort, (ii) avoid allocating extensive refactorings in parallel with other important changes, (iii) motivate developers to synchronize with their peers before extensive refactorings, (iv) evaluate the possibility of applying alternative techniques to the branches, such as the use of trunk-based development [43] with feature toggles [44]. The idea is to enable a more transparent collaborative development process so that new features are implemented directly on the mainline of development without creating branches. This strategy allows developers to be aware of parallel changes as soon as they are made, avoiding accumulating changes that only appear to other developers during the merge. Existing work [45] already indicates a reduction of merge effort when trunk-based development is in place. Although merges of independent clones still occur, they are known to be less complex than merges of named branches, which are suppressed due to the use of feature flags.
**New Merge Strategies for Refactoring Changes**. From the tool builders' perspective, our results may indicate the need for new merge strategies, for example, first merging edits (without considering refactorings) and then replaying the refactorings. This approach may avoid the occurrence of merge semantic conflicts [4], as the refactorings would be merged into a branch that already contains all edits not classified as refactorings. This way of performing a merge involving refactorings can be inspired by the operation-based merging approach, as discussed in Mens [4]. Moreover, the response to RQ3 may suggest creating tools that signal the most appropriate time to perform the merge, considering the number of refactorings implemented in both branches and the predicted merge effort level. In this way, developers could receive alerts to execute or even postpone the merge, considering a specific threshold of refactorings, thus potentially minimizing the merge effort.
**Improving Search-Based Refactoring Recommenders with Merge Effort Estimates**. Our findings in Sections III-A and III-B also indicate there is room for proposing refactoring recommenders that better support developers working in parallel. Even though software maintenance is increasingly being parallelized, existing multi-objective refactoring approaches (e.g. [46, 47, 48]) are not designed to take parallel changes into consideration. They are often defined in terms of objective functions (or criteria) capturing static [46, 47, 48] and dynamic [48] quality attributes, feature dependencies [48], test coverage [47] and consistency with already made changes [47].
This negligence may induce developers to select and perform refactorings in one of the branches that, although they maximize the satisfaction of all the aforementioned criteria, will either require or increase merge effort. If the other refactoring and non-refactoring changes being made in parallel are considered (even if partially), the recommender algorithms will search for alternative refactoring solutions that achieve the same objectives and are less prone to (higher) merge effort later. Another possible alternative is for the recommender to simply suggest that the refactoring be postponed until the possibly conflicting merge of the other branch is concluded. Finally, given our finding in Section III-C, which indicated that shorter sequences of refactorings lead to lower merge effort, one using a multi-objective refactoring recommender might consider using the amount of refactorings as an additional objective function. Thus, the recommender would prioritize shorter refactoring sequences.
**Relating Refactoring Types and Merge Effort**. In addition, our results may encourage more specific studies on which types of refactorings tend to generate more merge effort. Still in this line, we intend to assess the incompatibility of refactorings in the branches of a merge, analyzing the following question: which types of refactorings (implemented in parallel in the branches) result in a greater chance of having a high-intensity merging effort?
## IV **Threats to Validity**
Some internal, external, construct, and conclusion validity threats may have influenced the results reported in this work, as presented in the following:
**Internal Validity**. The association rule extraction technique helped us understand the intensity of the relationships between the refactoring and merge effort attributes. However, a third variable not considered in our study may have affected both attributes.
Fig. 7: Influence of the number of refactorings implemented simultaneously in both branches on the merge effort intensity.
Moreover, as mentioned in Section II-B, when using RefactoringMiner to detect refactorings in a given commit, we employed a 5-minute timeout. We recorded every time RefactoringMiner took more than 5 minutes and the process was terminated. Out of 428,109 commits analyzed with RefactoringMiner, only 615 (0.14%) reached this timeout. This is a tiny percentage and does not pose a severe threat to the validity of our results.
**External Validity**. The process of building the project corpus may have disregarded projects relevant to our experiments. To mitigate this risk, we followed a systematic selection methodology, as described in Section II-A. In addition, our study focuses only on Java projects. However, as we were limited to open-source software systems and did not have access to closed-source enterprise Java projects, we cannot claim that our findings can be generalized to all Java software systems. Likewise, we cannot generalize our results to other programming languages. Thus, we note that the ability to generalize our findings is restricted to the characteristics of the selected projects. Our analysis, however, showed that there are patterns of behavior in the sample we chose, and we believe that they may also be present in other projects and languages. Nevertheless, additional studies are needed. Additionally, although the RefactoringMiner 2.1 tool can identify 62 types of refactorings, our experiments considered only 33 types, which are the most frequently applied in other studies [34]. We conjecture that a study evaluating a more extensive set of refactoring types would possibly find an even more substantial involvement of refactoring operations in increasing merge effort.
**Construct Validity**. Our methodology looked for refactorings that influence the merge effort. However, it is not easy to determine whether the refactoring was the only cause of the merge effort. This is because the refactorings implementation is often tangled with other types of changes [49]. Moreover, the merge effort calculation is based on the code churn (_i.e._, the number of lines added or removed) metric. Although this type of approach, considering the code churn, is widely used in software engineering to assess effort [45, 50, 51], qualitative evaluations may be necessary to confirm the obtained results. Additionally, we only considered merge commits with two parents during the dataset build process. In theory, a merge commit may have more than two parents, which is called _octopus_ in Git parlance. Nonetheless, in practice, an octopus merge commit rarely happens. In our dataset, for example, we found only four commits with three or more parents, representing only 0.007% of the total merge commits (55,044). These four commits were found in only three projects: graal (1), Jenkins (2), and spring-framework (1). Even in these rare cases, assessing the effort of _octopus_ merges is irrelevant, considering that, by definition, Git only allows octopus merges when there is no conflict and no manual edits.
**Conclusion Validity**. We first note that some association rules we identified have relatively little support. Therefore, some of them may have happened by chance. However, suppressing such rules would also discard rare but relevant ones.
## V **Related Work**
We identified some works related to code refactorings and code merge operations [19, 20, 21, 22, 23]. Most of them [19, 20, 21, 22] focused on building, improving, or proposing merge tools that take into account the occurrence of refactorings in the branches. Only Mahmoudi et al. [23] did more analytical work, evaluating the relationship between refactorings and merge conflicts. In this section, we will discuss the contributions and research gaps of these works in more detail.
Angyal et al. [22] extended the three-way merge algorithm to support renamings. The main idea is that the renamings should be detected before the merge and considered while reconciling the changes. Similarly, Dig et al. [19] presented a software configuration management tool, called MolhadoRef, capable of identifying some types of refactorings before performing the code merge. They used the RefactoringCrawler tool, capable of identifying up to seven types of refactorings. MolhadoRef uses the operation-based approach, treating both refactorings and edits (other code changes not classified as refactorings) as change operations that are recorded and replayed. As edits and refactorings are mixed, they proposed inverting the refactorings, performing a textual merge, and replaying the refactoring operations. The main idea is to merge refactorings to eliminate merge errors and unnecessary merge conflicts. Lessenich et al. [20] proposed improvements to their code merge tool (JDIME) to deal with some types of refactorings: renamings and code moves. They used a heuristic method and evaluated their approach on real-world merge scenarios (48 projects and 4,878 merge scenarios). Mahmoudi and Nadi [21] performed an empirical study to understand the nature of changes that phone vendors make versus modifications made in the original development of Android. The study focused on the simultaneous evolution of several versions of the Android operating system by investigating the overlap of different changes. Although the authors have not developed a tool, they have investigated and reported the most common types of refactorings that occur in practice and have looked at the possibility of having automated support for merging them. As a proxy case study, they analyzed the changes in the popular community-based variant of Android, LineageOS, and its corresponding Android versions. The experiments considered a small subset of refactorings (six different types), considered by the authors to be the most common. Nevertheless, while our work focused on assessing and quantifying the impact of refactorings on the merge effort, these works focused on creating or improving merge tools that involve code refactorings. Their contributions have the potential to alleviate some of the problems discussed in our paper.
Mahmoudi et al. [23] carried out a large-scale empirical study, with about 3,000 open-source projects, to assess the relationship between 15 types of refactorings and the occurrence of merge conflicts. To analyze how often merge conflicts involve refactored code, they checked whether a code change that led to a conflict has involved a refactoring
change. The term "involved refactoring" was used to describe a refactoring that occurred in an evolutionary change that overlaps the conflicting region. As a conflicting merge scenario can have multiple conflicting regions, if at least one of the conflicting regions in a conflicting merge scenario contained involved refactorings, they consider that the merge scenario has involved refactorings. As a result, they found out that 22% of merge conflicts involve refactorings and that 11% of conflicting regions have at least one refactoring involved. Their studies also presented that Extract Method refactorings are involved in more conflicts than their typical overall frequency, while most refactoring types are engaged in conflicts less frequently. The authors also conducted experiments to analyze whether conflicts involving refactorings are more difficult to resolve. For that, they used two metrics to define merge effort: the number of conflicting lines in the conflicting chunks and the number of commits with evolutionary changes that contain refactorings. The results show that conflicting regions involving refactorings tend to be larger (i.e., more complex) than those without refactorings. Furthermore, conflicting merge scenarios with involved refactorings include more evolutionary changes (i.e., changes that lead to conflict) than conflicting merge scenarios with no involved refactorings.
As in our work, Mahmoudi et al. [23] analyzed the consequences of refactorings on the merge operation. However, their experiments considered the size of the conflicting chunks and not the effort required to perform the merge. It is essential to point out that an analysis purely based on individual chunks cannot perceive semantic conflicts, which can occur outside of chunks or even due to multiple simultaneous chunks. That is, non-conflicting merges (i.e., without physical conflicts raised by the version control system) may generate syntactic or semantic code conflicts that potentially demand effort during the merge operation. By measuring the code churn required during the merge operation, we can quantify the total merge effort, even if it is a consequence of syntactic or semantic conflicts. Furthermore, using the association rules extraction technique, our approach allowed us to quantify the intensity of the relationship between refactorings and merge effort. Our study also presented an initial evaluation of the place where the refactorings are implemented, that is, the branches. At this first moment, we focused on evaluating the parallelism of the refactorings in the two branches of a merge. The results showed that implementing refactorings in parallel in both branches generates a more significant merge effort.
## VI Conclusion
This work presented a study that analyzes the relationship between the implementation of refactorings in the branches and the effort required to merge such branches. Our main results indicate that (i) the implementation of refactorings in the branches of a merge commit increases the chances of having merge effort; (ii) the number of refactorings implemented in the branches of a merge commit influences the occurrence of merge effort: the more refactorings, the greater the chance of effort; (iii) the number of refactorings implemented in the branches of a merge commit influences the intensity of the merge effort: the more refactorings, the greater the chance of the effort being of greater intensity; and (iv) refactorings co-occurring in both branches of the merge commit tend to increase even more the chances of there being effort, as well as the intensity of it.
As future work, we propose to carry out experiments to analyze which types of refactorings tend to demand more significant merge effort. In this paper, we found that refactorings in parallel could harm the merge process. However, not all parallel refactorings are necessarily harmful. By knowing which types of refactorings are more or less incompatible, we could improve the precision of our current result. In other words, instead of saying _"do not do refactorings in parallel"_, we could say _"do not do refactoring X in parallel with refactoring Y"_. Such a finding could motivate researchers to devise tools for alerting developers about incompatible refactorings, suggesting a branch synchronization before applying the refactoring, or even helping automate the merge of incompatible refactorings.
We also propose to extend the experiment to a more extensive set of projects. A larger number of repositories will allow for a more focused analysis per project to identify the characteristics of the projects and their refactorings in the merge effort. In addition, we suggest using other attributes in the data mining process, such as: the size of commits, the existence of merge conflicts, and identifying whether the merge commit is associated with a pull request. Including the size of commits in the antecedent of the association rules can help understand this attribute's relationship with the merge effort. We expect larger commits to increase the chances of merge conflicts and, consequently, the effort for resolution. It is also interesting to analyze whether commits with refactorings tend to be larger on average than non-refactoring commits. The existence of conflicts can help to understand whether semantic conflicts result in less effort than the conflicts identified by Git at the time of the merge. Similarly, we can analyze whether merging a pull request branch into a project can lead to less merge effort since a code review process has been performed before the merge.
Furthermore, we intend to explore the compatibility of the types of refactorings that co-occur in both branches; that is, to discover those implemented in parallel in both branches that increase the chances of having effort and its intensity. Experiments can also be conducted for private software projects and in systems developed in other programming languages to analyze whether the behavior observed in this work repeats in projects with different characteristics.
|
2306.08454 | Gesper: A Restoration-Enhancement Framework for General Speech
Reconstruction | This paper describes a real-time General Speech Reconstruction (Gesper)
system submitted to the ICASSP 2023 Speech Signal Improvement (SSI) Challenge.
This novel proposed system is a two-stage architecture, in which the speech
restoration is performed, and then cascaded by speech enhancement. We propose a
complex spectral mapping-based generative adversarial network (CSM-GAN) as the
speech restoration module for the first time. For noise suppression and
dereverberation, the enhancement module is performed with fullband-wideband
parallel processing. On the blind test set of ICASSP 2023 SSI Challenge, the
proposed Gesper system, which satisfies the real-time condition, achieves 3.27
P.804 overall mean opinion score (MOS) and 3.35 P.835 overall MOS, ranked 1st
in both track 1 and track 2. | Wenzhe Liu, Yupeng Shi, Jun Chen, Wei Rao, Shulin He, Andong Li, Yannan Wang, Zhiyong Wu | 2023-06-14T11:54:39Z | http://arxiv.org/abs/2306.08454v1 | # Gesper: A Restoration-Enhancement Framework for General Speech Reconstruction
###### Abstract
This paper describes a real-time **G**eneral **S**peech **R**econstruction (Gesper) system submitted to the ICASSP 2023 Speech Signal Improvement (SSI) Challenge. The proposed system is a two-stage architecture, in which speech restoration is performed first and then cascaded with speech enhancement. We propose a complex spectral mapping-based generative adversarial network (CSM-GAN) as the speech restoration module for the first time. For noise suppression and dereverberation, the enhancement module is performed with fullband-wideband parallel processing. On the blind test set of the ICASSP 2023 SSI Challenge, the proposed Gesper system, which satisfies the real-time condition, achieves 3.27 P.804 overall mean opinion score (MOS) and 3.35 P.835 overall MOS, ranked 1st in both track 1 and track 2.
Wenzhe Liu\({}^{1}\), Yupeng Shi\({}^{1}\), Jun Chen\({}^{1,2}\), Wei Rao\({}^{1}\), Shulin He\({}^{1}\), Andong Li\({}^{3}\), Yannan Wang\({}^{1}\), Zhiyong Wu\({}^{2}\)\({}^{1}\)Tencent Ethereal Audio Lab, Tencent, Shenzhen, China
\({}^{2}\)Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
\({}^{3}\)Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
{wenzheliu, yupengshi, ellenwrao, ronslhe, yannanwang}@tencent.com,
[email protected], [email protected], [email protected]
**Index Terms**: speech signal improvement, two-stage, speech restoration, speech enhancement
## 1 Introduction
Real-time communication (RTC) systems such as teleconferencing systems, smartphones and telephones have become a necessity in the life and work of individuals. In order to achieve high-quality communication experiences, it is crucial to address the challenges of speech signal quality in RTC systems. However, due to the influence of acoustic capturing, noise/reverberation corruption and network congestion, the speech quality of current RTC systems is still deficient. The ICASSP 2023 SSI Challenge1 focuses on improving the speech signal quality in RTC systems, which involves tackling the difficulties of noise, coloration, discontinuity, loudness, and reverberation of speech in a variety of complex acoustic conditions. Noisiness includes background noise, circuit noise and coding noise. Coloration results from bandwidth limitation and frequency response distortions of the microphone. Packet loss results in speech discontinuity. Loudness problems include clipping, nonlinear distortion and far-field recording.
Footnote 1: [https://www.microsoft.com/en-us/research/academic-program/speech-signal-improvement-challenge-icassp-2023/](https://www.microsoft.com/en-us/research/academic-program/speech-signal-improvement-challenge-icassp-2023/)
In the speech enhancement field, a "noise suppression and speech restoration" architecture has been proposed recently [1, 2, 3]. These methods usually contain two-stage processing modules. In the first stage, the noise suppression (NS) module is used to reduce the noise or background components. However, the mask-based or mapping-based NS modules often adversely affect the speech component as more noise is suppressed, which tends to increase distortions of the speech signal. To reduce this speech degradation, a second-stage module is adopted to re-process the NS-enhanced speech and restore a higher-quality speech spectrum based on the time-frequency context information. Leaning on the strong generative capability of vocoders, generative models have been introduced in the restoration stage [4, 3, 5]. A vocoder such as WaveNet [6], LPCNet [7] or WaveGlow [8] is applied in the restoration stage to re-generate the speech waveform based on the Mel spectrum enhanced by the NS module in the first stage. More recently, VoiceFixer [9] was also proposed to perform the above-mentioned enhancement and restoration procedure, covering noise reduction, dereverberation, bandwidth extension and declipping tasks.
However, due to the complexity of the acoustic scenarios provided in the SSI Challenge, speech is distorted heavily, and the above framework may further damage its quality. Excessive suppression of the degraded speech signal caused by noise reduction methods may significantly increase the difficulty of restoring the desired speech signal without the guidance of semantic information. Therefore, we propose a "restoration and enhancement" two-stage framework, named Gesper, to address the complicated problems in the SSI Challenge. Since time-domain generative models have poor high-frequency representation ability and mel-domain models abandon phase information, a complex spectral mapping-based generative model is introduced to overcome these limitations. We first employ CSM-GAN as the restoration module for speech distortion restoration, narrowband bandwidth expansion (BWE) as well as preliminary denoising and dereverberation. Moreover, since there may still exist residual noise components and artifacts in the output of the restoration module, the enhancement module is applied in the second stage to further improve the quality of the speech signal. As mentioned in [10], processing the wideband and fullband signals independently reduces the dynamic range of the spectrum and improves speech enhancement performance. Parallel wideband-fullband processing has therefore been utilized to improve the efficiency of full-band speech enhancement.
In summary, this paper makes the following contributions:
* We propose a novel restoration-enhancement framework for general speech quality improvement, to address the difficulties of noise, coloration, discontinuity, loudness, and reverberation of speech.
* We design a complex spectrum mapping-based generation model for speech restoration, which shows better performance with respect to the previous vocoders.
* We introduce a wideband and fullband parallel processing method for full-band speech enhancement.
The rest of the paper is organized as follows. In Section 2, we illustrate the overall diagram. We present the experimental setup in Section 3. The results and analyses are given in Section 4, and the final conclusions are drawn in Section 5.
## 2 Proposed system
As shown in Fig.1, our proposed system is composed of three parts: sound level adjustment, restoration module, and enhancement module. The sound level adjustment is adopted as data pre-processing
and the restoration module and the enhancement module constitute the two-stage improvement framework. The input audio waveform is first adapted to the appropriate volume by the sound level adjustment, and then the short-time Fourier transform (STFT) is applied to obtain the complex spectrogram. The real and imaginary parts of the complex spectrum are then fed into a two-stage architecture: 1) the restoration module first performs speech distortion restoration, and preliminary denoising and dereverberation, with a generative adversarial network; 2) the enhancement module further eliminates residual noise components and artifacts based on the relatively high-quality speech complex spectrum generated by the restoration module. Eventually, the output of the enhancement module passes through the inverse STFT (iSTFT) to yield the final prediction of the model. Each module is described in detail in the following parts.
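A minimal sketch of how the two-stage chain could be wired together is shown below. The `agc`, `restoration_net` and `enhancement_net` callables are placeholders, and the 20 ms / 10 ms framing at 48 kHz follows the experimental setup in Section 3.2; this is an illustration, not a definitive implementation of our system.

```python
import torch

def gesper_pipeline(waveform, agc, restoration_net, enhancement_net,
                    n_fft=960, hop=480):
    """Sketch of the two-stage processing chain: AGC -> STFT ->
    restoration -> enhancement -> iSTFT. All modules are placeholders."""
    # Sound level adjustment on the raw waveform (frame-wise gain).
    waveform = agc(waveform)
    window = torch.hann_window(n_fft)
    # Complex spectrogram of shape (freq, time).
    spec = torch.stft(waveform, n_fft, hop_length=hop, window=window,
                      return_complex=True)
    # Stage 1: restore distorted speech (GAN generator on real/imag parts).
    restored = restoration_net(torch.view_as_real(spec))
    # Stage 2: suppress residual noise/reverberation and artifacts.
    enhanced = enhancement_net(restored)
    enhanced = torch.view_as_complex(enhanced.contiguous())
    return torch.istft(enhanced, n_fft, hop_length=hop, window=window)
```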
### Sound Level adjustment
The role of our causal sound level adjustment is to tune the audio waveform to the appropriate loudness. It adjusts the energy of the waveform on half the length of an STFT frame at a time with the WebRTC auto gain control (AGC) algorithm. Specifically, within each half-frame of the waveform, we query a private gain experience table to obtain the gain according to the calculated amplitude, and then apply it to the waveform. The gain-adjusted waveform is then passed through the STFT to obtain the complex spectrum.
### Restoration module
Excessive suppression of the damaged speech signal by the enhancement module may lead to the speech signal not being restored correctly. To avoid this issue, we first employ the restoration module for speech distortion restoration, narrowband BWE and primary denoising and dereverberation.
Previously available restoration models usually generate the speech waveform from the mel domain [11], an approach borrowed from the text-to-speech (TTS) task. Nevertheless, the poor high-frequency representation ability of such generative models and the inadequate utilization of phase information by mel-domain generative models render them inappropriate for the complex scenarios of this challenge. It is well known that phase recovery is helpful for speech enhancement. In this paper, we propose a complex spectrum mapping-based GAN as the restoration module by leveraging recent advances in speech enhancement and speech synthesis.
The generator of CSM-GAN follows the "encoder-sequence modeling-decoder" architecture, which takes the complex spectrum as the input and obtains the corresponding restoration results. As Fig.2 shows, the encoder contains a convolution layer followed by a dense block, and 3 convolution layers are stacked after that. The decoder comprises the corresponding transposed convolution layers and transposed convolution-dense layers. The kernel of the convolution layer is set to (2, 3) in the time and frequency axes, and the stride is (1, 2). Between the encoder and decoder, there are stacked temporal convolutional network blocks for temporal modeling. Skip connections are added to avoid gradient vanishing. To reduce the number of parameters and the computational effort, the fullband complex spectrum is divided into 3 subbands, which are then concatenated in the channel dimension and fed to the generator.
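The following sketch illustrates the subband preparation of the generator input, assuming an equal-width split of the frequency axis into 3 bands with real and imaginary parts stacked as channels; the actual band boundaries used in our system may differ.

```python
import torch

def split_subbands(complex_spec, num_bands=3):
    """Sketch: split a fullband complex spectrogram (freq, time) into equal
    frequency bands and stack the real/imaginary parts of each band along
    the channel axis, yielding one multi-channel generator input."""
    freq_bins, _ = complex_spec.shape
    band_width = freq_bins // num_bands
    channels = []
    for b in range(num_bands):
        band = complex_spec[b * band_width:(b + 1) * band_width]
        channels.append(band.real)
        channels.append(band.imag)
    # Shape: (2 * num_bands, band_width, time).
    return torch.stack(channels, dim=0)

spec = torch.randn(480, 100, dtype=torch.complex64)  # toy complex spectrogram
print(split_subbands(spec).shape)  # torch.Size([6, 160, 100])
```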
Regarding the discriminators, multi-resolution frequency discriminators [12] and our proposed multi-band discriminators are adopted together. Multi-resolution frequency discriminators are composed of stacked convolutional blocks, which are used to capture spectral structures of different frequency resolutions. The magnitude spectrum and its logarithmic spectrum are concatenated as the input. Each discriminator is composed of 7 2D convolution layers with a kernel size of \((3,3)\) and a stride of \((1,1)\) or \((2,2)\). Weight normalization and LeakyReLU are applied sequentially after each convolution layer except the last one. For the multi-band discriminators, the network architecture is the same as the multi-resolution frequency discriminator, but a single subband spectrum is used as the input instead. With the multi-band discriminators, the problem of a large dynamic range across different subbands is overcome.
With CSM-GAN, we can fully utilize the phase information and efficiently tackle the high-frequency components of speech.
The training loss comprises a combination of components: a reconstruction loss term, an adversarial loss term, and a feature matching loss term. The reconstruction loss is made up of multi-resolution fullband and subband short-time Fourier transform (STFT) losses.
To achieve the multi-resolution STFT loss, we minimize the spectral convergence loss [13], along with the L1 distance in the logarithmic magnitude spectral domain, while utilizing various FFT analysis parameters, which can be written as:
\[\mathcal{L}_{s}(X)=\sum_{r}\Big(\|\log(X_{r})-\log(\widehat{X}_{r})\|_{1}+\mathcal{L}_{sc}(X)\Big), \tag{1}\]
where \(X_{r}\) and \(\widehat{X}_{r}\) are the spectra of the clean speech and the predicted waveform computed with an FFT size of \(r\).
Figure 1: The general schematic of the proposed system. The ”AGC” denotes auto gain control.
Figure 2: The architecture of CSM-GAN.
The spectral convergence loss function can be written as:
\[\mathcal{L}_{sc}(X)=\frac{\|X_{r}-\widehat{X}_{r}\|_{F}}{\|\widehat{X}_{r}\|_{F}}. \tag{2}\]
The loss functions for the fullband and subband cases are denoted by \(\mathcal{L}_{s}(S)\) and \(\mathcal{L}_{s}(S^{sub})\), respectively. Here, \(S\) represents the magnitude spectrum of the complete signal \(s\), whereas \(S^{sub}\) corresponds to the subband signals \(s^{sub}\) obtained by decomposing the signal using pseudo-quadrature mirror filters (PQMF).
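A sketch of the multi-resolution STFT reconstruction loss of Eqs. (1)-(2) is given below; the FFT sizes and hop lengths are illustrative choices rather than our training configuration, and the spectral convergence denominator follows Eq. (2) as written.

```python
import torch

def stft_loss(clean, estimate, fft_sizes=(512, 1024, 2048), eps=1e-7):
    """Sketch of the multi-resolution STFT loss: spectral convergence plus
    L1 log-magnitude distance, summed over several FFT sizes."""
    total = 0.0
    for n_fft in fft_sizes:
        window = torch.hann_window(n_fft)
        X = torch.stft(clean, n_fft, hop_length=n_fft // 4, window=window,
                       return_complex=True).abs()
        X_hat = torch.stft(estimate, n_fft, hop_length=n_fft // 4, window=window,
                           return_complex=True).abs()
        # L1 distance between log-magnitude spectra.
        log_l1 = torch.mean(torch.abs(torch.log(X + eps) - torch.log(X_hat + eps)))
        # Spectral convergence term (Eq. (2)).
        sc = torch.norm(X - X_hat, p="fro") / (torch.norm(X_hat, p="fro") + eps)
        total = total + log_l1 + sc
    return total
```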
To train the generator and discriminators, we use the LS-GAN [14] adversarial loss. It encourages the generator to deceive the discriminator during training, while helping the discriminator distinguish between clean samples (labeled as 1) and samples estimated by the restoration module (labeled as 0). The generator \(G_{S}\) and discriminator \(D_{S}\) loss functions are given by:
\[\mathcal{L}_{adv}=\mathbb{E}\big[(1-D_{S}(\hat{s}))^{2}\big], \tag{3}\]
\[\mathcal{L}_{D_{S}}=\mathbb{E}\big[(D_{S}(s)-1)^{2}+(D_{S}(\hat{s}))^{2}\big]. \tag{4}\]
Moreover, to reduce the L1 distance between the feature maps of the discriminator for genuine and synthesized audio, a feature matching loss is computed, as presented in [15]. This approach has proven to be successful in previous research.
\[\mathcal{L}_{feat}=\mathbb{E}\Bigg[\frac{1}{L}\sum_{l=0}^{L-1}\big|D_{S}^{l}(s)-D_{S}^{l}(\hat{s})\big|\Bigg], \tag{5}\]
where \(L\) denotes the number of layers of the discriminator.
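The sketch below illustrates how the LS-GAN losses of Eqs. (3)-(4) and the feature matching loss of Eq. (5) can be computed, assuming a placeholder discriminator that returns its intermediate feature maps followed by the final score; it is illustrative rather than our exact training code.

```python
import torch

def adversarial_losses(disc, clean, restored):
    """Sketch of the LS-GAN generator/discriminator losses and the feature
    matching loss. `disc` is assumed to return a list of per-layer feature
    maps with the final entry being the discriminator score."""
    feats_real = disc(clean)
    feats_fake = disc(restored)   # detach `restored` when updating the discriminator
    score_real, score_fake = feats_real[-1], feats_fake[-1]

    # Eq. (3): the generator tries to push fake scores towards 1.
    loss_adv = torch.mean((1.0 - score_fake) ** 2)
    # Eq. (4): the discriminator labels clean as 1 and restored as 0.
    loss_disc = torch.mean((score_real - 1.0) ** 2) + torch.mean(score_fake ** 2)
    # Eq. (5): L1 distance between intermediate feature maps.
    loss_feat = torch.stack(
        [torch.mean(torch.abs(fr - ff))
         for fr, ff in zip(feats_real[:-1], feats_fake[:-1])]
    ).mean()
    return loss_adv, loss_disc, loss_feat
```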
The generator's total loss is a combination of the aforementioned loss components, weighted appropriately:
\[\mathcal{L}_{G_{S}}=\mathcal{L}_{s}(S)+\mathcal{L}_{s}(S^{sub})+\lambda_{adv}\mathcal{L}_{adv}+\lambda_{feat}\mathcal{L}_{feat}, \tag{6}\]
where \(\lambda_{adv}\) and \(\lambda_{feat}\) are set to 1 and 20, respectively.
### Enhancement module
There may still exist residual noise and artifacts in the output of the restoration module. We apply the enhancement module in the second stage to eliminate these residual noises and artifacts to further improve the quality of the speech signal.
To maintain performance while reducing the computational effort, we conduct fullband-wideband parallel processing in the enhancement module. Note that this paper intends to provide a framework for solving complex speech impairment problems, in which the networks are all replaceable. More specifically, we divide the fullband complex spectrum into two groups of features: the complex spectrum of the wideband speech and 32 equivalent rectangular bandwidth (ERB) bands containing fullband information obtained by splitting bands. Subsequently, the wideband TaylorEnhancer [16] (TaEr) and the fullband masking-based UNet (FBM UNet) [17] are introduced to handle the wideband complex spectrum and the ERB bands in parallel, respectively. They are trained from scratch. TaEr is an all-neural denoising framework that mimics the behavior of Taylor's series and can be modeled as the superposition of a 0th-order and higher-order polynomials, where the former only concerns magnitude recovery and the latter are tasked with complex-residual estimation. TaEr has superior wideband noise suppression capability and focuses on wideband speech enhancement, while the FBM UNet provides the advantage of low complexity for fullband processing. The outputs of the two sub-networks are then integrated into the enhanced fullband complex spectrum by the band-merge operation.
The loss function is defined as:
\[\mathcal{L}(X)=\lambda_{cplx}\times\mathcal{L}_{cplx}(X)+\lambda_{mag}\times\mathcal{L}_{mag}(X), \tag{7}\]
where \(\lambda_{cplx}\) and \(\lambda_{mag}\) denote the weights of the complex loss function and the magnitude loss function, respectively. The complex spectrum loss function and the magnitude spectrum loss function can be defined as:
\[\mathcal{L}_{cplx}(X)=\left\||X|^{0.3}\frac{X}{|X|}-|\widehat{X}|^{0.3}\frac{\widehat{X}}{|\widehat{X}|}\right\|_{2}^{2}, \tag{8}\]
\[\mathcal{L}_{mag}(X)=\left\||X|^{0.3}-|\widehat{X}|^{0.3}\right\|_{2}^{2}. \tag{9}\]
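The following sketch shows one way to implement the power-compressed losses of Eqs. (7)-(9); the weights `lam_cplx` and `lam_mag` are placeholders, as their values are not specified here.

```python
import torch

def enhancement_loss(X, X_hat, lam_cplx=0.5, lam_mag=0.5, power=0.3, eps=1e-8):
    """Sketch of the power-compressed spectral loss of Eqs. (7)-(9).
    X and X_hat are complex spectrograms of the target and estimate."""
    mag, mag_hat = X.abs(), X_hat.abs()
    # Eq. (8): compressed complex spectra keep the original phase.
    cplx = (mag + eps) ** power * (X / (mag + eps))
    cplx_hat = (mag_hat + eps) ** power * (X_hat / (mag_hat + eps))
    loss_cplx = torch.mean(torch.abs(cplx - cplx_hat) ** 2)
    # Eq. (9): compressed magnitude distance.
    loss_mag = torch.mean((mag ** power - mag_hat ** power) ** 2)
    # Eq. (7): weighted combination.
    return lam_cplx * loss_cplx + lam_mag * loss_mag
```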
## 3 Experiments
### Datasets
We randomly selected subsets from the DNS Challenge corpus [18] and our internal dataset with different sampling rates as our clean set and noise set. For convenience, all the clean and noise data were resampled to 48 kHz. The room impulse responses (RIRs) were generated based on the image method. We subjectively analyzed the problematic audio from the SSI Challenge dev set and simulated a 1500-hour dataset according to the proportion of various specific cases of issues including coloration, discontinuity, loudness, background noise and reverberation, etc.
The training data simulation procedure is shown in Fig. 3. Specifically, the clean input, with non-linear distortions applied to a proportion of the data, was first mixed with noise and reverberation to generate the noisy-reverberant data. Then, to simulate the various cutoff frequencies and distortion types of the receiving microphone, the noisy-reverberant data was processed by a low-pass filter with different cutoff frequencies ranging from 1 kHz to 24 kHz and subjected to various receiver distortions such as spectral leakage, clipping, half-wave rectification, etc. After that, the received data was processed by our private noise suppressor (NS) and blind bandwidth extension (BWE) module. Finally, several open-source codecs (AAC [19], OPUS [20], etc.) with different bit-rates and packet loss rates were applied to the BWE output to simulate RTC network transmission.
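A rough sketch of one degradation branch of this pipeline is given below; it covers reverberation, noise mixing, band limitation and clipping only, omits the codec, packet-loss, NS and BWE stages, and all parameter values are illustrative rather than the ones used in our simulation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def simulate_received(clean, noise, rir, sr=48000, cutoff_hz=4000, snr_db=10,
                      clip_level=0.5):
    """Sketch of one branch of the degradation pipeline: reverberation,
    noise mixing, low-pass filtering and hard clipping."""
    reverberant = np.convolve(clean, rir)[:len(clean)]
    noise = noise[:len(reverberant)]
    # Scale the noise to reach the target signal-to-noise ratio.
    gain = np.sqrt(np.sum(reverberant ** 2) /
                   (np.sum(noise ** 2) * 10 ** (snr_db / 10) + 1e-12))
    noisy = reverberant + gain * noise
    # Receiver band limitation: low-pass filter with a variable cutoff.
    sos = butter(8, cutoff_hz, btype="low", fs=sr, output="sos")
    band_limited = sosfilt(sos, noisy)
    # Loudness distortion: hard clipping.
    return np.clip(band_limited, -clip_level, clip_level)
```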
The test set is provided by the organizer, which is a blind set of 500 devices/environments, which have an approximately uniform distribution for the impairment areas including noisiness, coloration, discontinuity, loudness and reverberation.
### Experimental setup
We applied a Hann window with a 20 ms window length and a 10 ms frame shift. All utterances are segmented into 4-second chunks. The models are trained for a maximum of 20000000 steps with the AdamW optimizer. The learning rate is 2e-4 and the batch size is set to 16.
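For reference, the framing and training hyper-parameters above translate into the following sketch, assuming the 48 kHz fullband sampling rate of the challenge data; the derived sample counts are computed from the stated durations.

```python
SAMPLE_RATE = 48000                      # fullband audio
WIN_LENGTH = int(0.02 * SAMPLE_RATE)     # 20 ms window -> 960 samples
HOP_LENGTH = int(0.01 * SAMPLE_RATE)     # 10 ms frame shift -> 480 samples
SEGMENT_SAMPLES = 4 * SAMPLE_RATE        # 4-second training chunks

TRAIN_CONFIG = {
    "optimizer": "AdamW",
    "learning_rate": 2e-4,
    "batch_size": 16,
}
```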
## 4 Results and Analysis
In this section, we evaluate the proposed system with both objective and subjective evaluations. For the objective evaluation, DNSMOS [21] and NISQA [22] are chosen to evaluate the performance of the systems, where DNSMOS is a non-intrusive perceptual objective speech quality metric to evaluate noise suppression, and NISQA is a non-intrusive objective speech quality assessment metric to evaluate speech quality and naturalness including noisiness, discontinuity, loudness and coloration. The subjective evaluation includes two tests. The first is based on P.835 and measures SIG, BAK, and OVRL, while the second is based on an extension of P.804 (listening phase) and P.863.2 (Annex A) and relies on crowdsourcing and the P.808 toolkit developed by the organizers.
### Ablation study
The ablation study spans the following aspects. First, we verify the superiority of the complex spectrum mapping-based GAN, namely CSM-GAN, over a GAN in the time domain, namely TD-GAN, which takes SEA-Net as the generator and has the same discriminators as CSM-GAN. Then, the necessity of the "restoration and enhancement" framework, namely Gesper, is verified, where the reversed "enhancement and restoration" architecture is referred to as ER-Net. Table 1 and Fig. 4 show the performance of these methods. According to the table, several observations can be made. Firstly, CSM-GAN shows better performance than TD-GAN, although TD-GAN has a higher computational cost. This is because the high-frequency components, which correspond to the fine structures of speech signals, are hard to model for the time-domain generator. Secondly, Gesper outperforms CSM-GAN, indicating that the following enhancement module is necessary to remove artifacts generated by CSM-GAN and to further suppress noise and reverberation components. Finally, it can be found that the speech quality processed by ER-Net is poor compared with Gesper or even CSM-GAN, demonstrating that applying noise reduction in the first stage severely damages the speech signal in complex acoustic cases, which results in the following regeneration model being unable to recover these components. The above ablation experiments show the validity and reasonableness of the proposed "restoration and enhancement" framework.
### Evaluation on the SSI Challenge blind test set
Table 1 reports the performance of Gesper in terms of DNSMOS and NISQA. Compared to the baselines, the proposed system achieves significant improvements in all metrics consistently; gains of 0.74 DNSMOS and 1.63 NISQA are obtained, respectively. Table 2 and Table 3 show partial results of a multi-dimensional subjective test on the SSI Challenge blind test set. The proposed system yields a significant improvement in all metrics relative to the noisy signals and other submissions. This indicates that our proposed system efficiently alleviates the difficulties of noise, coloration, discontinuity, loudness and reverberation, which play a vital role in speech signal quality.
### Parameter number and real-time factor
Moreover, we counted the number of parameters and the real-time factor (RTF). The proposed model has a total parameter number of 12.1 M, and its RTF on an Intel Core i5 Quadcore CPU (clocked at 2.4 GHz) with single thread is 0.37.
## 5 Conclusions
This paper introduces our submission to the ICASSP 2023 SSI Challenge. Our proposed two-stage framework achieves impressive results in addressing the challenges of noise, coloration, discontinuity, loudness and reverberation that reduce the speech quality. The proposed real-time system is ranked first place in tracks 1 and 2 of the ICASSP 2023 SSI Challenge.
| Methods | DNSMOS SIG | DNSMOS BAK | DNSMOS OVR | NISQA MOS | NISQA Noi. | NISQA Dis. | NISQA Col. | NISQA Loud. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| noisy | 2.89 | 3.45 | 2.46 | 2.34 | 3.11 | 3.39 | 2.80 | 2.87 |
| CSM-GAN | 3.44 | 4.03 | 3.14 | 3.74 | 4.03 | 4.19 | 3.67 | 3.94 |
| TD-GAN | 3.27 | 3.99 | 2.98 | 3.24 | 3.79 | 3.78 | 3.28 | 3.66 |
| ER-Net | 3.33 | 3.98 | 3.02 | 3.33 | 3.65 | 3.81 | 3.33 | 3.62 |
| Gesper | **3.45** | **4.12** | **3.20** | **3.97** | **4.33** | **4.28** | **3.79** | **4.09** |

Table 1: Comparisons of DNSMOS and NISQA scores using different methods. The best results are boldfaced. Noi., Dis., Col. and Loud. indicate noisiness, discontinuity, coloration and loudness, respectively.
| Methods | Overall | Signal | Background |
| --- | --- | --- | --- |
| Noisy | 2.824 | 3.147 | 3.453 |
| Hitit | 3.089 | 3.312 | 4.074 |
| Gesper | 3.350 | 3.581 | 4.208 |

Table 2: Subjective evaluation results based on ITU-T P.835 MOS on the SSI Challenge blind test set.
Figure 4: Spectrograms of results.
| Methods | Coloration | Discontinuity | Loudness | Reverberation |
| --- | --- | --- | --- | --- |
| Noisy | 3.029 | 4.061 | 2.992 | 3.852 |
| Hitit | 3.248 | 4.005 | 3.916 | 4.477 |
| Gesper | 3.598 | 4.201 | 4.109 | 4.316 |

Table 3: Part of the subjective evaluation results (P.804 MOS) on the SSI Challenge blind test set.
Figure 3: The pipeline of data simulation, where “NS” refers to noise suppression and “DC” indicates direct current. |
2305.11455 | Shattering the Agent-Environment Interface for Fine-Tuning Inclusive
Language Models | A centerpiece of the ever-popular reinforcement learning from human feedback
(RLHF) approach to fine-tuning autoregressive language models is the explicit
training of a reward model to emulate human feedback, distinct from the
language model itself. This reward model is then coupled with policy-gradient
methods to dramatically improve the alignment between language model outputs
and desired responses. In this work, we adopt a novel perspective wherein a
pre-trained language model is itself simultaneously a policy, reward function,
and transition function. An immediate consequence of this is that reward
learning and language model fine-tuning can be performed jointly and directly,
without requiring any further downstream policy optimization. While this
perspective does indeed break the traditional agent-environment interface, we
nevertheless maintain that there can be enormous statistical benefits afforded
by bringing to bear traditional algorithmic concepts from reinforcement
learning. Our experiments demonstrate one concrete instance of this through
efficient exploration based on the representation and resolution of epistemic
uncertainty. In order to illustrate these ideas in a transparent manner, we
restrict attention to a simple didactic data generating process and leave for
future work extension to systems of practical scale. | Wanqiao Xu, Shi Dong, Dilip Arumugam, Benjamin Van Roy | 2023-05-19T06:21:15Z | http://arxiv.org/abs/2305.11455v1 | # Shattering the Agent-Environment Interface for Fine-Tuning Inclusive Language Models
###### Abstract
A centerpiece of the ever-popular reinforcement learning from human feedback (RLHF) approach to fine-tuning autoregressive language models is the explicit training of a reward model to emulate human feedback, distinct from the language model itself. This reward model is then coupled with policy-gradient methods to dramatically improve the alignment between language model outputs and desired responses. In this work, we adopt a novel perspective wherein a pre-trained language model is itself simultaneously a policy, reward function, and transition function. An immediate consequence of this is that reward learning and language model fine-tuning can be performed jointly and directly, without requiring any further downstream policy optimization. While this perspective does indeed break the traditional agent-environment interface, we nevertheless maintain that there can be enormous statistical benefits afforded by bringing to bear traditional algorithmic concepts from reinforcement learning. Our experiments demonstrate one concrete instance of this through efficient exploration based on the representation and resolution of epistemic uncertainty. In order to illustrate these ideas in a transparent manner, we restrict attention to a simple didactic data generating process and leave for future work extension to systems of practical scale.
## 1 Introduction
While recent years have witnessed a dramatic shift in the capabilities of generative AIs across numerous data modalities, excitement and discourse surrounding natural language processing (NLP) and large language models (LLMs) in particular have become near-ubiquitous within just the last few months [55; 84], leading to an unprecedented proliferation of daily users probing and exploring these models' impressive capabilities through prolonged, interactive dialogues. With this attention has also come an onslaught of challenges for the AI and machine learning research communities, ranging from the rigorous benchmarking of capabilities [47], adherence to copyright law [30], concerns for privacy [46; 45], and insight into the key methodologies for training these models [55], to name just a few. One question that lies at the heart of the last issue revolves around how much of the success of fine-tuned, autoregressive LLMs is driven by the reinforcement learning from human feedback (RLHF) [76; 66] pipeline.
While the classic NLP task of language modeling is easily formulated and solved through traditional supervised-learning techniques [15; 52], the RLHF paradigm has found great empirical success by interpreting this as merely a preliminary pretraining phase and further incorporating a subsequent fine-tuning phase that leverages human feedback when refining responses to be more accurate and more preferable. From the perspective of a sequential decision-making process, two hallmark characteristics of this pipeline include **(1)** viewing a language model as a policy, mapping a rich, pretrained representation of a sequence of tokens along with a partial response to a next-token distribution and **(2)** interpreting human feedback as identifying a terminal reward function that assigns scalar feedback to completed prompt-response pairs so as to incentivize preferred responses. In this work, we introduce a novel perspective on LLMs that extends **(1)** and renders **(2)** moot, giving rise to a novel and statistically-efficient fine-tuning method.
We recognize that by virtue of vast amounts of unstructured Web data, a pretrained LLM can be simultaneously viewed as a policy, a reward function, and an environment simulator. Traditionally, a policy is implemented within a decision-making agent whereas the reward function and simulator are properties of the environment and, therefore, reside external to the agent. Thus, our novel perspective blurs the traditional boundary between agent and environment found throughout the reinforcement-learning literature [79]. Nevertheless, in this paper we demonstrate the value of this triumvirate through a meticulous collection of simple yet illustrative experiments, designed to highlight how foundational concepts from reinforcement learning can still be successfully brought to bear for fine-tuning LLMs.
Concretely, through the lens of viewing a pretrained LLM as a reward function, we propose a new fine-tuning algorithm, Inclusive Learning From Human Feedback (ILHF), that offers two key advantages over current RLHF approaches. Firstly, from a computational perspective, ILHF avoids the need for further downstream application of policy-gradient methods [75] in order to align LLM responses to human preferences. Secondly, from a statistical perspective and as our method's name suggests, the LLMs resulting from ILHF are _inclusive_[8] and, therefore, demonstrably converge to the preferred population response distribution over the course of fine-tuning; this stands in stark contrast to the _agglomerative_ models that arise from the standard RLHF approach, which are encouraged to place all probability mass on a singular, "best" response that is preferred by the majority of the population. Beyond empirical results that validate the emergence of such inclusive and agglomerative models, we further demonstrate how ILHF, a supervised-learning approach, can still be made more statistically efficient by leveraging judicious exploration strategies borne out in the reinforcement-learning literature [48].
The paper proceeds as follows: in Section 2 we establish notation, review the current RLHF pipeline, and briefly present empirical results on a didactic example to highlight the difference between inclusive and agglomerative LLMs. We then proceed to outline ILHF in Section 3 followed by details of our experimental protocol in Section 4 and a discussion of empirical results in Section 5.
## 2 Preliminaries
### Notation
For any natural number \(N\in\mathbb{N}\), we denote the index set as \([N]\triangleq\{1,2,\ldots,N\}\). For any arbitrary set \(\mathcal{Z}\), we use the Kleene plus \(\mathcal{Z}^{+}\) to denote the set of all sequences of length at least one formed by elements of \(\mathcal{Z}\). Orthogonally, we use \(\Delta(\mathcal{Z})\) to denote the set of all probability distributions supported on \(\mathcal{Z}\). At the most abstract level, a LLM is an autoregressive mapping that, given a current sequence of tokens, generates a probability distribution over next tokens; modern applications of LLMs often elide this low-level mechanistic view of these models and instead adopt the more holistic perspective that a LLM maps an input prompt to a response, both of which are variable-length sequences of tokens. If \(\mathcal{V}\) is the vocabulary or set of all possible tokens, then a LLM is represented as a mapping \(\pi_{\phi}:\mathcal{V}^{+}\rightarrow\Delta(\mathcal{V})\) parameterized by \(\phi\in\Re^{d}\) where some initial, non-empty sequence of tokens from \(\mathcal{V}\) (constituting a prompt) and subsequently sampled tokens for the response are autoregressively passed back into \(\pi\) as inputs to generate the next-token distribution. With a slight abuse of notation, we use \(\overline{\pi}_{\phi}:\mathcal{V}^{+}\rightarrow\Delta(\mathcal{V}^{+})\) to denote the associated mapping from an input prompt to a distribution over full, complete responses.
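To make the relation between \(\pi_{\phi}\) and \(\overline{\pi}_{\phi}\) concrete, the following sketch autoregressively samples a response from a placeholder next-token distribution; the callable `next_token_dist`, the `eos_id` convention, and the length cap are illustrative assumptions rather than part of our formalism.

```python
import torch

def sample_response(next_token_dist, prompt_tokens, eos_id, max_len=64):
    """Sketch of how the next-token policy pi_phi induces the response-level
    map pi-bar_phi: starting from a prompt, tokens are sampled
    autoregressively until an end-of-sequence token (or a length cap).
    `next_token_dist` is a placeholder returning a categorical distribution
    over the vocabulary given the token sequence so far."""
    tokens = list(prompt_tokens)
    response = []
    for _ in range(max_len):
        probs = next_token_dist(tokens)             # shape: (|V|,)
        token = torch.multinomial(probs, 1).item()  # Y_j ~ pi_phi(. | X, Y_1:j-1)
        if token == eos_id:
            break
        tokens.append(token)
        response.append(token)
    return response
```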
### Reinforcement Learning from Human Feedback (RLHF)
Current approaches to RLHF [76, 66] are characterized by three distinct phases: pretraining, reward model learning, and fine-tuning. Pretraining is facilitated by curating a large dataset of \(N\in\mathbb{N}\) prompt-response pairs \(\mathcal{D}=\{(\overline{X}_{i},\overline{Y}_{i})\}_{i=1}^{N}\) where \(\overline{X}_{i},\overline{Y}_{i}\in\mathcal{V}^{+}\), typically representing unstructured text data scraped from the Web. Pretrained LLM parameters \(\phi^{\mathrm{pre}}\in\Re^{d}\) are obtained through the standard supervised language-modeling objective which, in this context, aligns with the classic behavioral cloning [11, 68] loss function used widely in imitation learning: \(\mathcal{L}^{\mathrm{pre}}(\phi)=-\frac{1}{N}\sum\limits_{i=1}^{N}\sum \limits_{j=1}^{\lambda(i)}\log\left(\overline{\pi}_{\phi}(\overline{Y}_{i,j} \mid\overline{X}_{i},\overline{Y}_{i,1:j-1})\right),\) where \(\lambda(i)\) denotes the length of the \(i\)th response, \(\overline{Y}_{i}\). The challenge posed after the completion of pretraining is that the unstructured text data curated in \(\mathcal{D}\) is only an approximation to proper, natural text data that end users want and expect from a LLM; this is a direct consequence of quickly collating \(\mathcal{D}\) by scraping the Internet. Moreover, beyond these initial syntactic issues, such Web sources are also fraught with errors and factual inaccuracies that need to be corrected as well. Oftentimes, these errors can be easily identified and remedied by human evaluators though the challenge lies in propagating such corrections completely throughout the vast space of possible prompts and responses.
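As a concrete reference point, the pretraining objective above is simply a token-level cross-entropy over prompt-response pairs. The sketch below is a minimal, framework-agnostic rendering of \(\mathcal{L}^{\mathrm{pre}}\); the callable `log_prob_fn` and the list-of-tokens dataset format are assumptions made for illustration rather than part of the original pipeline.

```python
def pretraining_loss(log_prob_fn, dataset):
    """Average negative log-likelihood over prompt-response pairs.

    log_prob_fn(prefix, token) is assumed to return log pi_phi(token | prefix);
    dataset holds (prompt, response) pairs, each a list of tokens.
    """
    total = 0.0
    for prompt, response in dataset:
        prefix = list(prompt)
        for token in response:
            total -= log_prob_fn(prefix, token)   # -log pi(Y_j | X, Y_{1:j-1})
            prefix.append(token)
    return total / len(dataset)
```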
Since the acquisition of feedback is the limiting reactant that inhibits scalability, a reward model is trained over the course of \(H\in\mathbb{N}\) rounds to emulate human feedback obtained by iteratively fetching a single prompt \(X_{i}\), sampling two random responses \(Y_{i}^{A},Y_{i}^{B}\sim\overline{\pi}_{\phi^{\mathrm{pre}}}(\cdot\mid X_{i})\), and then querying a human evaluator for a binary indicator \(L_{i}\in\{0,1\}\) that communicates their preference (or lack thereof) for the first response \(Y_{i}^{A}\). An external reward model \(r_{\psi}:\mathcal{V}^{+}\times\mathcal{V}^{+}\rightarrow\Re\) parameterized by \(\psi\in\Re^{m}\) can then be trained via supervised learning by minimizing \(\mathcal{L}^{\mathrm{reward}}(\psi)=-\frac{1}{H}\sum\limits_{i=1}^{H}\mathrm{ LogSigmoid}\Big{(}R_{\psi}\big{(}X,Y_{i}^{A},Y_{i}^{B},L_{i}\big{)}\Big{)}\), where
\[R_{\psi}\big{(}X,Y_{i}^{A},Y_{i}^{B},L_{i}\big{)}=\begin{cases}r_{\psi}(X,Y_{i }^{A})-r_{\psi}(X,Y_{i}^{B})&\text{if }L_{i}=1\\ r_{\psi}(X,Y_{i}^{B})-r_{\psi}(X,Y_{i}^{A})&\text{otherwise}\end{cases}.\]
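The reward-model objective above amounts to a logistic loss on the score difference between the preferred and rejected responses. The snippet below is a hedged sketch of that computation; the callable `r` standing in for \(r_{\psi}\) and the tuple layout of `comparisons` are illustrative assumptions.

```python
import numpy as np

def log_sigmoid(x):
    # numerically stable log(sigmoid(x))
    return -np.logaddexp(0.0, -x)

def reward_model_loss(r, comparisons):
    """Pairwise preference loss: -(1/H) * sum_i LogSigmoid(R_psi(...))."""
    losses = []
    for x, y_a, y_b, label in comparisons:
        margin = r(x, y_a) - r(x, y_b)     # preferred-minus-rejected score
        if label != 1:
            margin = -margin
        losses.append(-log_sigmoid(margin))
    return float(np.mean(losses))
```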
With the fully-trained reward model \(r_{\psi^{*}}\) in hand, subsequent prompts can have their corresponding responses aligned to human preferences via reinforcement learning but without the need for laboriously querying a live human labeler. Specifically, for each of \(T\in\mathbb{N}\) fine-tuning prompts \(X_{1},\dots,X_{T}\), one can sample two responses \(Y_{t,A},Y_{t,B}\sim\overline{\pi}_{\phi}(\cdot\mid X_{t})\), obtain a synthetic human feedback signal \(L_{t}(A,B)\) based on \(r_{\psi^{*}}(X_{t},Y_{t,A})\) as well as \(r_{\psi^{*}}(X_{t},Y_{t,B})\), and apply policy-gradient methods [80, 81, 40, 53, 75] in order to maximize \(\mathcal{J}(\phi)=\mathbb{E}_{X_{t},Y_{t,A},Y_{t,B}}\left[L_{t}(A,B)\right], \forall t\in[T]\). Naturally, as this is an objective for fine-tuning the LLM, initial policy parameters are set to be \(\phi^{\mathrm{pre}}\). The default standard choice for carrying out this policy optimization in the existing literature is Proximal Policy Optimization (PPO) [75].
### Inclusive vs. Agglomerative AIs
By design, a LLM trained via the RLHF pipeline as outlined in the previous section learns to emit responses that maximize the likelihood of being preferred by human evaluators. Consequently, a RLHF model generating responses \(Y_{t,A}\) across \(T\) evaluation prompts \(X_{1},\dots,X_{T}\) would likely be preferred (or be designated at least as good) as the alternative responses \(Y_{t,B}\) of some other model \(B\) according to the criterion \(\sum\limits_{t=1}^{T}L_{t}(A,B)\). Unfortunately, as discussed and analyzed in Arumugam et al. [8], LLMs that are preferred under this criterion are _agglomerative_ and allow for per-prompt response distributions that place all probability mass on a hypothetical "best" response which is simply preferred by the majority of human evaluators sampled from a given population (see Theorem 2 of Arumugam et al. [8]). Without delving into the details of the theoretical argument for this result, a simple intuition is that the fine-tuning phase of the RLHF pipeline operates as a contextual bandit [42], for which there always exists an optimal policy that is a Dirac delta distribution on the optimal arm with highest expected mean reward, for each context. Not only does such a degenerate response distribution qualitatively fail to reflect the diverse interests and preferences of the overall population, but it also quantitatively precludes further downstream gradient-based optimization to redress the issue or cater to any shifts in the desired response distribution altogether. In contrast to an agglomerative model, one might instead favor an inclusive model that strives to preserve the full population response distribution. The primary contribution of this paper is a fine-tuning algorithm that leverages feedback signals derived from this preferred response distribution to yield such inclusive models.

Figure 1: ILHF succeeds and REINFORCE+KL penalty fails to match human response probability.
To concretize the issues surrounding agglomerative models and to verify that such problems do manifest from current RLHF practice, we report results for a toy experiment using an extension of the simple, didactic example discussed in Section 2.2 of Arumugam et al. [8]. The example consists of multiple rounds of fine-tuning performed on exactly one prompt and where a response is a single token from \(\{-1,1\}\). Initially, the model was pretrained to emit \(-1\) with probability 0.77 and \(1\) with probability 0.23. For fine-tuning, the desired population response distribution prefers response \(-1\) with probability \(\frac{2}{3}\) while response \(1\) is favored with probability \(\frac{1}{3}\).
For such a small-scale and simple problem, we capture the essence of the RLHF fine-tuning process through policy search by using REINFORCE [86], rather than PPO, along with KL-regularization towards the pretrained model response distribution; this implies that we explicitly forego the benefits of variance reduction that come from a critic as well as the potential for faster convergence through off-policy policy gradient updates. Results are provided in Figure 1 varying the value of the KL-regularization coefficient. The primary observation is that as the REINFORCE fine-tuning updates induce an agglomerative model that emits the preferred token near-deterministically, the less-preferred token probability decays under the fine-tuned response distribution. Moreover, since the initial pretraining distribution underestimates the preference of the less-desired token, intensifying the KL-regularization can only halt this undesired behavior by pinning the model to the pretraining response distribution, but cannot otherwise alleviate the issue and recover the desired response distribution. In the next section, we introduce our ILHF fine-tuning approach that is also pictured in Figure 1 and learns an inclusive model to emit the less-preferred token with the correct probability.
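To give a flavor of the REINFORCE+KL baselines used here, the following self-contained sketch runs a bare-bones policy-gradient update with a KL penalty on the two-token example. The learning rate, penalty coefficient, step count, and the way the comparison label is turned into a reward are illustrative choices, not the exact configuration behind the reported results.

```python
import numpy as np

rng = np.random.default_rng(0)
p_pre = np.array([0.77, 0.23])       # pretrained probs for tokens (-1, 1)
p_star = np.array([2 / 3, 1 / 3])    # population-preferred response distribution

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.log(p_pre)                # policy logits initialised from pretraining
lr, kl_coef = 0.05, 0.1
for _ in range(3000):
    p = softmax(theta)
    a, b = rng.choice(2, size=2, p=p)                  # two sampled responses
    pref_a = p_star[a] / (p_star[a] + p_star[b])       # Bradley-Terry preference
    win_a = float(rng.random() < pref_a)               # binary label L(A, B)
    ga = -p.copy(); ga[a] += 1.0                       # grad_theta log pi(a)
    gb = -p.copy(); gb[b] += 1.0                       # grad_theta log pi(b)
    grad = win_a * ga + (1.0 - win_a) * gb             # REINFORCE on the winner
    logratio = np.log(np.clip(p, 1e-12, None) / p_pre)
    grad -= kl_coef * p * (logratio - np.sum(p * logratio))  # KL(pi || pi_pre)
    theta += lr * grad
# the response distribution drifts toward near-determinism on the majority token
print("fine-tuned response distribution:", softmax(theta))
```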
## 3 Approach
In this section, we outline the precise manner in which LLMs violate the agent-environment interface typically observed throughout the reinforcement-learning literature before introducing an alternative approach to RLHF fine-tuning that obviates the need for separate reward learning and policy optimization phases. While the resulting ILHF optimization does constitute a supervised learning problem, we proceed to introduce an augmented fine-tuning procedure that leverages solution concepts for efficient exploration in reinforcement learning.
### Shattering the Agent-Environment Interface
We may formalize the sequential decision-making problem encapsulated by a LLM as a finite-horizon, episodic Markov Decision Process (MDP) [14; 69] defined by \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},\beta,H\rangle\). Specializing these MDP components to the language modeling problem, we may observe that the state space \(\mathcal{S}=\mathcal{V}^{+}\) represents a sampled prompt as well as the current response generated thus far, the action space \(\mathcal{A}=\mathcal{V}\) is the vocabulary of all possible tokens the LLM may generate, the initial state distribution \(\beta\in\Delta(\mathcal{V}^{+})\) represents an arbitrary distribution over prompts whose responses are adjusted over the course of the fine-tuning process, and the horizon \(H\in\mathbb{N}\) is the maximum allowed response length which still enables variable-length responses shorter than \(H\), akin to an episode of an MDP where the agent transitions to an absorbing terminal state before exhausting all \(H\) steps. Naturally, a particular LLM with parameters \(\phi\in\Re^{d}\) embodies a policy of this MDP \(\pi_{\phi}:\mathcal{S}\rightarrow\Delta(\mathcal{A})\). All that remains is to define the reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\Re\) providing evaluative feedback signals to the agent and the deterministic transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) yielding next states for each state-action pair.
Notably, the mechanics by which a single episode unfolds in this MDP violates the standard agent-environment interface, as the agent itself is a simulator that can sample rollouts for any given prompt. At the start of each episode, a new prompt is sampled \(s_{1}\sim\beta\) and, for each timestep \(h\in[H]\), a LLM samples a next token \(a_{h}\sim\pi_{\phi}(\cdot\mid s_{h})\) before appending it to the current response yielding a deterministic next state \(s_{h+1}=\mathcal{T}(s_{h},a_{h})\). More importantly, the fine-tuning approach introduced in the next section capitalizes on the realization that a suitable reward function for MDP \(\mathcal{M}\) can be induced directly from the policy itself as \(\mathcal{R}_{\phi}(s,a)=\log\left(\pi_{\phi}(a\mid s)\right)\). This again breaks the standard interface whereby rewards are computed within the confines of the environment and direct updates to policy parameters \(\phi\) do not explicitly change the underlying reward function. While the idea of inducing a reward function (or, more generally, a cumulant [81]) from a policy and vice versa is not new [13], the implications for LLMs in particular stand to be quite profound; namely, it establishes a direct relationship between reward learning and policy optimization that the current RLHF paradigm segregates into distinct phases. In the next section, we provide a novel fine-tuning algorithm that leverages the equivalence between reward learning and policy optimization to consolidate these latter two stages of the RLHF pipeline.
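A small sketch makes the blurred interface concrete: in the rollout below the environment dynamics reduce to token concatenation, and the per-step reward is read off the policy's own log-probability, \(\mathcal{R}_{\phi}(s,a)=\log\pi_{\phi}(a\mid s)\). The two callables are assumed model hooks introduced only for illustration.

```python
def rollout(sample_fn, logprob_fn, prompt, horizon):
    """One episode of the LLM-as-MDP view described above.

    sample_fn(state) draws a_h ~ pi_phi(. | s_h); logprob_fn(state, token)
    returns log pi_phi(a_h | s_h), which doubles as the reward R_phi(s_h, a_h).
    """
    state, rewards = list(prompt), []
    for _ in range(horizon):
        token = sample_fn(state)
        rewards.append(logprob_fn(state, token))   # reward induced by the policy itself
        state = state + [token]                    # deterministic transition T(s, a)
    return state, rewards
```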
### ILHF: A New Fine-Tuning Algorithm
The previous section sets the stage for interpreting the output of the LLM pretraining phase as producing reward function parameters \(\phi^{\mathrm{pre}}\) which analogously function as policy parameters. As pretraining typically occurs with a dataset that represents a crude approximation to proper written language (such as text scraped widely from the Internet), the corresponding reward function \(\mathcal{R}_{\phi^{\mathrm{pre}}}(s,a)=\log\left(\pi_{\phi^{\mathrm{pre}}}(a \mid s)\right)\) is misspecified and the associated reward-maximizing policy \(\overline{\pi}_{\phi^{\mathrm{pre}}}\) does not accurately reflect the desired response distribution. This begets the need for a loss function that refines reward function parameters to more accurately depict response preferences and, in doing so, refine the LLM policy parameters to induce a response distribution reflective of those preferences.
To that end, we offer the following loss function for optimizing reward function parameters and refining the LLM response distribution jointly. For any sampled prompt \(X\sim\beta\), denote two i.i.d. sampled responses as \(Y^{A}_{i},Y^{B}_{i}\sim\overline{\pi}_{\phi}(\cdot\mid X)\) which are judged by a human evaluator according to \(L_{i}\in\{0,1\}\). Define the binary probability distribution \(\mathcal{P}_{i}=[L_{i},\quad 1-L_{i}]\) induced from the human evaluator. Then, we may induce a complementary distribution over the two sampled LLM responses as
\[\mathcal{Q}^{\phi}_{i}=\text{Softmax}\left(\left[\mathcal{R}_{\phi}(X,Y^{A}_{i}),\mathcal{R}_{\phi}(X,Y^{B}_{i})\right]\right)=\left[\frac{\overline{\pi}_{\phi}(Y^{A}_{i}\mid X)}{\overline{\pi}_{\phi}(Y^{A}_{i}\mid X)+\overline{\pi}_{\phi}(Y^{B}_{i}\mid X)},\quad\frac{\overline{\pi}_{\phi}(Y^{B}_{i}\mid X)}{\overline{\pi}_{\phi}(Y^{A}_{i}\mid X)+\overline{\pi}_{\phi}(Y^{B}_{i}\mid X)}\right],\]
in accordance with the Bradley-Terry model for pairwise comparisons [16]. Then, our proposed fine-tuning loss aims to minimize the KL-divergence between the induced human label distribution \(\mathcal{P}\) and the current LLM response preference distribution \(\mathcal{Q}_{\phi}\): \(\mathcal{L}^{\mathrm{ILHF}}(\phi)=\mathbb{E}_{X_{i}}\left[D_{\mathrm{KL}} \left(\mathcal{P}_{i}\mid\mid\mathcal{Q}^{\phi}_{i}\right)\right]\)1. In Section 4, we provide an empirical confirmation that fine-tuning via ILHF does indeed yield an inclusive model by converging to the desired response distribution.
Footnote 1: We use the standard convention that \(0\cdot\log(0)=0\).
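Because \(\mathcal{P}_{i}\) is a point mass on the evaluator's label, the per-example KL term reduces to the cross-entropy of the labelled response under \(\mathcal{Q}^{\phi}_{i}\). The sketch below spells this out; `logprob_fn` (the full-response log-probability under \(\overline{\pi}_{\phi}\)) and the batch tuple layout are assumptions for illustration.

```python
import numpy as np

def ilhf_loss(logprob_fn, batch):
    """Monte-Carlo estimate of E[ KL(P_i || Q_i^phi) ] over a feedback batch.

    batch holds (x, y_a, y_b, label) with label = 1 when y_a was preferred;
    logprob_fn(x, y) is the model log-probability of the full response y.
    """
    losses = []
    for x, y_a, y_b, label in batch:
        la, lb = logprob_fn(x, y_a), logprob_fn(x, y_b)
        q_a = 1.0 / (1.0 + np.exp(lb - la))          # softmax of the two log-probs
        q_label = q_a if label == 1 else 1.0 - q_a
        losses.append(-np.log(q_label))              # KL(P || Q) with one-hot P
    return float(np.mean(losses))
```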
### Efficient Exploration
While our proposed ILHF loss function can be optimized via traditional supervised-learning techniques, a LLM can only utilize human feedback for the responses it generates, akin to a reinforcement-learning agent that may only perceive reward signals for the actions executed under its own policy. Given the vastness of the space of possible responses for each prompt, this implies that a LLM must also contend with the challenge of exploration in its MDP. Fortunately, the reinforcement-learning literature has long studied the problem of exploration and developed a wide range of solution concepts with varying degrees of statistical efficiency and computational tractability [36; 17; 35; 9; 77; 33; 59; 58; 3; 34; 49]. While future work will likely benefit from a deep and meticulous investigation of which concepts from reinforcement learning might fruitfully transfer over to improve the efficiency of LLM fine-tuning, we here offer one concrete suggestion through the use of uncertainty-based exploration.
Briefly, one principled exploration strategy represents and maintains an agent's epistemic uncertainty [21] in the underlying MDP or value function and uses it as a quantitative signal to foster exploratory behaviors in a manner that is both provably-efficient [56; 61; 58; 3; 63; 50] and
computationally-scalable [60; 62; 54; 23; 64; 23]. While the numerous flavors of uncertainty-based exploration schemes also appear with varying degrees of sophistication [72; 73; 50], we leave an investigation of more complex candidates to future work and, instead, focus our attention on those grounded in Thompson sampling [83; 74], which is both computationally simple and widely used in practice [44; 18; 28]. Posterior-sampling methods that employ Thompson sampling for reinforcement learning [78; 59; 57; 1; 3; 58; 49] operate in each time period by drawing a single statistically-plausible hypothesis for optimal behavior from the agent's beliefs and proceed to act optimally with respect to the single sample as if it reflects reality. The simplest candidate for maintaining an agent's beliefs and refining them as data accumulates is via ensemble sampling [48], which maintains a finite number of randomly-initialized models that can be sampled in each time period and optimized via bootstrapped mini-batches of data [24]. Specifically, for each prompt, an Ensemble-ILHF agent samples one model from its ensemble to generate the two responses which induce human feedback.
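To make the exploration recipe concrete, here is a minimal ensemble-sampling skeleton in the spirit described above: a handful of randomly perturbed copies of the pretrained parameters, one of which is drawn per prompt (the Thompson-sampling step), with every copy trained on its own bootstrap resample of the feedback data. The class and argument names, the Gaussian perturbation, and the `sgd_step` hook are all illustrative assumptions.

```python
import numpy as np

class EnsembleSampler:
    """Ensemble-based approximation to posterior (Thompson) sampling."""

    def __init__(self, phi_pre, n_particles=10, prior_std=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.particles = [phi_pre + prior_std * self.rng.standard_normal(phi_pre.shape)
                          for _ in range(n_particles)]

    def sample_particle(self):
        # act according to one statistically plausible hypothesis
        return self.particles[self.rng.integers(len(self.particles))]

    def update(self, data, sgd_step):
        # each particle trains on its own bootstrapped mini-batch of feedback
        for k, phi in enumerate(self.particles):
            idx = self.rng.integers(len(data), size=len(data))
            self.particles[k] = sgd_step(phi, [data[i] for i in idx])
```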
## 4 Experiment Setup
We discuss in this section how we simulate agent-human interactions for all agents used in our empirical evaluation. A reader familiar with reinforcement learning should interpret this section as describing a particular choice of MDP. While language models typically involve a large number of tokens in the vocabulary, we will restrict our scope to exactly two: \(-1\) and \(1\). Typical language models also include a STOP token in the vocabulary that allows for variable-length responses; instead, our synthetic language simply assumes that all response lengths are homogeneous. This is intended to offer a microcosm for studying methods that process and generate tokenized language data. While a reader may feel discouraged at the prospect of results obtained at such a scale, orders of magnitude smaller than what is currently driving practical and deployed models in this space, our goal throughout this work is to leverage such simplicity in order to convey maximal clarity. Moreover, if ILHF agents can be shown to bear fruit in such a basic setting, the potential benefits of tackling the same challenges we outline at a larger scale could be far more substantial.
### A Token-Generating Process & Pretraining
Consider a synthetic, stateful token-generating process that is governed by a vector \(\mu\in\Re^{d}\), a matrix \(W\in\Re^{d\times d}\), and a vector \(U\in\Re^{d}\). The process begins in an initial state \(S_{0}\in\Re^{d}\) sampled from a fixed distribution and, at any time \(t\), given the state \(S_{t}\in\Re^{d}\) of this process, a next token \(X_{t+1}\in\{-1,1\}\) and state \(S_{t+1}\in\Re^{d}\) are generated according to
\[X_{t+1}=\left\{\begin{array}{ll}1&\text{w.p. }\frac{\exp\left(\mu^{\top}S_{t} \right)}{1+\exp(\mu^{\top}S_{t})}\\ -1&\text{otherwise}\end{array}\right.,\qquad S_{t+1}=\tanh(WS_{t}+UX_{t+1}).\]
Note that \(\tanh\) is applied component-wise. This generating process can be interpreted as a recurrent neural network with a single hidden layer and a softmax output; indeed, as discussed in the Appendix, all the agents we evaluate adhere to this exact network architecture, only with a larger hidden dimension. To keep things simple, all prompts and responses will be of a fixed length \(\tau\in\mathbb{N}\). One might wonder why this particular token generating process is worth further study. While it is true that there are numerous stateful stochastic processes one could use to model token generation, the one presented above is clearly among the simpler choices while still retaining a sufficient degree of nontrivial structure.
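The generating process is straightforward to simulate; the sketch below follows the update equations above, with the hidden dimension, initial-state distribution, and parameter scales chosen arbitrarily for illustration.

```python
import numpy as np

def generate_sequence(mu, W, U, s0, length, rng):
    """Sample tokens X_t in {-1, +1} and propagate the hidden state S_t."""
    s, tokens = np.array(s0, dtype=float), []
    for _ in range(length):
        p_plus = 1.0 / (1.0 + np.exp(-mu @ s))   # P(X_{t+1} = +1 | S_t)
        x = 1 if rng.random() < p_plus else -1
        s = np.tanh(W @ s + U * x)               # S_{t+1} = tanh(W S_t + U X_{t+1})
        tokens.append(x)
    return np.array(tokens), s

rng = np.random.default_rng(0)
d = 8
mu = rng.standard_normal(d)
W, U = 0.5 * rng.standard_normal((d, d)), rng.standard_normal(d)
prompt, state = generate_sequence(mu, W, U, rng.standard_normal(d), length=16, rng=rng)
```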
We offer a preliminary experiment to make the preceding statement precise; namely, that the output token \(X_{t}\) at each time \(t\) exhibits long-term dependencies on the history of tokens. Consequently, it suffices for the next output logit, i.e., the probability of the next token being 1, to depend on a relatively long history of logits, since the distribution of the next token is completely determined by the next logit. To demonstrate the dependence, we plot the autocorrelation function of logits in Figure 2 (please see the Appendix for the exact autocorrelation formula).
Figure 2: Long-tailed dependence

As the goal of our token generating process is to serve as a simplified surrogate to natural language, an important distinction arises between _ideal text_ and _shadow text_. One should think of ideal text as exemplary of well-written text that perfectly aligns with the standards of human evaluators. This excludes, at the bare minimum, any garbled or malformed token sequences, and more broadly, those that are in discordance with ideal human-level responses; such malign or malformed text constitutes instances of shadow text, (sometimes crude) approximations of ideal text which, for example, appear frequently throughout the Internet and are often intertwined with or appear alongside ideal text.
We will think of our aforementioned token generating process as one that yields _ideal_ text so as to be emblematic of proper, linguistically-correct natural language reflective of human preferences. Modern approaches to pretraining, however, through their reliance on textual data scraped from the Internet, do not rely on text generated by this process but instead on shadow text, an approximation which a downstream agent must use for learning. For example, the text may not be written by an eloquent writer, may express harmful thoughts, or espouse erroneous responses. Maintaining fidelity to this reality, we assume that this shadow process uses the same matrix \(W\in\Re^{d\times d}\) and vector \(U\in\Re^{d}\) but an approximation \(\theta\in\Re^{d}\) of the vector \(\mu\).
We conclude this section with a brief discussion of how pretraining with shadow text transpires, clarifying what information all agents in our evaluation will be initialized with at the start of fine-tuning. The pretraining dataset \(\mathcal{D}=\{X_{i,1:2\tau}\}_{i=1}^{N}\) consists of documents generated according to the shadow process. Each document can be seen as a concatenated prompt-response pair. Using this dataset, an initial policy \(\phi\) is pretrained to maximize the log-probability of predicting the next tokens given the corresponding preceding strings, i.e., to solve \(\min_{\phi}\mathcal{L}^{\text{pre}}(\phi)\) with \(\mathcal{L}^{\text{pre}}(\phi)=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\pi_{\phi}(X_{i,1})+\sum_{t=2}^{2\tau}\log\pi_{\phi}(X_{i,t}|X_{i,1:t-1})\right),\) in a manner that resembles the behavioral cloning [11, 68] algorithm used for imitation learning. We denote \(\phi^{\text{pre}}\) as the policy parameters obtained by pretraining.
### Simulating Agent-Human Interactions
We take each \(i\)th prompt \(X_{i,1:\tau}\) to be sampled independently from our token generating process but with \(\theta\) taking the place of \(\mu\). During training, prompts to language models are often truncated from streams of shadow text; when deployed in practice, one also does not assume that prompts are ideal text free of typographical errors. These are echoed by our use of the approximate parameter \(\theta\) in prompt generation. Yet another reason that motivates using the shadow process to generate prompts is that, the agent ought to obtain all relevant information about the error \(\theta-\mu\) from human feedback. Thus, prompts should not reveal information about the parameter \(\mu\) of the ideal process.
We consider binary human feedback, where each bit indicates a preference between two responses. Given the \(i\)th prompt \(X_{i,1:\tau}\), we generate an associated state sequence \(S_{i,1:\tau}\) according to \(S_{i,t+1}=\tanh(WS_{i,t}+UX_{i,t})\). Similarly, for each \(i\) and \(b\in\{0,1\}\), we let \(S_{i,0}^{(b)}=S_{i,\tau}\) and generate a state sequence \(S_{i,1:\tau}^{(b)}\) associated with response \(b\) according to \(S_{i,t+1}^{(b)}=\tanh(WS_{i,t}^{(b)}+UX_{i,t}^{(b)})\).
For each \(i\), denote the likelihood function that the ideal text generating process assigns to each response \(b\in\{0,1\}\) by \(\ell_{i,b}(\mu)=\prod_{t=1}^{\tau}\left(\frac{1-X_{i,t}^{(b)}}{2}\cdot\frac{1}{1+\exp(\mu^{\top}S_{i,t}^{(b)})}+\frac{1+X_{i,t}^{(b)}}{2}\cdot\frac{\exp(\mu^{\top}S_{i,t}^{(b)})}{1+\exp(\mu^{\top}S_{i,t}^{(b)})}\right),\) where the prefactors \(\frac{1\mp X_{i,t}^{(b)}}{2}\) select the appropriate Bernoulli branch for tokens valued in \(\{-1,1\}\).
Then, the binary preference \(B_{i}\) of a random individual is sampled according to \(B_{i}=\begin{cases}0&\text{with probability }\frac{\ell_{i,0}(\mu)}{\ell_{i,0}(\mu)+\ell_{i,1}(\mu)}\\ 1&\text{with probability }\frac{\ell_{i,1}(\mu)}{\ell_{i,0}(\mu)+\ell_{i,1}(\mu)}\end{cases},\) which is the classic Bradley-Terry model for the human preference between the responses \(X_{i,1:\tau}^{(1)}\) and \(X_{i,1:\tau}^{(0)}\). The human feedback we consider is based on the ideal process that employs parameter \(\mu\), in the spirit of aligning language model outputs to human preference. We assume that the human annotator communicates relatively high-quality signals, with the only error stemming from random sampling.
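The feedback simulator therefore needs only the response likelihoods under the ideal parameter \(\mu\). The sketch below writes the likelihood with an explicit branch on the token sign (equivalent to the \(\frac{1\mp X}{2}\) prefactors) and then draws the Bradley-Terry label; the function names and the precomputed-state inputs are assumptions about how the pieces fit together.

```python
import numpy as np

def response_likelihood(mu, tokens, states):
    """Likelihood the ideal process assigns to a response, token by token."""
    like = 1.0
    for x, s in zip(tokens, states):
        p_plus = 1.0 / (1.0 + np.exp(-mu @ s))
        like *= p_plus if x == 1 else (1.0 - p_plus)
    return like

def sample_preference(mu, resp0, states0, resp1, states1, rng):
    """Bradley-Terry label: B = 1 with probability l_1 / (l_0 + l_1)."""
    l0 = response_likelihood(mu, resp0, states0)
    l1 = response_likelihood(mu, resp1, states1)
    return int(rng.random() < l1 / (l0 + l1))
```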
### Evaluation Metrics
A key advantage of studying a simple data-generating process is the ability to, for any prompt generated as described above, tractably compute the KL-divergence between the ideal distribution of responses and the distribution of responses produced by any agent as an evaluation metric. Concretely, the ideal process generates a set of independently sampled sequences \(\{X_{i,1:2\tau}\}_{i=1}^{N}\), which can also be viewed as concatenated prompt-response pairs, as well as the corresponding set of state sequences \(S_{i,1:2\tau}\). Like the token generating process, a stateful agent then generates corresponding agent state sequences \(\{\widehat{S}_{i,1:2\tau}\}_{i=1}^{N}\). For each \(i\), let \(\widehat{\ell}_{i}(\mu)=\prod_{t=\tau+1}^{2\tau}\left(\frac{1-X_{i,t}}{2}\cdot\frac{1}{1+\exp(\mu^{\top}S_{i,t})}+\frac{1+X_{i,t}}{2}\cdot\frac{\exp(\mu^{\top}S_{i,t})}{1+\exp(\mu^{\top}S_{i,t})}\right)\) denote the likelihood function that the ideal text generating process assigns to the \(i\)-th response, and let \(\widehat{\ell}_{i}(\phi)\) denote the likelihood function that the agent's model assigns to the \(i\)-th response, which is identical to \(\widehat{\ell}_{i}(\mu)\) with \(\phi\) swapped for \(\mu\) and \(\widehat{S}_{i,t}\) in place of \(S_{i,t}\). The Monte-Carlo estimate of the KL-divergence can then be expressed as \(\mathcal{S}^{\text{KL}}(\phi)=\frac{1}{N}\sum_{i=1}^{N}\left(\ln\widehat{\ell}_{i}(\mu)-\ln\widehat{\ell}_{i}(\phi)\right).\) Note that with the token-generating process introduced in Section 4.1, there exists an agent that attains zero KL-divergence.
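In code, the metric is a plain Monte-Carlo average of log-likelihood differences over responses drawn from the ideal process. The helper below works in log space for numerical stability; its argument layout (response tokens paired with the ideal and agent state sequences) is an assumption for illustration.

```python
import numpy as np

def log_likelihood(param, tokens, states):
    """Log-likelihood of a response under the logistic read-out used above."""
    ll = 0.0
    for x, s in zip(tokens, states):
        z = param @ s
        ll += -np.logaddexp(0.0, -z) if x == 1 else -np.logaddexp(0.0, z)
    return ll

def kl_metric(mu, phi, samples):
    """(1/N) sum_i [ ln l_i(mu) - ln l_i(phi) ] over ideal-process responses."""
    gaps = [log_likelihood(mu, toks, s_ideal) - log_likelihood(phi, toks, s_agent)
            for toks, s_ideal, s_agent in samples]
    return float(np.mean(gaps))
```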
### Inclusive Agents
As a concrete instantiation of our ILHF fine-tuning algorithm, we consider an agent that computes a maximum _a posteriori_ (MAP) estimate of \(\phi\) at each round of human interaction. The agent initializes its parameter \(\phi_{0}\) with \(\phi^{\text{pre}}\); for the \(k\)th interaction, it first samples responses from the token generating process with parameter \(\phi_{k}\) and then aims to minimize \(\mathcal{L}^{\text{ILHF}}(\phi)\) which, in the context of our particular token-generating process, simplifies as \(\mathcal{L}^{\text{ILHF}}(\phi)=-\frac{1}{N}\sum_{i=1}^{N}\left(\ln\ell_{i,b_ {i}}(\phi)-\ln(\ell_{i,0}(\phi)+\ell_{i,1}(\phi))\right).\) This agent operates in a greedy fashion, using the MAP estimate of parameters to generate responses that humans can subsequently rate. Note that in our simplified token generating process, we take only one optimization step between interactions, whereas one may engage a significantly larger number of updates in larger models. The full procedure is shown as Algorithm 1 in the Appendix.
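Written against the same log-likelihood helper as the evaluation metric, the simplified ILHF loss that the MAP agent optimizes takes the following form; the interaction tuple layout is again an illustrative assumption, and in practice the agent takes a single gradient step on this quantity per round.

```python
import numpy as np

def ilhf_loss_token_process(phi, interactions, log_likelihood):
    """-(1/N) sum_i [ ln l_{i,B_i}(phi) - ln(l_{i,0}(phi) + l_{i,1}(phi)) ].

    Each interaction holds the two candidate responses with their agent state
    sequences and the sampled human label:
    ((resp0, states0), (resp1, states1), label).
    """
    losses = []
    for (resp0, states0), (resp1, states1), label in interactions:
        ll0 = log_likelihood(phi, resp0, states0)
        ll1 = log_likelihood(phi, resp1, states1)
        chosen = ll1 if label == 1 else ll0
        losses.append(-(chosen - np.logaddexp(ll0, ll1)))
    return float(np.mean(losses))
```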
As an alternative ILHF agent design that leverages exploration strategies to accelerate convergence, we consider a second fine-tuning algorithm, Ensemble-ILHF, that fits an ensemble of models that approximates a posterior distribution. Before fine-tuning starts, an ensemble of parameters are drawn independently from a normal distribution centered around the pretrained parameter \(\phi^{\text{pre}}\). The agent's belief about the variance of the posterior distribution is captured by a covariance matrix \(\Sigma\) which, over the course of fine-tuning, is further conditioned on the observed human feedback data. The full procedure for this ensemble agent is shown as Algorithm 2 in the Appendix.
## 5 Results and Discussion
In this section, we present two sets of computational studies that compare REINFORCE, ILHF, and Ensemble-ILHF agents. The first set of experiments aim to illustrate how our approach produces an inclusive agent for the didactic example introduced in Section 2.3. The second set of experiments centers around the token generating process introduced in the previous section, again demonstrating how our fine-tuning procedure yields an inclusive model that captures the desired response distribution while also highlighting the benefits of efficient, uncertainty-based exploration schemes.
In both experiments, all agents follow the same pretraining protocol with 1,000 pretraining samples generated using a shadow process with perturbation variance of 0.3 from the ideal process, yielding identical parameters \(\phi^{\text{pre}}\) at the start of fine-tuning for all agents. All agents use the Adam optimizer [37] with a learning rate of 0.001 for both pretraining and finetuning. In each finetuning episode, exactly 64 prompts are provided to the agents to respond to and gain feedback from our synthetic human labels. All error bars and shaded areas correspond to 1 standard error over 20 seeds.
### Didactic Experiment
Figure 3 provides a continuation of the preliminary results shown in Figure 1 for the didactic example of Section 2.3 only now showing the KL-divergence metric introduced in Section 4.3. We first pretrain for 200 epochs before starting the finetuning phase. Our KL-divergence metric shows that ILHF is able to learn the ground-truth token distribution, whereas all REINFORCE agents (acting as proxies for the current RLHF paradigm) fail regardless of the KL penalty scale. Notably, the gap between these RLHF agents and our proposed ILHF agent is entirely a function of the divergence
between the pretraining and desired response distributions; thus, whenever fine-tuning must correct a more significant shift between the two response distributions than the one shown here, the gap between ILHF and RLHF could be significantly larger.
### Efficient Exploration via Ensemble Sampling
Our next set of experiments follows the experiment setup introduced in Section 4. All agents are first pretrained over 20 episodes, then finetuned over 100 episodes of human interactions, as represented by the ideal data generating process. Note that due to the online nature of the REINFORCE algorithm, during each interaction episode, only one gradient update is performed for each agent model. In Figure 4(a), we compare the KL-divergence between the response distribution of the ideal process and the response distribution of various agents. In addition to the KL-penalized REINFORCE agents and ILHF, we also examine the performance of ILHF equipped with an ensemble of models (also called particles) to facilitate exploration, as introduced in Section 3.3. The figure shows that only ILHF and Ensemble-ILHF can learn the ideal process, while all REINFORCE agents diverge from it regardless of the KL penalty scale. At the same time, using an ensemble to account for epistemic uncertainty in an inclusive agent significantly accelerates learning. We provide an ablation study on problem-specific parameters in the Appendix.
Although our synthetic data-generating process allows us to exactly evaluate the KL-divergence between the response distribution of the ideal process and that of an agent, such a computation is not viable in general. In practice, a head-to-head competition between two agents is typically involved to determine which one performs better. To echo such a procedure, we also carry out a head-to-head competition between our best Ensemble-ILHF agent with 50 particles and all other agents considered in this experiment after finetuning. Since our goal is to select inclusive agents, instead of employing the agglomerative criterion discussed in Section 2.3, we consider the _inclusive score_ introduced in [8], which selects the agent that better represents the distribution of human preferences. For each agent being compared to the Ensemble-ILHF agent with 50 particles, we perform a normalization procedure over the inclusive scores that produces a single statistic, the _inclusive score ratio_. The competing agent wins if the ratio is greater than 1 and loses otherwise. Figure 4(b) indicates that Ensemble-ILHF 50 consistently outperforms all other agents in these head-to-head comparisons, with statistically significant differences.
Figure 4: Ensembles improve ILHF, which greatly improves on REINFORCE with KL penalty
Figure 3: Reinforce+KL penalty is not inclusive
## 6 Conclusion
Excitement around the capabilities and prospects of LLMs continues to build, and their prevalence in machine learning will only grow. Notably, these successes are driven by the RLHF paradigm, which hinges on learning a separate reward model that sits distinct from the LLM itself. To mitigate the challenges of such agglomerative models that collapse towards a single best response, we have proposed Inclusive Learning from Human Feedback (ILHF) as an alternative LLM fine-tuning approach which leverages insights from the field of reinforcement learning to produce inclusive models that preserve the population response distribution. Future work in this area may benefit from incorporating other reinforcement learning ideas to design and optimize LLMs in a more statistically-efficient manner.
|
2310.15485 | A Berry-Esseen Type Theorem for Finite Free Convolution | We prove that the rate of convergence for the central limit theorem in finite
free convolution is of order $n^{-1/2}$ | Octavio Arizmendi, Daniel Perales | 2023-10-24T03:30:19Z | http://arxiv.org/abs/2310.15485v1 | # A Berry-Esseen type theorem for finite free convolution
###### Abstract.
We prove that the rate of convergence for the central limit theorem in finite free convolution is of order \(n^{-1/2}\).
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 734922.
Thus, for two polynomials of degree \(d\), \(p\) and \(q\), let us define the distance between them to be \(L(p,q):=d_{L}(\mu_{p},\mu_{q})\), where \(d_{L}\) is the Levy distance and the measures \(\mu_{p}\) and \(\mu_{q}\) are the distributions of \(p\) and \(q\), respectively.
In this language we can state our contribution as follows.
**Theorem 1.1**.: _Let \(d\in\mathbb{N}\) and let \(p\) be a real polynomial of degree \(d\) such that the first two moments of \(\mu_{p}\) are \(m_{1}=0\) and \(m_{2}=1\). Then, there exists an **absolute constant**\(C_{d}\), only depending on \(d\), such that for all \(n>0\),_
\[L\left(D_{1/\sqrt{n}}(p^{\boxplus_{d}n}),D_{1/\sqrt{d}}(H_{d})\right)<\frac{C_{d}}{\sqrt{n}}.\]
The main tool to prove the above rate of convergence are the cumulants for finite free convolution, as we defined in [1]. These cumulants give a combinatorial approach to investigate this convolution and its relation to free probability. In particular we showed that finite free cumulants approach free cumulants, providing a combinatorial perspective to the fact that finite free convolution approaches free convolution in the limit. Using these cumulants we were able to show that some properties of free convolution are valid already in the finite free case. The above theorem is another instance of the fact that many properties in free probability already appear in the finite level.
Apart from this introduction this note consists of two sections. Section 2 gives the preliminaries for the theory of finite free probability and in Section 3 we give the proof of the main theorem, Theorem 1.1.
## 2. Preliminaries
We give very basic preliminaries on finite free convolution we refer to [1, 7] for details.
### Finite Free Convolution
For two polynomials, \(p(x)=\sum_{i=0}^{d}x^{d-i}(-1)^{i}a_{i}^{p}\) and \(q(x)=\sum_{i=0}^{d}x^{d-i}(-1)^{i}a_{i}^{q}\), the finite free additive convolution of \(p\) and \(q\) is given by
\[p(x)\boxplus_{d}q(x)=\sum_{r=0}^{d}x^{d-r}(-1)^{r}\sum_{i+j=r}\frac{(d-i)!(d-j)!}{d!(d-i-j)!}a_{i}^{p}a_{j}^{q}.\]
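The coefficient formula above translates directly into a short routine. The sketch below encodes a degree-\(d\) monic polynomial by its coefficient vector \((a_{0},\ldots,a_{d})\) in the sign convention of the text (so \(a_{0}=1\)); this encoding and the helper names are choices made for illustration only.

```python
import numpy as np
from math import factorial

def finite_free_convolution(a_p, a_q):
    """Coefficients a_r of p [+]_d q from the a-coefficients of p and q."""
    d = len(a_p) - 1
    a_out = np.zeros(d + 1)
    for r in range(d + 1):
        total = 0.0
        for i in range(r + 1):
            j = r - i
            w = factorial(d - i) * factorial(d - j) / (factorial(d) * factorial(d - i - j))
            total += w * a_p[i] * a_q[j]
        a_out[r] = total
    return a_out

def roots_from_a(a):
    """Roots of p(x) = sum_i x^(d-i) (-1)^i a_i, via numpy."""
    coeffs = [((-1) ** i) * ai for i, ai in enumerate(a)]
    return np.roots(coeffs)
```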
The finite \(R\)-transform of a polynomial is defined by
\[\mathcal{R}_{p}^{d}(s)\equiv-\frac{1}{d}\frac{\partial}{\partial s}\ln\left( \sum_{i=0}^{d}\frac{(-d)^{i}a_{i}^{p}}{(d)_{i}}s^{i}\right)\qquad\text{mod }[s^{d}], \tag{2.1}\]
when \(p\) is the monic polynomial \(p(x)=\sum_{i=0}^{d}x^{d-i}(-1)^{i}a_{i}^{p}\).
We consider the truncated \(R\)-transform given by the sum of the first \(d\) terms in the series expansion of \(\mathcal{R}_{p}^{d}\), which will have the cumulants as coefficients.
**Definition 2.1** ([1]).: Let \(p\) be a monic polynomial of degree \(d\), and suppose the \(\mathcal{R}_{p}^{d}(s)\) satisfies
\[\mathcal{R}_{p}^{d}(s)\equiv\sum_{j=0}^{d-1}\kappa_{j+1}(p)s^{j}\quad\text{ mod }[s^{d}].\]
1. We call the sum of the first \(d\) terms in the series expansion of \(\mathcal{R}^{d}\) the _truncated \(R\)-transform_ and denote by \(\tilde{\mathcal{R}}^{d}_{p}(s)\), i.e. \[\tilde{\mathcal{R}}^{d}_{p}(s):=\sum_{j=0}^{d-1}\kappa_{j+1}(p)s^{j}.\]
2. The numbers \(\kappa_{1}(p),\kappa_{2}(p),\ldots,\kappa_{d}(p)\) will be called the finite free cumulants. To simplify notation we will omit the dependence on \(p\) when we deal with only one polynomial.
We want to use the combinatorial framework in terms of moments for these cumulants. Hence, for a polynomial \(p\) with roots \(\lambda_{1},\ldots,\lambda_{d}\) we define the moments of \(p\) by the formula \(m_{n}(p)=\frac{1}{d}\sum_{i=1}^{d}\lambda_{i}^{n}\).
These finite free cumulants satisfy the following properties which are the analog of the properties in the axiomatization of cumulants by Lehner [6], in non-commutative probability.
1. **Polynomial in the first \(n\) moments:**\(\kappa_{n}(p)\) is a polynomial in the first \(n\) moments of \(p\) with leading term \[\frac{d^{n}}{(d)_{n}}m_{n}(p).\]
2. **Homogeneity:** for all monic polynomials \(p\) and \(\lambda\neq 0\) we have \[\kappa_{n}(D_{\lambda}(p))=\lambda^{n}\kappa_{n}(p).\]
3. **Additivity:** for all monic polynomials \(p\) and \(q\), we have \[\kappa_{n}(p\boxplus_{d}q)=\kappa_{n}(p)+\kappa_{n}(q).\]
### Moment-cumulant formula
Moment-cumulant formulas involve summing over partitions on the set \([n]\). Let us introduce this definition and some notation.
**Definition 2.2**.: We call \(\pi=\{V_{1},...,V_{r}\}\) a **partition** of the set \([n]:=\{1,2,\ldots,n\}\) if \(V_{i}\) (\(1\leq i\leq r\)) are pairwise disjoint, non-void subsets of \([n]\), such that \(V_{1}\cup V_{2}...\cup V_{r}=\{1,2,\ldots,n\}\). We call \(V_{1},V_{2},\ldots,V_{r}\) the **blocks** of \(\pi\). The number of blocks of \(\pi\) is denoted by \(|\pi|\). We will denote the set of partitions of \([n]\) by \(\mathcal{P}(n)\).
The set \(\mathcal{P}(n)\) can be equipped with the partial order \(\leq\) of reverse refinement (\(\pi\leq\sigma\) if and only if every block of \(\pi\) is completely contained in a block of \(\sigma\)). With this order the minimum is given by the partition with \(n\) blocks, \(0_{n}=\{\{1\},\{2\},\cdots,\{n\}\}\), and the maximum is given by the partition with \(1\) block, \(1_{n}=\{\{1,2,\cdots,n\}\}\).
Thus one can consider the incidence algebra of \(\mathcal{P}(n)\). For two partitions \(\sigma,\rho\) in the set of partitions \(\mathcal{P}(n)\) the Mobius function is given by
\[\mu(\sigma,\rho)=(-1)^{|\sigma|-|\rho|}(2!)^{r_{3}}(3!)^{r_{4}}\cdots((n-1)!)^ {r_{n}},\]
where \(r_{i}\) is the number of blocks of \(\rho\) that contain exactly \(i\) blocks of \(\sigma\). In particular, for \(\sigma=0_{n}\) we have
\[\mu(0_{n},\rho)=(-1)^{n-|\rho|}(2!)^{t_{3}}(3!)^{t_{4}}\cdots((n-1)!)^{t_{n}},\]
where \(t_{i}\) is the number of blocks of \(\rho\) of size \(i\).
Given a sequence of complex numbers \(f=\{f_{n}\}_{n\in\mathbb{N}}\) we may extend \(f\) to partitions in a multiplicative way by the formula
\[f_{\pi}=f_{|V_{1}|}f_{|V_{2}|}\cdots f_{|V_{r}|},\]
where \(V_{1},\ldots,V_{r}\) are the blocks of \(\pi.\) In this note we will frequently use the multiplicative extensions of the Pochhammer sequence \((d)_{n}=(d)(d-1)\cdots(d-n+1)\) and the factorial sequence \(n!,\) whose extensions will be denoted by \((d)_{\pi}\) and \(N!_{\pi},\) respectively.
In [1], we gave formulas that relate the moments and coefficients of a polynomial with its finite free cumulants. First, we have a formula that writes coefficients in terms of cumulants.
**Proposition 2.3** (Coefficient-cumulant formula).: _Let \(p(x)=\sum_{i=0}^{d}x^{d-i}(-1)^{i}a_{i}\) be a polynomial of degree \(d\) and let \((\kappa_{n})_{n=1}^{d}\) be its finite free cumulants. The following formulas hold._
\[a_{n}=\frac{(d)_{n}}{d^{n}n!}\sum_{\pi\in\mathcal{P}(n)}d^{|\pi|}\mu(0_{n},\pi )\kappa_{\pi},\qquad n\in\mathbb{N}. \tag{2.2}\]
We also have a moment-cumulant formula for finite free cumulants:
**Proposition 2.4**.: _Let \(p\) be a monic polynomial of degree \(d\) and let \((m_{n})_{n=1}^{\infty}\) and \((\kappa_{n})_{n=1}^{d}\), be the moments and cumulants of \(p\), respectively. Then_
\[\kappa_{n}=\frac{(-d)^{n-1}}{(n-1)!}\sum_{\sigma\in\mathcal{P}(n)}d^{|\sigma| }\mu(0,\sigma)m_{\sigma}\sum_{\pi\geq\sigma}\frac{\mu(\pi,1_{n})}{(d)_{\pi}},\]
_for \(n=1,\ldots,d\) and_
\[m_{n}=\frac{(-1)^{n}}{d^{n+1}(n-1)!}\sum_{\sigma\in\mathcal{P}(n)}d^{|\sigma| }\mu(0,\sigma)\kappa_{\sigma}\sum_{\pi\geq\sigma}-\mu(\pi,1_{n})(d)_{\pi},\]
_for \(n\in\mathbb{N}\)._
**Remark 2.5**.: The explicit moment-cumulant formulas of the first three finite cumulants are
\[\kappa_{1} = m_{1},\qquad\kappa_{2}=\frac{d}{d-1}(m_{2}-m_{1}^{2}),\] \[\kappa_{3} = \frac{d^{2}}{(d-1)(d-2)}(2m_{1}^{3}-3m_{1}m_{2}+m_{3}),\]
and the explicit moment-cumulant formulas of the first three finite moments are
\[m_{1} = \kappa_{1},\qquad m_{2}=\frac{d-1}{d}\kappa_{2}+\kappa_{1}^{2},\] \[m_{3} = \frac{(d-1)(d-2)}{d^{2}}\kappa_{3}+\frac{3(d-1)}{d}\kappa_{2} \kappa_{1}+\kappa_{1}^{3}.\]
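The low-order relations in Remark 2.5 are easy to transcribe and are convenient for numerical sanity checks; the function names below are ours, but the formulas are copied directly from the remark.

```python
def finite_free_cumulants(m1, m2, m3, d):
    """First three finite free cumulants from the first three moments."""
    k1 = m1
    k2 = d / (d - 1) * (m2 - m1 ** 2)
    k3 = d ** 2 / ((d - 1) * (d - 2)) * (2 * m1 ** 3 - 3 * m1 * m2 + m3)
    return k1, k2, k3

def moments_from_cumulants(k1, k2, k3, d):
    """Inverse relations, also from Remark 2.5."""
    m1 = k1
    m2 = (d - 1) / d * k2 + k1 ** 2
    m3 = (d - 1) * (d - 2) / d ** 2 * k3 + 3 * (d - 1) / d * k2 * k1 + k1 ** 3
    return m1, m2, m3
```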
### Convergence of polynomials and Levy distance
In the setting of [1, 7], convergence of polynomials is pointwise convergence of the coefficients. We prefer to consider the weak convergence of the induced measures since it is common with the free probability setting. Thus, for a polynomial \(p\), with roots \(\lambda_{1},\lambda_{2},\ldots,\lambda_{d}\), we define its distribution \(\mu_{p}\) as the uniform measure on the roots of \(p\), \(\mu_{p}=\frac{1}{d}\sum_{i}\delta_{\lambda_{i}}\).
To quantify this convergence we use the Levy distance
\[d_{L}(\mu,\nu):=\inf\{\epsilon>0\:|\:F(x-\epsilon)-\epsilon\leq G(x)\leq F(x+ \epsilon)+\epsilon\ \ \text{for all}\ x\in\mathbb{R}\},\]
where \(F\) and \(G\) are the cumulative distribution functions of \(\mu\) and \(\nu\) respectively.
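For numerical experiments one can approximate the Levy distance between two root distributions by scanning the defining inequality on a grid; the sketch below does exactly that and is only a crude discretization of the infimum, with the grid resolutions chosen arbitrarily.

```python
import numpy as np

def levy_distance(roots_p, roots_q, eps_grid=None):
    """Grid approximation of d_L between two empirical root distributions."""
    rp, rq = np.sort(np.asarray(roots_p)), np.sort(np.asarray(roots_q))
    xs = np.linspace(min(rp[0], rq[0]) - 1.0, max(rp[-1], rq[-1]) + 1.0, 2000)
    F = lambda x: np.searchsorted(rp, x, side="right") / rp.size
    G = lambda x: np.searchsorted(rq, x, side="right") / rq.size
    grid = eps_grid if eps_grid is not None else np.linspace(0.0, 2.0, 4001)
    for eps in grid:
        if np.all(F(xs - eps) - eps <= G(xs)) and np.all(G(xs) <= F(xs + eps) + eps):
            return float(eps)
    return float(grid[-1])
```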
## 3. Proof of Theorem 1.1
Before going in to the proof of the main theorem we prove a couple of lemmas about the support and cumulants of polynomials with mean \(0\) and variance \(1\).
**Lemma 3.1**.: _Let \(p\) be a real polynomial of degree \(d\) with \(\kappa_{1}=0\) and \(\kappa_{2}=1\). Then the support of \(p\) is contained in \((-\sqrt{d-1},\sqrt{d-1})\)._
Proof.: If \(\kappa_{1}=0\) and \(\kappa_{2}=1\) then
\[1=\kappa_{2}=\frac{d}{d-1}m_{2}=\frac{1}{d-1}\sum_{i=1}^{d}\lambda_{i}^{2}.\]
This means that \(\lambda_{i}^{2}<d-1\) (the inequality is strict because \(m_{1}=0\) forces at least one other root to be non-zero) and thus \(|\lambda_{i}|<\sqrt{d-1}\) for all \(i=1,\ldots,d\).
**Lemma 3.2**.: _Let \(p\) be a real polynomial of degree \(d\) with \(\kappa_{1}=0\) and \(\kappa_{2}=1\). Then there exists a constant \(c_{d}\), depending only on \(d\), such that \(\max_{2\leq s\leq d}|\kappa_{s}(p)|<c_{d}\)._
Proof.: By the previous lemma \(m_{n}\leq(d-1)^{n}\) and then \(\max_{2\leq s\leq d}|m_{s}(p)|<(d-1)^{d}\), so we can bound uniformly \(\kappa_{n}\) by the moment-cumulant formulas.
Now we are able to prove the main theorem which we state again for convenience of the reader.
**Proposition 3.3**.: _Let \(p\) be a real polynomial with \(\kappa_{1}=0\) and \(\kappa_{2}=1\). Then, there exists \(C_{d}\) such that for all \(n>0\)_
\[L\left(D_{1/\sqrt{n}}(p^{\boxplus_{d}n}),D_{1/\sqrt{d}}(H_{d})\right)<\frac{C_ {d}}{\sqrt{n}}.\]
Proof.: Let us denote \(h=D_{1/\sqrt{d}}(H_{d})\), \(p_{n}=p^{\boxplus_{d}n}\) and \(q_{n}=D_{1/\sqrt{n}}(p_{n})\). By the coefficient-cumulant formula, we know that
\[a_{j}^{q_{n}} = \frac{(d)_{j}}{d^{j}j!}\sum_{\pi\in\mathcal{P}(j)}d^{|\pi|}\mu(0_ {j},\pi)\kappa_{\pi}(q_{n})\] \[= a_{j}^{h}+\frac{(d)_{j}}{d^{j}j!}\sum_{\pi\in\mathcal{P}(j) \backslash\mathcal{P}_{12}(j)}d^{|\pi|}\mu(0_{j},\pi)\kappa_{\pi}(q_{n}),\]
where \(\mathcal{P}_{12}(j)\) is the set of partitions \(\pi=(V_{1},\ldots,V_{r})\in\mathcal{P}(j)\) such that \(|V_{i}|\leq 2\) for all \(i\in\{1,\ldots,r\}\) (i.e., \(\pi=(V_{1},\ldots,V_{r})\in\mathcal{P}(j)\backslash\mathcal{P}_{12}(j)\), if \(|V_{i}|>2\) for some \(i\in\{1,\ldots,r\}\)).
Recall that
\[|\kappa_{s}(q_{n})|=|\kappa_{s}(D_{1/\sqrt{n}}(p_{n}))|=\frac{n}{n^{s/2}}| \kappa_{s}(p)|\leq n^{1-s/2}c,\]
for \(s=3,\ldots,d\), where \(c:=c_{d}\) from Lemma 3.2. Thus, for any \(3\leq j\leq d\) and \(\pi=(V_{1},\ldots,V_{r})\in\mathcal{P}(j)\backslash\mathcal{P}_{12}(j)\) we get
\[|\kappa_{\pi}(q_{n})|\leq c^{r}\cdot n^{r}\cdot n^{-\frac{|V_{1}|+\ldots+|V_{r }|}{2}}=c^{r}n^{r-\frac{j}{2}}\leq c^{r}n^{\frac{j}{3}-\frac{j}{2}}=c^{r}n^{- \frac{j}{6}}\leq c^{d}n^{-\frac{1}{2}}. \tag{3.1}\]
Then,
\[|a_{j}^{q_{n}}-a_{j}^{h}|\leq\frac{c^{d}K_{1}(d)}{\sqrt{n}},\qquad\forall j\in \{1,\ldots,d\}\]
where
\[K_{1}(d)=\max_{1\leq j\leq d}\frac{(d)_{j}}{d^{j}j!}\sum_{\pi\in\mathcal{P}(j) \setminus\mathcal{P}_{12}(j)}d^{|\pi|}|\mu(0_{j},\pi)|.\]
Let's denote \(z_{1},z_{2},\cdots,z_{d}\) the \(d\) distinct roots of \(h\) and \(\delta=\frac{1}{2}\min_{1\leq i<j\leq d}|z_{i}-z_{j}|\). For \(0<\varepsilon<\delta\) we define \(B_{i}=\{z\in\mathbb{C}:|z-z_{i}|\leq\varepsilon\}\) and \(\partial B_{i}=\{z\in\mathbb{C}:|z-z_{i}|=\varepsilon\}\). For a fixed root \(i\), using the previous bound we can see that for any \(z\in\partial B_{i}\) we have that
\[|q_{n}(z)-h(z)|\leq\left|\sum_{j=0}^{d}z^{d-j}(-1)^{j}(a_{j}^{q_{n}}-a_{j}^{h})\right|\leq\sum_{j=1}^{d}|z|^{d-j}|a_{j}^{q_{n}}-a_{j}^{h}|\]
\[\leq\frac{c^{d}K_{1}(d)}{\sqrt{n}}\sum_{j=1}^{d}(|z_{i}|+|\varepsilon|)^{d-j} \leq\frac{c^{d}K_{1}(d)K_{2}(d)}{\sqrt{n}}\]
where
\[K_{2}(d)=\max_{1\leq i\leq d}\sum_{j=1}^{d}(|z_{i}|+|\varepsilon|)^{d-j}.\]
On the other hand, if \(z\in\partial B_{i}\), we know that
\[|h(z)|=|(z-z_{1})\cdots(z-z_{d})|=|z-z_{1}|\cdots|z-z_{d}|\geq|z-z_{i}|\delta^{d-1}=\varepsilon\delta^{d-1}.\]
Finally, if we take
\[n>\frac{c^{2d}K(d)}{\varepsilon^{2}},\]
where \(K(d)=\frac{K_{1}^{2}(d)K_{2}^{2}(d)}{\delta^{2d-2}}\). Since \(c^{2d}K(d)\) does not depend on \(i\), we get that for any \(i=1,\ldots,d\), if \(z\in\partial B_{i}\), then
\[|q_{n}(z)-h(z)|\leq\frac{c^{d}K_{1}(d)K_{2}(d)}{\sqrt{n}}<\varepsilon\delta^{ d-1}\leq|h(z)|\leq|h(z)|+|q_{n}(z)|.\]
Thus, Rouché's theorem implies that \(q_{n}\) and \(h\) have the same number of roots (counting multiplicity) in \(B_{i}\) for \(i=1,\ldots,d\). By the definition of the \(B_{i}\) we know that they are pairwise disjoint and each one contains exactly one of the \(d\) roots of \(h\). Thus, each \(B_{i}\) contains exactly one of the \(d\) roots of \(q_{n}\), implying that the distance between the roots of \(q_{n}\) and \(h\) (and therefore the Levy distance) is less than \(\varepsilon\).
Observe that Theorem 1.1 directly gives a bound for \(T\) in the next proposition.
**Proposition 3.4** ([1]).: _Let \(p\neq x^{d}\) be a real polynomial. Then there exists \(T>0\) such that for all \(t>T\) the polynomial \(p^{\boxplus_{d}t}\) has \(d\) different real roots._
Finally, we show that one cannot do better than \(O(\sqrt{n})\) as long as \(m_{3}(p)\neq 0\).
**Proposition 3.5**.: _Let \(p\) be a real polynomial with \(\kappa_{1}=0\) and \(\kappa_{2}=1\) and \(|m_{3}|=\alpha\neq 0\). Then, for all \(n>0\)_
\[L\left(D_{1/\sqrt{n}}(p^{\boxplus_{d}n}),D_{1/\sqrt{d}}(H_{d})\right)\geq\frac{\alpha}{3d\sqrt{n}}.\]
Proof.: We use again the notation \(h=D_{1/\sqrt{d}}(H_{d})\), \(p_{n}=p^{\boxplus_{d}n}\) and \(q_{n}=D_{1/\sqrt{n}}(p_{n})\), and suppose that \(L(q_{n},h)<\frac{\alpha}{3d\sqrt{n}}\). Since \(\kappa_{1}(q_{n})=0\), from the moment cumulant formulas we have \(m_{3}(q_{n})=\frac{(d-1)(d-2)}{d^{2}}\kappa_{3}(q_{n})\) and then
\[|m_{3}(q_{n})|=\frac{(d-1)(d-2)}{d^{2}}|\kappa_{3}(q_{n})|=\frac{(d-1)(d-2)}{d^ {2}}\frac{n}{n^{3/2}}|\kappa_{3}(p)|=\frac{|m_{3}(p)|}{\sqrt{n}}=\frac{\alpha} {\sqrt{n}}.\]
Since \(m_{3}(h)=0\), we can compute
\[m_{3}(q_{n})=\frac{1}{d}\sum_{i=1}^{d}\lambda_{i}^{3}(q_{n})=\frac{1}{d}\sum_{ i=1}^{d}\lambda_{i}^{3}(q_{n})-\frac{1}{d}\sum_{i=1}^{d}\lambda_{i}^{3}(h),\]
and thus
\[|m_{3}(q_{n})| \leq \frac{1}{d}\sum_{i}|\lambda_{i}^{3}(q_{n})-\lambda_{i}^{3}(h)|\] \[= \frac{1}{d}\sum_{i=1}^{d}|\lambda_{i}(q_{n})-\lambda_{i}(h)||\lambda_{i}^{2}(q_{n})+\lambda_{i}(q_{n})\lambda_{i}(h)+\lambda_{i}^{2}(h)|\] \[< \frac{1}{d}\sum_{i=1}^{d}\left(\frac{\alpha}{3d\sqrt{n}}\right)(d+d+d)=\frac{\alpha}{\sqrt{n}}.\]
In the last inequality we used the assumption that \(L(q_{n},h)<\frac{\alpha}{3d\sqrt{n}}\). This is a contradiction since the inequality is strict.
**Remark 3.6**.: A specific example with \(\kappa_{3}\neq 0\) is the finite free Poisson distribution which has cumulants \(\kappa_{n}=\alpha\) for all \(n\). If \(\alpha d\) is a positive integer we obtain a valid polynomial. This is a modification of a Laguerre polynomial, thus we obtain a precise estimate for the distance between the roots of certain Laguerre polynomials and the Hermite polynomials.
**Remark 3.7**.: A closer look at (3.1) shows that if \(m_{3}(p)=0\) then the convergence rate is of order \(1/n.\) Indeed, \(m_{3}(p)=0\) implies \(\kappa_{3}(q_{n})=\kappa_{3}(p)=0\). So in (3.1) we only need to consider partitions with \(|V_{i}|\geq 4\). In this case, for any \(4\leq j\leq d\) we have
\[|\kappa_{\pi}(q_{n})|\leq c^{r}n^{r-\frac{j}{2}}\leq c^{r}n^{\frac{j}{4}-\frac{j}{2}}=c^{r}n^{-\frac{j}{4}}\leq c^{d}n^{-1}.\]
Finally, let us mention that while it is very tempting to let \(d\) go to infinity, possibly together with \(n\), to obtain a Berry-Esseen bound in free probability, there are two problems. First, the quantity \(C_{d}\), as we obtained it, grows rapidly as \(d\to\infty\), and second, there is, for the moment, no good bound between finite free and free convolutions.
|
2302.05487 | Cryogenic nano-imaging of second-order moiré superlattices | Second-order superlattices form when moir\'e superlattices of similar
periodicities interfere with each other, leading to even larger superlattice
periodicities. These crystalline structures have been engineered utilizing
two-dimensional (2D) materials such as graphene and hexagonal boron nitride
(hBN) under specific alignment conditions. Such specific alignment has shown to
play a crucial role in facilitating correlation-driven topological phases
featuring the quantized anomalous Hall effect. While signatures of second-order
superlattices have been identified in magnetotransport experiments, any
real-space visualization is lacking to date. In this work, we present
electronic transport measurements and cryogenic nanoscale photovoltage (PV)
measurements that reveal a second-order superlattice in magic-angle twisted
bilayer graphene closely aligned to hBN. This is evidenced by long-range
periodic photovoltage modulations across the entire sample backed by the
corresponding electronic transport features. Supported by theoretical
calculations, our experimental data show that even minuscule strain and
twist-angle variations on the order of 0.01$^\circ$ can lead to a drastic
change of the second-order superlattice structure between local
one-dimensional, square or triangular types. Our real-space observations
therefore serve as a strong `magnifying glass' for strain and twist angle and
can shed new light on the mechanisms responsible for the breaking of spatial
symmetries in twisted bilayer graphene, and pave an avenue to engineer
long-range superlattice structures in 2D materials using strain fields. | Niels C. H. Hesp, Sergi Batlle-Porro, Roshan Krishna Kumar, Hitesh Agarwal, David Barcons-Ruiz, Hanan Herzig Sheinfux, Kenji Watanabe, Takashi Taniguchi, Petr Stepanov, Frank H. L. Koppens | 2023-02-10T19:48:03Z | http://arxiv.org/abs/2302.05487v2 | # Cryogenic nano-imaging of second-order moire superlattices
###### Abstract
Second-order superlattices form when moire superlattices of similar dimensions interfere with each other, leading to even larger superlattice periodicities. These crystalline structures have been engineered utilizing two-dimensional (2D) materials such as graphene and hexagonal boron nitride (hBN) under specific alignment conditions. Such specific alignment has shown to play a crucial role in facilitating correlation-driven topological phases featuring the quantized anomalous Hall effect. While signatures of second-order superlattices have been found in transport experiments, any real-space visualization is lacking to date. In this work, we present cryogenic nanoscale photovoltage (PV) measurements that reveal a second-order superlattice in magic-angle twisted bilayer graphene (MATBG) closely aligned to hBN. This is evidenced by long-range periodic photovoltage modulations across the entire sample backed by corresponding electronic transport features. Our theoretical framework shows that small strain- or twist-angle variations can lead to a drastic shift between a local one-dimensional, square or triangular superlattices. Our real-space observations shed new light on the mechanisms responsible for breaking spatial symmetries in TBG and pave an avenue to engineer long-range superlattice structures in 2D materials.
## I Introduction
The recently observed collection of strongly correlated phases in magic-angle twisted bilayer graphene (MATBG) has sparked a wave of experimental and theoretical discoveries [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. In these 2D heterostructures, the proximity effects of encapsulating layers to the MATBG plane can be precisely controlled and provides an additional tuning knob [23; 24]. By virtue of such proximity response, alignment of MATBG to adjacent layers of insulating hBN has demonstrated a potential to realize exotic quantum phases, e.g. via an engineered inversion symmetry breaking [25; 26]. Scanning tunneling spectroscopy experiments have demonstrated local imaging of such aligned heterostructures, yet only providing visualization and insights within relatively small areas [27]. The real-space distribution of the second-order superlattice potential (SOSL) on the mesoscale has not yet been reported, thus leaving many open questions about precursors for exotic quantum phases in graphene-based moire heterostructures, and the role of structural characteristics that stabilise these phases [28]. Furthermore, experimental investigations into the implications of superlattice strain have been limited, despite their expected influence on the superlattice potential, possibly resulting in a reconfiguration of the phase diagram [28] and the emergence of novel phases, such as stripe-like orders [29].
In this work, we perform cryogenic near-field optoelectronic experiments on a MATBG aligned to hBN. With this technique, we are capable of probing the photovoltage response at length scales far below the wavelength (here 10.7 \(\mu\)m), where we are only limited by the tip radius (\(\approx 20\) nm) and any spreading induced by the photoresponse mechanism. We observe two sets of fringes rotated with respect to each other by \(\sim\)50\({}^{\circ}\), which we interpret as a manifestation of large-scale local potential variations that originate from the second-order superlattice (SOSL). We complement our experimental findings with a theoretical model that visualizes the SOSL in real space as the interference between the underlying first-order superlattice potentials associated with the twisted bilayer graphene and graphene/hBN superlattices. Our model stresses the high sensitivity of the resulting SOSL to local strain and twist angle variations.
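As a rough orientation for the length scales involved, the textbook moiré-period formulas already show how two first-order superlattices of similar size can beat into a much longer one. The numbers below (twist angles, the roughly 1.7% graphene/hBN lattice mismatch, and the simple one-dimensional beat estimate) are illustrative inputs, not parameters extracted from the device studied here.

```python
import numpy as np

A_GRAPHENE = 0.246   # graphene lattice constant (nm)
DELTA = 0.017        # approximate graphene/hBN lattice mismatch

def moire_period_tbg(theta_deg, a=A_GRAPHENE):
    """First-order moire period of twisted bilayer graphene."""
    theta = np.radians(theta_deg)
    return a / (2.0 * np.sin(theta / 2.0))

def moire_period_graphene_hbn(theta_deg, a=A_GRAPHENE, delta=DELTA):
    """First-order graphene/hBN moire period including the lattice mismatch."""
    theta = np.radians(theta_deg)
    return (1.0 + delta) * a / np.sqrt(delta ** 2 + 2.0 * (1.0 + delta) * (1.0 - np.cos(theta)))

lam_tbg = moire_period_tbg(1.08)              # ~13 nm near the magic angle
lam_hbn = moire_period_graphene_hbn(0.6)      # comparable graphene/hBN period
beat = lam_tbg * lam_hbn / abs(lam_tbg - lam_hbn)   # crude 1D beat-length estimate
print(lam_tbg, lam_hbn, beat)
```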
Our experimental workflow is schematically pictured in Fig. 1a. We fabricated a heterostructure consisting of hBN-encapsulated MATBG where the top encapsulating layer is closely aligned to the upper graphene layer as we verified with AFM measurements (Supplementary Note I). The heterostructure is contacted to a set of Cr/Au metallic leads (see Methods and Fig. 1c). In order to visualize the SOSL potential, defined here as the electronic potential landscape governed by the atomic lattice structure, we employ a scattering-type scanning near-field imaging microscope operating at a temperature \(T\) down to 10 K. By focusing infrared light (excitation energy \(E=116\) meV, corresponding to wavelength \(\lambda=10.7\)\(\mu\)m) to the apex of a sharp metal-coated AFM tip (radius of \(\approx 20\) nm), a hot-spot of light is generated that induces local photoexcitation of the charge carriers at the nanoscale. This leads to a local photovoltage generation that is probed by the global contacts, facilitated by the Shockley-Ramo mechanism [30] (see Fig. 1b,c). Details of the cryogenic near-field photovoltage measurements can be found in the Methods.
**Photovoltage nanoscopy on a second-order superlattice** The main experimental observation is shown in Fig. 1b, featuring a photovoltage map of our MATBG sample at \(T=10\) K. Surprisingly, we observe photovoltage fringes that span across the entire bulk of the sample. In particular, in the top-left corner we observe a clear superposition of two sets of almost vertical and almost horizontal fringes (highlighted by the black circle).
To further unravel the nature of the observed photovoltage periodicities, we map the photovoltage response for different gate voltages (Fig. 2a-d). We observe a very similar photovoltage pattern at charge neutrality (Fig. 2a) and with the Fermi level inside the moire flat bands (Fig. 2b and c), as well as at a partial filling outside the flat band (Fig. 2d). In particular, the observed periodicity does not change with gate voltage. We also find that the photovoltage response is enhanced by an order of magnitude for fillings outside the flat band, although the features appear less pronounced (Fig. 2d). To emphasize that the underlying structure remains unchanged with filling factor, we show in Fig. 2e the spatial derivatives of the photovoltage response and find a clear resemblance between the features observed for Fermi levels inside and outside the flat bands. Consequently, the observed features cannot be linked to electronic interactions, since the electron dynamics changes drastically between charge densities inside and outside the flat bands; instead, they must originate from a modulating potential background.
The presence of a modulating potential background is further corroborated by photovoltage measurements as a function of temperature. Fig. 2f depicts normalized photovoltage line cuts and shows a response that is unchanged with temperature apart from its magnitude (Supplementary Note II shows the full spatial photovoltage maps at different temperatures). This observation also indicates that electronic interactions play no significant role in creating the periodic photovoltage response. From these line traces we extract a periodicity of the SOSL potential of \(\sim 500\) nm, which is within the resolution of our experiment (see Supplementary Note III).
In order to confirm the presence of a SOSL and further characterize its properties, we explore its low-temperature electronic transport properties shown in Fig. 3a (see Methods for details). A set of transport curves \(R_{xx}\) vs. band filling \(\nu\) taken at temperatures between 5 K and 80 K clearly indicates the presence of flat bands in our twisted bilayer graphene device. We identify characteristic local resistance maxima at integer fillings close to \(\nu=+1\), \(+2\), and \(+3\) (marked by black lines), confirming the magic-angle nature of our device. In addition, we observe two pronounced band-insulator resistance peaks, characteristic of fully filled or fully emptied electronic flat bands, at \(\nu=\pm 4\). Interestingly, instead of observing a single resistance peak, we observe split resistance maxima at \(\nu=+4\) and a shoulder at \(\nu=-4\) that appear nearly symmetric around the charge-neutrality point (CNP). Here we argue that this electronic transport signature originates from a close alignment between hBN and MATBG, which we further confirm by inspecting the AFM topography (Supplementary Note I) and by extracting a large thermally-activated gap at the CNP of \(\Delta=3.6\) meV (Supplementary Note IV). The latter signals the gap opening at the Dirac cones due to the graphene/hBN alignment [31]. Overall, we find that the lower-density peak originates from a single-particle gap in MATBG (marked by the black arrow), while the higher-density peak is a result of the alignment between MATBG and the hBN plane (marked by the red arrow) [32; 33].
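The thermally-activated gap quoted above can be extracted with a standard Arrhenius analysis of \(R_{xx}(T)\); the short sketch below assumes the usual activated form \(R_{xx}\propto\exp[\Delta/(2k_{\mathrm{B}}T)]\) and uses placeholder numbers rather than the measured data.

```python
import numpy as np

K_B = 8.617e-2  # Boltzmann constant in meV/K

def activation_gap(temps_K, r_xx_ohm):
    """Estimate a thermally activated gap from R_xx(T) at fixed filling,
    assuming the standard form R_xx ~ exp(Delta / (2 k_B T))."""
    x = 1.0 / np.asarray(temps_K, dtype=float)
    y = np.log(np.asarray(r_xx_ohm, dtype=float))
    slope, _ = np.polyfit(x, y, 1)   # slope = Delta / (2 k_B), in kelvin
    return 2 * K_B * slope           # gap in meV

# illustrative numbers only, not the measured dataset
print(activation_gap([20, 30, 40, 60, 80], [3.2e4, 1.1e4, 6.0e3, 2.6e3, 1.7e3]))
```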
After carefully extracting the charge carrier density using the Hall effect, we define the twist angles \(\theta_{\text{hBN}}\) (between hBN and bottom graphene) and \(\theta_{\text{TBG}}\) (between the two graphene layers) based on the resistance peak positions (see Methods). Fig. 3b relates the charge carrier density expected at a fully filled moire band in the
Figure 1: **Nanoscale photovoltage measurements of MATBG/hBN SOSLs at \(T\)=10 K.****a**, Schematic illustration of the experimental design. A metal-coated AFM tip is positioned in the focus of an infrared laser beam, creating a hot-spot underneath the tip, which in turn locally heats the charge carriers in MATBG. Any long-range modulation of the electronic properties can induce a global photovoltage that we read out with the device contacts. The long-range periodicity reflects a SOSL formed by the interference of the underlying short-range superlattices. **b**, An example of a photovoltage measurement on our device in the black dashed area in **c**, revealing two sets of fringes that we associate with the SOSL potential. Excitation energy is 116 meV and the scale bar is 1 \(\mu\)m. **c**, An optical micrograph of the studied sample. The red lines indicate the probes used for the photovoltage measurements in **b**. Scale bar is 4 \(\mu\)m.
case of twisted graphene layers (solid lines, lattice mismatch \(\delta=0\%\)) and in the case of graphene/hBN (dashed lines, lattice mismatch \(\delta=1.64\%\))\({}^{2}\). Here, the black markers correspond to charge carrier densities extracted from Fig. 3a, from which we determine \(\theta_{\text{TBG}}=1.03^{\circ}\) and \(\theta_{\text{hBN}}=0.51^{\circ}\). We note that our device shows only small twist-angle variations of \(\pm 0.02^{\circ}\) (Supplementary Note IV), in line with the presence of a SOSL across the entire sample despite its high twist-angle sensitivity.
Footnote 2: The uncertainties in \(\theta_{\text{TBG}}\) and \(\theta_{\text{hBN}}\) are \(\pm 0.02^{\circ}\) and \(\pm 0.02^{\circ}\), respectively.
Electronic transport measurements strongly suggest that the sample hosts two coexisting superlattices: one formed by a graphene layer closely aligned to hBN and the other by the TBG, with periodicities of about 13.4 and 13.7 nm, respectively (deduced as in Fig. 3b). These two superlattices give rise to a SOSL with an even larger periodicity [34, 35, 36]. Figure 1a shows an example of such a second-order superlattice, mimicking the configuration in our sample. An hBN flake (green) has a small lattice mismatch and misalignment with respect to a graphene sheet (red), while a second graphene sheet (blue) is twisted only slightly.
Notably, our photovoltage maps provide a real-space image of the broken inversion symmetry in the MATBG superlattice. MATBG becomes asymmetric with respect to inversion when closely aligned to the adjacent hBN layer (Fig. 2g). Our observation of complex real-space patterns of the second-order superlattice constitutes the first time inversion symmetry breaking is imaged over the entire bulk of the studied device. Yet, we do not observe any signatures of the anomalous Hall effect (AHE) in electronic transport measurements. This is consistent with recent theoretical predictions and local magnetometry experiments, according to which only very specific twist angles between TBG and hBN/graphene satisfy the commensurate condition for an AHE state [37, 38].
**Calculation of spatial potential profile** To model the spatial potential profile of the second-order superlattice, we examine its formation in reciprocal space, considering both unstrained and strained heterostructures. First, we start with the case without applied strain. Figure 4a shows the reciprocal lattice vectors \(\vec{k}\) of the unstrained hBN and graphene lattices, from which we obtain the difference vectors (blue) that represent their SOSL. These vectors feature a strongly reduced length, and by taking the inverse of their moduli (and accounting for a factor \(2\pi\)), we find the periodicity \(\tilde{\lambda}_{\text{M}}\) of the SOSL (also defined analytically in the Methods). Figure 4c depicts the SOSL periodicity for a range of \(\theta_{\text{hBN}}\) and \(\theta_{\text{TBG}}\). We note that \(\tilde{\lambda}_{\text{M}}\) is equal along the three principal directions (representing a triangular lattice). We find a limited window of twist angles where a SOSL can be detected in our experiment, that is, where \(\tilde{\lambda}_{\text{M}}\gg\lambda_{\text{M}}\). This is understood as follows: in reciprocal space the superlattice vectors need to line up very closely with each other, while they also depend sensitively on the hBN and graphene lattice mismatch \(\delta\), \(\theta_{\text{hBN}}\) and \(\theta_{\text{TBG}}\). For \(\delta=1.64\%\) and the twist angles extracted from the transport measurements, we find \(\tilde{\lambda}_{\text{M}}\approx 390\) nm, which is slightly lower than the periodicity extracted from the near-field photovoltage maps. Note that it is a fortunate coincidence that the hBN-graphene lattice mismatch happens to be such that SOSLs appear for twist angles near the magic angle.
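To make this construction concrete, the following minimal sketch (not the code used for Fig. 4; the bottom graphene layer is taken as a common reference for both first-order superlattices, and the \(2\pi/|q|\) conversion follows the simplified prescription above) estimates the SOSL periodicity from the reciprocal-space difference vectors.

```python
import numpy as np

A_GR = 0.246     # graphene lattice constant (nm)
DELTA = 0.0164   # graphene/hBN lattice mismatch used in the text

def rot(theta_deg):
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def reciprocal_vector(a, theta_deg):
    # shortest reciprocal-lattice vector of a triangular lattice with constant a,
    # rotated by theta_deg with respect to the common reference frame
    return rot(theta_deg) @ np.array([4 * np.pi / (np.sqrt(3) * a), 0.0])

def sosl_period(theta_tbg, theta_hbn):
    # first-order moire reciprocal vectors (red vectors in Fig. 4a)
    g_tbg = reciprocal_vector(A_GR, 0.0) - reciprocal_vector(A_GR, theta_tbg)
    g_hbn = reciprocal_vector(A_GR, 0.0) - reciprocal_vector(A_GR * (1 + DELTA), theta_hbn)
    # second-order vectors: differences of the two moire lattices, scanning their
    # 60-degree rotated partners to find the smallest non-zero vector
    best = 0.0
    for i in range(6):
        for j in range(6):
            q = np.linalg.norm(rot(60 * i) @ g_tbg - rot(60 * j) @ g_hbn)
            if q > 1e-6:
                best = max(best, 2 * np.pi / q)
    return best

if __name__ == "__main__":
    # twist angles extracted from transport: theta_TBG = 1.03 deg, theta_hBN = 0.51 deg
    print(f"estimated SOSL period ~ {sosl_period(1.03, 0.51):.0f} nm")
```

This sketch is meant to illustrate the strong sensitivity of the resulting period to \(\theta_{\text{TBG}}\), \(\theta_{\text{hBN}}\) and \(\delta\), rather than to reproduce the quoted 390 nm exactly.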
We now move to the case where lattice strains become important actors in overall SOSL formation. Our minimalistic model of a triangular lattice lacks an explanation
Figure 2: **Gate and temperature response of the observed photovoltage features and broken inversion symmetry.****a**, Local photovoltage map of the entire device taken at \(\nu=0.6\) and \(T=10\) K. The white dashed lines indicate the two dominant directions of the observed fringes. Scale bar is 1 \(\mu\)m and the locations of the voltage probes are highlighted in yellow. **b-d**, Photovoltage maps taken at different filling factors in the area highlighted by the white dashed-line box in **a**. We observe no significant change in the overall fringe layout, even with the Fermi level in the remote bands (panel **d**). This excludes interactions as being responsible for these periodic structures. **e**, Derivatives \(\text{d}V_{\text{PV}}/\text{d}x\) taken for the left parts of the maps in **b** and **d** confirm the insensitivity of the underlying structure to the electronic state of TBG. **f**, Photovoltage linecuts taken along the white arrow in **a** for three different temperatures. Positions of the common photovoltage peaks are highlighted by the black arrows and remain the same over the presented range of temperatures. **g**, Visualization of the broken inversion symmetry in a TBG structure. The TBG lattice structure interferes with a triangular background potential due to the hBN/graphene superlattice. Altogether, inverting the structure around the central point would yield a different structure, and hence the inversion symmetry is broken.
for the observation of periodic fringes in only two directions, which in turn are separated by an unexpected angle of \(\sim 50^{\circ}\) (dashed lines in Fig. 2a). To account for that, we introduce a small amount of uniaxial heterostrain \(\epsilon\) to the graphene layers, which is commonly present in these heterostructures [39, 40, 41, 42, 9]. We apply a strain tensor \(\overline{\epsilon}\) to the graphene lattice vectors [43, 44], whose reciprocal counterparts transform as \(\vec{k}\to(\overline{I}+\overline{\epsilon})^{-1}\vec{k}\), where
\[\overline{\epsilon}=\epsilon\begin{pmatrix}\cos^{2}\alpha-\rho\sin^{2}\alpha&(1 +\rho)\cos\alpha\sin\alpha\\ (1+\rho)\cos\alpha\sin\alpha&\sin^{2}\alpha-\rho\cos^{2}\alpha\end{pmatrix}, \tag{1}\]
where \(\alpha\) is the angle of the principal direction of applied uniaxial strain with respect to the zigzag direction, and \(\rho=0.165\) is the Poisson ratio of graphene [45]. \(\overline{I}\) is the identity matrix. As illustrated in Fig. 4b, strain has a large impact on the relative magnitude and direction of the second-order superlattice vectors. This leads to a deformation of the lattice, such that superlattices act as 'magnifying glasses' of strain [46, 47]. We repeat the calculation of \(\tilde{\lambda}_{\text{M}}\) with 0.1% strain [39, 40, 41, 42] applied to both graphene layers along the zigzag direction (Supplementary Note V discusses details of these calculations in the presence of strain). The blue vectors in Fig. 4b become modified and now have three different moduli, which would naively be represented by three separate red blobs in the \(\theta_{\text{TBG}}\)-\(\theta_{\text{hBN}}\) space.
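A minimal numerical sketch of this step is given below; the parameter values are those quoted above, and the mapping of reciprocal vectors, \(\vec{k}\to(\overline{I}+\overline{\epsilon})^{-T}\vec{k}\), is our assumption for how they follow the strained real-space lattice \(\vec{a}\to(\overline{I}+\overline{\epsilon})\vec{a}\).

```python
import numpy as np

RHO = 0.165  # Poisson ratio of graphene quoted above

def strain_tensor(eps, alpha_deg):
    # uniaxial heterostrain tensor of Eq. (1); alpha is measured from the zigzag direction
    a = np.deg2rad(alpha_deg)
    c, s = np.cos(a), np.sin(a)
    return eps * np.array([[c**2 - RHO * s**2, (1 + RHO) * c * s],
                           [(1 + RHO) * c * s, s**2 - RHO * c**2]])

def strained_reciprocal(k, eps_mat):
    # assumption: real-space vectors map as a -> (I + eps) a, so reciprocal
    # vectors map as k -> (I + eps)^{-T} k, preserving a . k = 2 pi n
    return np.linalg.solve((np.eye(2) + eps_mat).T, k)

# 0.1 % strain along the zigzag direction, applied to one graphene reciprocal vector
eps_mat = strain_tensor(0.001, 0.0)
k0 = np.array([4 * np.pi / (np.sqrt(3) * 0.246), 0.0])   # nm^-1
print(np.linalg.norm(k0), np.linalg.norm(strained_reciprocal(k0, eps_mat)))
```

Because the first-order and second-order superlattice vectors are small differences of such large vectors, even this 0.1% modification is strongly amplified in the resulting SOSL geometry.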
Figure 4d displays the maximum periodicity among the three principal lattice vectors and reveals an intricate picture of strongly varying periodicities for different twist angles. Notably, this distribution is much more complex than expected from the simple picture introduced in Fig. 4a. Here, we take into account that a simple vector summation may end up with a set of lattice vectors represented by obtuse triangles. Since that would lead to an overestimation of \(\tilde{\lambda}_{\text{M}}\), we correct for it by remapping the lattice vectors until we reach an acute triangle, allowing for a fair comparison of \(\tilde{\lambda}_{\text{M}}\) with \(\lambda_{\text{M}}\). This leads to an SOSL unit-cell size that is represented by a fractal-like ring of non-vanishing wavelengths (Fig. 4d).
To obtain a better understanding of this result, we focus our attention on three particular sets of parameters (\(\theta_{\text{hBN}}\), \(\theta_{\text{TBG}}\), \(\epsilon\)) and visualize the real-space potential of the resulting second-order superlattice. As demonstrated in Fig. 4e, tiny variations in the twist angle and strain have a large impact on the type of second-order superlattice. For instance, by applying 0.1% of strain, the geometry may change from a triangular to a 1D lattice. Likewise, a square lattice can be formed by changing both twist angles only slightly. We calculate the real-space potential for a wider range of both twist angles, as presented in the bird's-eye view in Figure 5. We emphasize that smaller periodicities must emerge in other, non-dominant directions as well. However, these cannot be measured with our near-field probe, owing to its limited spatial resolution and to the spatial selectivity of the global electrical photovoltage probes. We can identify the following features: at three resonant points, a 1D lattice emerges with a periodicity even longer than the lateral dimensions of our sample, and hence it is not detectable. Surrounding these points, 1D lattice structures are present with periodicities on the order of several hundreds of nanometers. Further away from these resonant points, the periodicity tends to fall below the experimental resolution and thus could not be observed. On the other hand, a triangular lattice forms right in the middle of these resonant points, which in turn converts via a square lattice towards a 1D lattice near those resonant points. This clearly illustrates the wide variety of lattices that can be formed in the limited parameter space of our experiment: (\(\theta_{\text{hBN}}\), \(\theta_{\text{TBG}}\), \(\epsilon\)).
**PV generation mechanism** How do we interpret the observed bulk photoresponse that does not decay away from the contacts? Here, we assume that the periodicities (i.e. the spatial frequencies) embedded in the combined hBN/MATBG lattice structure are preserved in the lattice potential, and subsequently in the electronic properties that could induce a photovoltage. As we established above, there must be at least one non-linearity in the optoelectronic response pathway (Supplementary Note VI).
In many cases, the optoelectronic response in graphene-based devices is primarily facilitated by the photothermoelectric effect (PTE), which drives a photovoltage that is
Figure 3: **Electronic transport measurements.****a**, Longitudinal resistivity measurements as a function of MATBG filling factor at different temperatures ranging from 5 to 80 K. Solid black lines indicate positions of the integer fillings \(\nu=1\), 2, 3. Black and red arrows correspond to the resistance peaks originating from the fully filled (\(\nu=+4\)) and fully emptied (\(\nu=-4\)) MATBG moire superlattice. **b**, Theoretical calculations of the superlattice periodicity (red) and corresponding fully filled moiré charge carrier density (blue) as a function of the twist angle \(\theta_{\text{TBG}}\) (solid lines) and \(\theta_{\text{hBN}}\) (dashed lines). Here we assume a lattice constant mismatch between graphene and hBN of \(\delta=1.64\%\). The black dots correspond to the peaks in **a** marked by the arrows, from which we deduce twist angles \(\theta_{\text{TBG}}=1.03^{\circ}\) and \(\theta_{\text{hBN}}=0.51^{\circ}\).
proportional to the spatial gradients in the Seebeck coefficient \(S\) [30, 48, 49, 50, 51]. Also in our case, the PTE could play a role in the photoresponse pathway, since the Seebeck coefficient \(S\) is a function of the electrical conductivity, which in turn is governed by the electronic properties of the atomic lattice. Hence, this pathway provides a projection of the underlying lattice structure onto the photovoltage maps. In this scenario, we can explain the strong enhancement of the photovoltage response for higher filling factors (as seen in Fig. 2d) by an increased cooling length due to the enhanced charge carrier mobility in the remote bands, as well as by the absorption efficiency and the strength of the photovoltage generation mechanism [52]. In the vicinity of more mobile charge carriers (i.e., outside the flat band regime), the photovoltage electrodes become less sensitive to spatial Seebeck coefficient variations, while the reduced charge carrier mobility inside the flat band boosts our probes' spatial sensitivity.
Another possible PV generation mechanism is the second-order photoresponse, which has been reported to be more significant in flat-band systems [53, 54, 55, 56] and which is enabled when inversion symmetry is broken via hBN alignment, as in our system. Additionally, strain can lead to a non-zero bulk photoresponse for unpolarized light. Specifically, infrared inter-band transitions in twisted bilayer graphene were found to produce a dominant second-order photocurrent response (linear shift current). Generally, it is likely that bulk photovoltaic effects take precedence at low temperatures, and they have been shown to increase in magnitude for fillings above \(|\nu|\)=4. However, we emphasize that a comprehensive understanding of the photovoltage generation mechanism is not strictly required to explain the dataset shown in the current study; it will remain the focus of future work.
**Outlook** To conclude, we have realized and observed a second-order superlattice formed by the alignment of MATBG to one of the adjacent hBN substrates. These observations, corroborated by our theoretical model, demonstrate a wide tuneability of the SOSL structures. In combination with controlled ways [57, 58] to tune \(\theta_{\text{hBN}}\), \(\theta_{\text{TBG}}\) or strain \(\epsilon\), this will open a pathway to explore the optoelectronic properties of a variety of different SOSL structures, including triangular, square and 1D lattices. In particular, the latter holds promise for exploring Luttinger-liquid states in tuneable, crystalline-quality 1D channels [59]. The alignment of MATBG and hBN has been found to promote the emergence of unique quantum states displaying the quantum anomalous Hall effect [60]. Our study sheds new light on the mesoscale precursors of such alignment. Some open questions, however, remain. For example, what exact mechanism drives the
Figure 4: **Calculation of second order superlattice properties.****a**, Reciprocal lattice vectors of unstrained and closely aligned hBN (green) and graphene layers (black). The corresponding dashed vectors account for lattice periodicities in other directions (at multiples of 60\({}^{\circ}\)). The red difference vectors denote the hBN-Gr and TBG superlattices, and their equivalents at multiples of 60\({}^{\circ}\) (dashed red vectors). The resultant SOSL reciprocal vectors (blue) appear from the summation of the red vectors. **b**, Same reciprocal vectors with uniaxial strain applied to the graphene lattices. **c**, SOSL periodicity as a function of \(\theta_{\text{TBG}}\) and \(\theta_{\text{hBN}}\) of the unstrained SOSL lattice (corresponding to **a**). **d**, Maximum SOSL periodicity (along one of its principal axes) of the deformed lattice when strain is applied to the graphene layers (corresponding to **b**). The star indicates a data point obtained from the electronic transport measurements. **e**, Three examples of different SOSL geometries: a triangular lattice, a square lattice, and a 1D lattice. The color of the frames corresponds to the color of the dots in **d**. The color bar represents the calculated potential, and we note that the underlying first-order superlattices are visible. The SOSL experimentally revealed in this work is highlighted by the green frame. For clarity we smoothed the data with a Gaussian filter of width \(\sigma=3.5\) nm (\(\sigma\) is the standard deviation of the Gaussian kernel). Scale bar is 250 nm.
photovoltage in the SOSLs, and what is the origin of the required non-linear effect that plays a significant role in the photovoltage generation? Further experiments are needed to reveal the mesoscopic picture of inversion symmetry breaking in moire materials due to alignment with hBN, as well as the role of strain profiles.
Figure 5: **Real-space maps of the second-order superlattice in (\(\theta_{\text{TBG}}\)-\(\theta_{\text{hBN}}\))-space.** For each square, the grey-scaled colour map represents a calculation of the total superlattice potential across an area of 1740\(\times\)1740 nm\({}^{2}\) with \(\theta_{\text{TBG}}\) and \(\theta_{\text{hBN}}\) given by the center of that square. The applied strain is the same as in other figures (0.1% along the zigzag direction). To simulate the effect of the finite tip radius and cooling length, we smoothed the data with a Gaussian filter of width \(\sigma=180\) nm (\(\sigma\) is the standard deviation of the Gaussian kernel). The coloured background is the same calculation of the maximum periodicity along one of the SOSL lattice directions as presented in Fig. 4**d**.
## Methods
### Device fabrication
The device consists of TBG encapsulated in 16 nm bottom hBN and 10 nm top hBN flakes, altogether placed on top of a graphite flake, serving as a local gate. During the stacking process, the graphene flake is cut with an AFM tip, with the intention of preventing additional strain from building up in the tear-and-stack process used otherwise [23, 61, 62]. To minimize the number of bubbles in the stack, we pick up each flake at a temperature in the range of \(100-110\)\({}^{\circ}\)C [63, 64]. In the final step, when dropping the stack on the target substrate with alignment markers, we repeat the drop-down step at least once to further squeeze out air bubbles. Figure S1 shows an AFM scan of the resulting stack. We choose the cleanest area of the stack to pattern our device in a Hall-bar shape (Fig. 1c).
### Cryogenic near-field photovoltage measurement details
We used a cryogenic scattering-type scanning near-field microscope (cryoSNOM) developed by Neaspec/Attocube to carry out the near-field photovoltage experiments at temperatures between \(10-300\) K. A tuneable quantum cascade laser (Daylight Solutions) acts as an infrared light source, and the data shown in this work were acquired at an excitation energy of \(116\) meV (\(10.6\)\(\mu\)m). We focus approximately \(10\) mW of this light on a PtIr-coated AFM tip (Nanoworld, \(50\) nm coating), which oscillates above the sample surface at \(\approx 250\) kHz with a tapping amplitude of \(\approx 100\) nm. The AFM feedback loop incorporates a system developed by Neaspec/Attocube by which we can lower the quality factor of the AFM cantilever resonance to values similar to those for ambient operation at room temperature. This speeds up the decay of the cantilever motion and therefore relaxes the limit on the scanning speed. Finally, to reduce coupling of strong floor vibrations with our microscope, we set up a home-built active damping system that cancels these vibrations and stabilizes the optical table.
For simultaneous measurement of the photovoltage between two pairs of contacts, we used two differential voltage amplifiers (Ithaco 1201) with a different contact providing the ground. The carrier doping in our samples is tuned by applying a DC voltage between the graphite gate and our device, while keeping the Si backgate grounded. To avoid detecting unwanted far-field contributions to the photovoltage signal, we detect the near-field signals at the second harmonic of the cantilever oscillation.
We follow Ref. [51] in the scheme for analysing the photovoltage maps. Here, the measured photovoltage signal is demodulated with the driving signal of the AFM cantilever as a reference signal. However, the actual motion of the AFM cantilever can have a phase offset that varies with the position on the sample (due to tip-sample interaction). This phase offset is given at each pixel by the measured phase delay between the tip driving signal and the actually detected motion. Therefore, we correct our photovoltage signal measured at harmonic \(i\) by subtracting at every point \(i\) times this phase delay. In addition to this, there remains a global phase offset in the corrected photovoltage signal due to the electronics in the circuit. Since the photovoltage signal is a real-valued quantity, we subtract this global phase offset, which we determine by taking the most frequent phase within a scan.
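The scheme can be summarized by the following sketch (array and function names are illustrative, not the actual analysis code):

```python
import numpy as np

def correct_photovoltage(pv_complex, phase_delay, harmonic=2):
    """Minimal sketch of the phase-correction scheme described above.

    pv_complex  : complex-valued demodulated photovoltage map (2D array)
    phase_delay : per-pixel phase delay between the tip drive and the detected
                  cantilever motion (radians, same shape as pv_complex)
    harmonic    : demodulation harmonic i; the local correction is i times the
                  measured phase delay
    """
    # remove the position-dependent phase offset caused by tip-sample interaction
    pv = pv_complex * np.exp(-1j * harmonic * phase_delay)
    # remove the remaining global phase offset set by the readout electronics,
    # estimated here as the most frequent phase within the scan (histogram mode)
    phases = np.angle(pv).ravel()
    counts, edges = np.histogram(phases, bins=180)
    global_phase = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return np.real(pv * np.exp(-1j * global_phase))
```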
### Transport measurement details
The main set of four-terminal transport data shown in Fig. 3a was taken in an Advanced Research System cryostat with a base temperature of \(5\) K and a magnetic field up to \(1\) T. In these electronic transport measurements we followed a conventional lock-in measurement scheme. A low-frequency AC current (\(17.111\) Hz) of \(10\) nA flows between the bottom-right and top contacts, while the voltage drop between the two left-middle contacts is measured using a Stanford SR860 lock-in amplifier. The gate voltage was sourced using a Keithley \(2400\) Source Meter Unit.
### Analysis of transport data
From a Hall measurement in a magnetic field of \(1\) T we determine the carrier density \(n\) (in \(cm^{-2}\)) as a function of the applied gate voltage \(V_{\mathrm{G}}\). A linear fit near the charge-neutrality point (\(|V_{\mathrm{G}}|<1\) V) yields \(n(V_{\mathrm{G}})=1.24\cdot V_{\mathrm{G}}+0.27\) in units of \(10^{12}\)\(cm^{-2}\). For the subsequent analysis we allow for a small shift in the charge-neutrality voltage (for instance induced by photodoping [65]) by replacing \(V_{\mathrm{G}}\) with \(V_{\mathrm{G}}-V_{\mathrm{shift}}\). From the position of the resistance peak at charge neutrality in Fig. 3a we find \(V_{\mathrm{shift}}=-45\) mV. We define the full-filling carrier density \(n_{\mathrm{s}}\) as half the distance between the two resistance peaks marked by the black arrows in Fig. 3a. In doing so, we notice that these two peaks are slightly off-centered with respect to charge neutrality by \(60\) mV. The resistance peak of the hBN-Gr superlattice is less developed for negative carrier densities. Therefore we extract the \(n_{\mathrm{s}}\) corresponding to the hBN-Gr lattice from the relative scaling of the Gr-Gr and hBN-Gr superlattice peaks at positive carrier densities (marked in black and red, respectively).
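As an illustration of this analysis chain (function names and the numerical inversion are ours; the fit coefficients, \(V_{\mathrm{shift}}\) and the lattice parameters are the values quoted here and in the following subsection, Eqs. (2) and (3)), the twist angles can be estimated from the full-filling densities as follows:

```python
import numpy as np

A_GR = 0.246  # graphene lattice constant (nm)

def density_from_gate(v_g, v_shift=-0.045):
    # linear gate-to-density conversion from the Hall fit quoted above
    # (result in units of 1e12 cm^-2, V_G in volts)
    return 1.24 * (v_g - v_shift) + 0.27

def moire_period(theta_deg, delta=0.0, a=A_GR):
    # superlattice periodicity of Eq. (3) below, in nm
    t = np.deg2rad(theta_deg)
    return (1 + delta) * a / np.sqrt(2 * (1 + delta) * (1 - np.cos(t)) + delta**2)

def full_filling_density(theta_deg, delta=0.0):
    # n_s of Eq. (2) below; 1 nm^-2 = 100 x 1e12 cm^-2
    lam = moire_period(theta_deg, delta)
    return 8.0 / (np.sqrt(3) * lam**2) * 1e2

def twist_from_ns(n_s, delta=0.0):
    # invert n_s(theta) numerically to estimate the twist angle (degrees)
    thetas = np.linspace(0.1, 2.0, 20000)
    ns = np.array([full_filling_density(t, delta) for t in thetas])
    return thetas[np.argmin(np.abs(ns - n_s))]

# sanity check against the values quoted in the main text (approximately 13.7 and 13.4 nm)
print(moire_period(1.03, 0.0), moire_period(0.51, 0.0164))
```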
### First-order superlattice periodicity
Our transport data reveal two moire lattices hosted by our sample: one given by graphene aligned to hBN, and another due to the twisted bilayer graphene (TBG). Such moire lattices exhibit a resistive state at a particular carrier density associated with the full-filling state of the superlattice [33, 66]. For a superlattice formed by two superposed hexagonal lattices, this full-filling carrier density \(n_{\mathrm{s}}\) is given by [33]
\[n_{\mathrm{s}}=\frac{8}{\sqrt{3}\lambda_{\mathrm{M}}^{2}}, \tag{2}\]
where the twist angle \(\theta\) and lattice mismatch \(\delta\) define the superlattice periodicity \(\lambda_{\mathrm{M}}\) as
\[\lambda_{\mathrm{M}}=\frac{(1+\delta)a}{\sqrt{2(1+\delta)(1-\cos\theta)+ \delta^{2}}} \tag{3}\]
with \(a=0.246\) nm corresponding to the graphene lattice constant. The properties of TBG are reflected by the curves with \(\delta=0\%\), while we model the hBN-graphene superlattice with \(\delta=1.64\%\). This value is slightly lower than the value typically stated in the literature, \(\delta=1.8\%\); however, the typical amount of strain in the graphene layers of \(\approx 0.1\%\) can account for this difference. |
2308.01862 | Wider and Deeper LLM Networks are Fairer LLM Evaluators | Measuring the quality of responses generated by LLMs is a challenging task,
particularly when it comes to evaluating whether the response is aligned with
human preference. A novel approach involves using the LLM itself to make
evaluation and stabilizing the results through multiple independent
evaluations, similar to a single-layer narrow LLM network. This network
consists of a fixed number of neurons, with each neuron being the same LLM. In
this paper, we draw upon the extensive research on deep neural networks to
explore whether deeper and wider networks can lead to fairer evaluations.
Specifically, inspired by the observation that different neurons in a neural
network are responsible for detecting different concepts, we first adaptively
generate as many neuron roles as possible for each evaluation sample. Each
perspective corresponds to the role of a specific LLM neuron in the first
layer. In subsequent layers, we follow the idea that higher layers in deep
networks are responsible for more comprehensive features, each layer receives
representations from all neurons in the previous layer, integrating the locally
learned evaluation information to obtain a more comprehensive evaluation
result. Interestingly, this network design resembles the process of academic
paper reviewing. To validate the effectiveness of our method, we construct the
largest and most diverse English evaluation benchmark LLMEval$^2$ for LLM
evaluators, comprising 15 tasks, 8 abilities, and 2,553 samples. Experimental
results demonstrate that a wider network (involving many reviewers) with 2
layers (one round of discussion) performs the best, improving kappa correlation
coefficient from 0.28 to 0.34. We also leverage WideDeep to aid in the
assessment of Chinese LLMs, which has accelerated the evaluation time by 4.6
times, resulting in a 60% cost saving. WideDeep achieves a remarkable 93%
agreement level among humans. | Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, Yongbin Li | 2023-08-03T16:38:34Z | http://arxiv.org/abs/2308.01862v1 | # Wider and Deeper LLM Networks are Fairer LLM Evaluators
###### Abstract
Measuring the quality of responses generated by large language models (LLMs) is a challenging task, particularly when it comes to evaluating whether the response is aligned with human preference. A novel approach involves using the LLM itself to make evaluation and stabilizing the results through multiple independent evaluations, similar to a single-layer narrow LLM network. This network consists of a fixed number of neurons, with each neuron being the same LLM. In this paper, we draw upon the extensive research on deep neural networks to explore whether deeper and wider networks can lead to fairer evaluations. Specifically, inspired by the observation that different neurons in a neural network are responsible for detecting different concepts, we first adaptively generate as many neuron roles as possible for each evaluation sample. Each perspective corresponds to the role of a specific LLM neuron in the first layer. In subsequent layers, we follow the idea that higher layers in deep networks are responsible for more comprehensive features, each layer receives representations from all neurons in the previous layer, integrating the locally learned evaluation information to obtain a more comprehensive evaluation result. Interestingly, this network design resembles the process of academic paper reviewing, where each reviewer independently rates based on their preferences. Subsequently, through multiple discussions, they consider other reviewers' opinions to reach the final acceptance decision. To validate the effectiveness of our method, we construct the largest and most diverse English evaluation benchmark LLMEval\({}^{2}\) for LLM evaluators, comprising 15 tasks, 8 abilities, and 2,553 samples. Experimental results demonstrate that a wider network (involving many reviewers) with 2 layers (one round of discussion) performs the best, improving kappa correlation coefficient from 0.28 to 0.34. We also leverage WideDeep to aid in the assessment of Chinese LLMs, which has accelerated the evaluation time by 4.6 times, resulting in a 60% cost saving. WideDeep achieves a remarkable 93% agreement level among humans\({}^{2}\).
## 1 Introduction
The rapid progress and remarkable achievements of large-scale pre-trained language models (LLMs) have catalyzed a revolutionary transformation in the realm of natural language processing [23; 31; 36]. These models have showcased substantial improvements across various applications, such as
dialogue [42], summarization [4], and code generation [6]. The majority of tasks involve open-ended, inherently subjective, and reference-free responses, rather than selecting from a fixed set of answers. Consequently, evaluating the correspondence of their generated responses with human intent becomes a challenge [28]. Traditional automatic metrics such as BLEU [24] and ROUGE [19] have been shown to have relatively low correlation with human judgments, especially for open-ended generation tasks [21], while human evaluation is often time-consuming and costly. Thus, there is a growing demand for automated assessment methods that can consistently align with human judgments while being more efficient and cost-effective [13; 18; 5].
Recent research has introduced the LLMs-as-evaluator paradigm, utilizing LLMs to compare candidate responses with the assumption that LLMs have learned to assign higher probabilities to high-quality and fluent texts [7; 8; 14; 16; 32]. FairEval [33] finds that the ranking of candidate responses can be easily altered by exchanging their order of appearance in the prompt context. They swap the positions of the two responses for two rounds of scoring and ensemble the results of multiple LLM runs in pursuit of result stability. Similarly, LLM-as-a-judge [43] also observes the position bias. It swaps the order of the two answers and retains the evaluation score only if the results remain consistent in both orders. In cases of inconsistency after swapping, it declares a tie. Essentially, these methods regard each LLM as an individual neuron and construct single-layer narrow networks, aggregating evaluation scores from a limited number of LLMs. FairEval [33] identifies that the optimal performance is achieved when three LLM neurons are employed; an increase in the number of neurons leads to a decline in effectiveness. Moreover, existing benchmarks for assessing LLMs' performance in evaluating text quality lack diverse evaluation capabilities. For instance, the benchmark utilized by FairEval comprises only 80 samples. Thus, there is an urgent requirement for more comprehensive datasets that can holistically evaluate LLMs' ability to assess the quality of generated text.
In this paper, we first delve into the realm of deeper and wider LLM networks for LLM evaluation. Systematic design has led to the development of deeper and wider neural networks, such as ResNets [11] for depth and ResNeXT [37] for width. These advancements have resulted in enhanced learning and ultimately improved performance compared to relatively shallow and narrow networks [17]. Therefore, we aim to increase the number of LLM neurons and layers that collaborate in the evaluation network, with the goal of creating a fairer LLM evaluator. It has been observed that different neurons in each layer of state-of-the-art deep networks match human-interpretable but distinct concepts [44; 39; 22; 2; 15; 27; 3]. Moreover, the features in different layers focus on different views for samples [10; 26; 20]. For example, the features in lower layers tend to encode more local contents with basic syntactic representations in NLP. Higher layers capture more complex semantics and usually produce higher-level semantic representations [9; 25]. However, in the evaluation network composed of different LLM neurons, we can only achieve forward computation and cannot update parameters as in deep neural networks where the different neurons are responsible for detecting different concepts and different layers abstract different granularity features through backpropagation. Therefore, in the network design, we artificially implement these two important
Figure 1: (a) Prior methods are single-layer LLM networks that combine assessments from a fixed number of LLM neurons. (b) In contrast, our method delves into the realm of wider and deeper multi-layer networks, where each neuron provides a distinct neuron role.
characteristics. Specifically, for each evaluation sample, we first ask the LLM about the candidate perspectives that could be used to assess the sample quality. Each perspective is explicitly injected into the evaluation process of each LLM neuron in the first layer as the concept that this neuron is responsible for detecting, outputting evaluation scores and reasons as the neuron's representation. For subsequent layers in the multi-layer LLM network, each layer receives representations from all neurons in the previous layer, integrating and abstracting the previously learned local evaluation information to obtain a more comprehensive evaluation result.
Interestingly, our wider and deeper LLM network can be likened to the process of paper review. First, each reviewer independently assigns a score based on their own research background and understanding of the paper (the evaluation sample), representing the first layer. Then, a discussion phase follows, during which all reviewers take into account each other's evaluations to update their scores. This iterative process can continue through multiple rounds, analogous to subsequent layers in our network. Finally, the Chair or Editor consolidates all the reviewers' opinions to make the decision on whether the paper will be accepted. The final experiments reveal that an LLM network that is wider yet limited to only two layers performs best. This coincidence aligns with the current mainstream conference paper review process, where many reviewers are brought in for blind reviews and a single round of discussion, after which the chair makes the final decision.
To facilitate research on LLM evaluators, we also build a comprehensive benchmark that encompasses 15 tasks, such as question answering, text summarization, and programming. Additionally, the benchmark assesses 8 different abilities, such as logical reasoning, semantic understanding and text composition. To ensure thorough evaluation, we have compiled 2,553 samples, each of which comes with human-annotated preferences, making the benchmark 31 times larger than the dataset used in FairEval [33].
The major contributions of this paper are summarized as follows:
* We explore the multi-layer wide network in which each neuron possesses a distinct neuron role and cooperative evaluations are performed among different layers of neurons. We observe that a wider two-layer LLM network, namely WideDeep, achieves the best evaluation results, essentially mirroring a paper review process.
* We introduce the largest and most diverse benchmark LLMEval\({}^{2}\) for LLM evaluator. LLMEval\({}^{2}\) involves diverse ability evaluation, and contributes to more sufficient assessment.
* Our WideDeep network's effectiveness has been extensively validated through thorough experimentation on two existing benchmarks and LLMEval\({}^{2}\). This validation reveals a notable 3.5-point increase in accuracy, coupled with a noteworthy enhancement of 0.06 in the kappa correlation coefficient. Notably, we have successfully addressed a limitation previously identified in FairEval, where employing more than three LLMs failed to yield performance enhancements. This accomplishment underscores that augmenting the number of LLM neurons contributes to a more equitable evaluation process.
* We also leverage WideDeep to assess the performance of Chinese LLMs. WideDeep's advantages have further expanded compared to the English benchmarks, with improvements of 6 pts, 5.5 pts, and 0.09 in accuracy, F1 score, and kappa correlation coefficient, respectively, achieving a labeling accuracy of 74% and reaching a 93% agreement level among humans. We demonstrate that WideDeep accelerates the LLM evaluation process by 4.6 times and decreases the average annotation cost per sample by 60%.
## 2 Related Work
There has been a proliferation of LLM-based chatbots that harness instruction fine-tuning and learn from human feedback to unlock the ability to respond to questions in line with human preferences [1; 38; 29]. However, assessing whether an LLM is well aligned with human preference is not a straightforward task. Traditional LLM benchmarks like MMLU [12] fall short in effectively distinguishing between these aligned models and the base models, as they only require the LLM to answer multiple-choice questions. Even if we have evaluation benchmarks available, such as several questions and manually annotated responses, commonly used n-gram-based metrics like BLEU [24] and ROUGE [19], as well as embedding-based metrics like BERTScore [40] and MoverScore [41], can only measure lexical and semantic similarity between a generated response and the reference response. These metrics have been shown to have relatively low correlation with human judgments [21].
In recent research, it has been noticed that extensive generative pre-training has enabled LLMs to excel in assigning higher probabilities to high-quality responses based on given instructions and context [8]. Building on this insight, researchers have leveraged ChatGPT and GPT-4 to evaluate numerous natural language generation tasks, including text summarization, story generation, data-to-text generation, and machine translation, showcasing remarkable performance [21, 32, 16]. However, subsequent investigations have unveiled certain issues with LLM evaluators, particularly concerning biases related to position and verbosity [33, 43]. To address these biases, researchers have adopted techniques such as swapping the order of candidate responses and conducting multiple independent evaluations, which effectively mitigates biases and yields more reliable results. In this paper, we propose a unified approach, considering previous LLM evaluators as one-layer narrow LLM networks with varying numbers of neurons. Each neuron independently scores candidate samples from the same evaluation perspective. Drawing inspiration from deep neural networks, we delve into wider and deeper LLM networks, assigning distinct functionalities and roles to different LLM neurons. Each layer takes evaluation outputs from all neurons in the previous layer, resulting in a fairer LLM evaluator. Furthermore, we contribute to the field by creating an extensive benchmark for evaluation across various tasks, aiming to drive progress and innovation in this research domain.
## 3 Methodology
In this section, we begin by introducing the multi-layer wide LLM network in Sec. 3.1. Next, we provide a more intuitive explanation from the perspective of academic paper review in Sec. 3.2.
### Deeper and Wider LLM Network
State-of-the-art deep neural networks are composed of interconnected layers of neurons, where each neuron performs a specific function by processing input from other neurons and producing output for the next layer. At the bottom layer of the network, a considerable number of neurons are responsible for processing the input data and extracting diverse features that are relevant to the task at hand. As we move up the layers of the network, the neurons capture higher-level features and relationships by combining the lower-level features learned in preceding layers, which can be critical for solving more complex tasks. However, it remains unexplored whether widening and deepening the single-layer LLM network with a fixed number of neurons in Figure 1 (a) can improve the evaluation performance. Inspired by this, we enhance the network by augmenting the number of neurons in each layer and increasing the depth of the network in Figure 1 (b), making the LLM network deeper and wider. Building such a network involves three key points: **The role of each neuron**, **The connection of different layers** and **The aggregation of final results**.
**The role of each neuron.** In deep neural networks, different neurons perform distinct functions where they may learn to respond to different linguistic features such as word order, grammar or semantics by back-propagation optimization. The role of each neuron is learned by gradient back-propagation to adjust the neuron parameters. However, within our LLM network, each neuron represents a frozen LLM, and we are unable to adjust the parameters of the network. To keep different functions for LLM neurons, we first query LLMs to generate diverse neuron roles for each sample according to its content. Concretely, given a testing question \(q\), two candidate responses \(A=\{a_{1},a_{2}\}\), a prompt \(\pi_{0}\), and a template \(\mathtt{F}()\), the generation of neuron roles describes a probability distribution \(p_{\mathtt{LLM}}(\mathtt{P}|\mathtt{F}(q,A,\pi_{0}))\) over output perspectives \(\mathtt{P}=\{\mathtt{p}_{1},\mathtt{p}_{2},...,\mathtt{p}_{n}\}\) as computed by the LLM. \(\mathtt{F}()\) aims to fill the question \(q\) and responses \(A\) into the slots of prompt \(\pi_{0}\). The neuron role prompt \(\pi_{0}\) is summarized as follows:
```
Neuron Role Prompt
Please help me summarize that for a user question "{question}", if I want to determine which of two answers is better, from what angles do we need to evaluate? The two answers are respectively "{answer_1}" and "{answer_2}". Output the name and evaluation content of each angle. Each line is an evaluation angle. Use a newline to separate different evaluation angles. Each evaluation angle Name starts with $ and ends with $.
```
For the generated neuron roles \(\mathrm{P}=\{\mathrm{p}_{1},\mathrm{p}_{2},...,\mathrm{p}_{n}\}\), we respectively assign \(\mathrm{p}_{i}\) to each neuron \(\mathrm{n}_{i}\) in each layer, simulating the different roles of neurons in deep neural networks. For example, as shown in Figure 1 (b), the LLM, such as gpt-3.5-turbo, generates four perspectives, including coherence, relevance, harmlessness and accuracy; the LLM network then possesses four neurons in each layer, where each neuron, played by the LLM, is responsible for evaluating the candidate responses from one of the four perspectives. For the input layer of the LLM network, given a prompt \(\pi_{1}\) and a template \(\mathrm{F}()\), each neuron \(\mathrm{n}_{i}\) defines a probability distribution \(p_{\texttt{LLM}}^{i}(\mathrm{e}_{1}^{i}|q,A)\) over the output evaluation result \(\mathrm{e}_{1}^{i}\) as computed by the LLM:
\[p_{\texttt{LLM}}^{i}(\mathrm{e}_{1}^{i}|q,A)=p_{\texttt{LLM}}^{i}(\mathrm{e}_ {1}^{i}|\mathrm{F}(q,A,\mathrm{p}_{i},\pi_{1}))p_{\texttt{LLM}}(\mathrm{p}_{i} |\mathrm{F}(q,A,\pi_{0})) \tag{1}\]
where the input layer evaluation prompt \(\pi_{1}\) for LLMs is described as follows:
```
Input Layer Evaluation Prompt
You are a member of the expert group for checking the quality of answer. You are given a question and two answers. Your job is to decide which answer is better for replying question.
[Question]
{question}
[The Start of Assistant 1's Answer]
{answer_1}
[The End of Assistant 1's Answer]
[The Start of Assistant 2's Answer]
{answer_2}
[The End of Assistant 2's Answer]
[System]
Take {perspective} as the Angle of View, we would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. Each assistant receives an overall score on a scale of 1 to 10, ......
PLEASE OUTPUT WITH THE FOLLOWING FORMAT:
<start output>
Evaluation evidence: <your evaluation explanation here>
Score of Assistant 1: <score>
Score of Assistant 2: <score>
<end output>
Now, start your evaluation:
```
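To make the data flow of the first layer concrete, the following sketch (our own illustrative code; the `call_llm` client and the regex-based parsing are placeholders rather than the released implementation) generates the neuron roles with the prompt above and collects one evaluation per role:

```python
import re

def call_llm(prompt: str) -> str:
    # placeholder: plug in the chat-completion client used for the experiments
    # (gpt-3.5-turbo in the main results); this function name is ours
    raise NotImplementedError

def generate_roles(question, answer_1, answer_2, role_prompt_template):
    # query the LLM with the Neuron Role Prompt; each angle name is wrapped in $...$
    text = call_llm(role_prompt_template.format(
        question=question, answer_1=answer_1, answer_2=answer_2))
    return re.findall(r"\$(.+?)\$", text)

def first_layer(question, answer_1, answer_2, roles, eval_prompt_template):
    # one LLM neuron per role, each evaluating from its own perspective
    outputs = []
    for role in roles:
        reply = call_llm(eval_prompt_template.format(
            question=question, answer_1=answer_1, answer_2=answer_2,
            perspective=role))
        scores = re.findall(r"Score of Assistant \d: *(\d+(?:\.\d+)?)", reply)
        outputs.append({"role": role, "evidence": reply,
                        "scores": [float(s) for s in scores[:2]]})
    return outputs
```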
**The connection of different layers.** In naive deep neural networks, the neurons in each layer are interconnected through weighted connections. These connections are responsible for transmitting information from one layer to the next during the forward pass of the network. Concretely, within each hidden layer, each neuron is connected to all the neurons in the previous layer. The connections between neurons in the hidden layers are weighted, and the weights are learned through the training process to allow the network to capture and represent complex patterns and features from the input data. In our LLM network, there are neither numerical weights nor training optimization. Therefore, inspired by Stacked LLMs [30], we write the prompt \(\pi_{2}\), which serves as the weights connecting each neuron with all neurons in the previous layer. Similarly, each neuron \(\mathrm{\tilde{n}}_{i}\) in the \(l_{th}\) layer defines a probability distribution \(p_{\texttt{LLM}}^{i}(\mathrm{e}_{l}^{i}|q,A)\) over the output evaluation result \(\mathrm{e}_{l}^{i}\) as computed by the LLM:
\[p_{\texttt{LLM}}^{i}(\mathrm{e}_{l}^{i}|q,A)=\sum_{j=1}^{n}p_{\texttt{LLM}}^{i}(\mathrm{e}_{l}^{i}|\mathrm{F}(q,A,\mathrm{e}_{l-1}^{j},\mathrm{p}_{l-1}^{j},\pi_{2}))p_{\texttt{LLM}}^{j}(\mathrm{e}_{l-1}^{j}|\mathrm{F}(q,A,\mathrm{p}_{j},\pi_{1})) \tag{2}\]
where \(n\) is the number of neurons in the previous layer and \(\mathrm{p}_{l-1}^{j}\) is the role of the \(j_{th}\) neuron in the \((l-1)_{th}\) layer. \(\pi_{2}\) is the hidden layer evaluation prompt for LLMs, which is described as follows:
```
Hidden Layer Evaluation Prompt
You are a member of the expert group for checking the quality of answer. You are given a question and two answers. Your job is to decide which answer is better for replying question.
[Question]
{question}
[The Start of Assistant 1's Answer]
{answer_1}
[The End of Assistant 1's Answer]
[The Start of Assistant 2's Answer]
{answer_2}
[The End of Assistant 2's Answer]
[System]
You and your colleagues in the expert group have conducted several rounds of evaluations.
[The Start of Your Historical Evaluations]
{Your own evaluation from last layer}
[The End of Your Historical Evaluations]
[The Start of Other Colleagues' Evaluations]
{Other evaluations from last layer}
[The End of Other Colleagues' Evaluations]
Again, take {inherited perspectives} as the Angle of View, we would like to request your feedback on the performance of two AI assistants in response to the user question displayed above. Each assistant receives an overall score on a scale of 1 to 10, ......
PLEASE OUTPUT WITH THE FOLLOWING FORMAT:
<start output>
Evaluation evidence: <your evaluation explanation here>
Score of Assistant 1: <score>
Score of Assistant 2: <score>
<end output>
Now, start your evaluation:
```
evaluate the candidate responses. These discussions resemble the subsequent layers of our network, where reviewers compare and contrast their assessments, explore areas of agreement or disagreement, and identify potential biases or blind spots. This iterative process of discussion can span multiple rounds, analogous to the deep layers in our network. Finally, the chair makes a decision, akin to the result aggregation step in our network, by considering the collective feedback from the reviewers. By illustrating the functioning of our LLM network through the academic paper review analogy, we aim to provide a more intuitive understanding of its operations and effectiveness.
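A corresponding sketch of the deeper layers and of the final aggregation is given below; the two aggregation rules (score averaging and majority voting) follow the verbal description of c\({}_{1}^{*}\) and c\({}_{2}^{*}\) given with the experimental results and are a simplified reading rather than the exact form of Equation 3.

```python
import re

def call_llm(prompt: str) -> str:
    # same placeholder chat-completion client as in the first-layer sketch
    raise NotImplementedError

def parse_scores(reply: str):
    found = re.findall(r"Score of Assistant \d: *(\d+(?:\.\d+)?)", reply)
    return [float(s) for s in found[:2]]

def next_layer(question, answer_1, answer_2, prev_outputs, hidden_prompt_template):
    # each neuron re-evaluates while seeing its own and its colleagues'
    # evaluations from the previous layer (the hidden-layer prompt above)
    outputs = []
    for i, own in enumerate(prev_outputs):
        others = "\n".join(o["evidence"] for j, o in enumerate(prev_outputs) if j != i)
        reply = call_llm(hidden_prompt_template.format(
            question=question, answer_1=answer_1, answer_2=answer_2,
            own_evaluation=own["evidence"], other_evaluations=others,
            perspective=own["role"]))
        outputs.append({"role": own["role"], "evidence": reply,
                        "scores": parse_scores(reply)})
    return outputs

def aggregate_by_average(outputs):
    # c1*-style: average the two scores over all neurons and pick the higher one
    s1 = sum(o["scores"][0] for o in outputs) / len(outputs)
    s2 = sum(o["scores"][1] for o in outputs) / len(outputs)
    return "tie" if s1 == s2 else ("assistant_1" if s1 > s2 else "assistant_2")

def aggregate_by_vote(layers):
    # c2*-style: majority vote over the per-neuron verdicts of one or all layers
    votes = []
    for layer in layers:
        for o in layer:
            if o["scores"][0] != o["scores"][1]:
                votes.append("assistant_1" if o["scores"][0] > o["scores"][1] else "assistant_2")
    return max(set(votes), key=votes.count) if votes else "tie"
```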
Figure 3: _Left_ is the distribution of all datasets in LLMEval\({}^{2}\). The outer and middle circles display the names of datasets and their associated tasks, respectively. The inner circle represents the proportions of three categories of data in the benchmark concerning the preference between two responses: the first one being better, the second one being better, or the two responses having similar quality. _Right_ illustrates covered 8 evaluation abilities of LLMEval\({}^{2}\).
Figure 2: Academic paper review process for evaluating the quality of candidate responses, comprising blind review, reviewer discussion and chair summary.
## 4 LLMEval\({}^{2}\) Benchmark
In addition to exploring the wider and deeper LLM network to obtain fairer evaluation results, we also seek to propose improvements to the current LLM evaluator benchmark. The widely used benchmarks, such as FairEval [33] and MT-bench [43], only consist of 80 testing samples, leading to unstable evaluation results and making it challenging to comprehensively assess the LLM evaluator's capabilities. While PandaLM constructs a test set comprising 999 samples, it still lacks statistics for different abilities and suffers from a limitation in data diversity, as it solely relies on a single self-instruct source [35]. To address these shortcomings, we present LLMEval\({}^{2}\), the largest and most diverse evaluation benchmark for the LLM Evaluator to date.
**Benchmark Construction.** Assessing the capabilities of the LLM evaluator requires data that includes a question, a pair of candidate responses, and a human label indicating the preferred response. We notice that the format of the evaluation data resembles that of the samples used to train a reward model. The reward trainer aims to grasp human preferences by ranking the candidate responses based on human labels. Thus, we compile datasets used for training a reward model, totaling 15 datasets (shown as the outer circle in Figure 3 _left_). Next, we employ data sampling techniques to balance data diversity and evaluation costs, resulting in a collection of 2,553 evaluation samples, each annotated with human preferences, across all 15 datasets.
**Statistics.** In this benchmark, response 1 is judged to align with human preferences in 1,050 samples, while response 2 is deemed superior in another 1,021 samples. In the remaining 482 samples, the two responses are considered difficult to differentiate in terms of quality. As illustrated in Figure 3 (_left_), the benchmark encompasses eight tasks: Story Generation, Text Summarization, Data-to-Text Generation, Retrieval QA, Dialogue, Commonsense NLI, Open-domain QA, and Programming. Together, these tasks cover the benchmark's eight evaluated abilities: _Induction and Summarization_, _Semantic Understanding_, _Knowledge QA_, _Logical Reasoning_, _Text Composition_, _Dialogue_, _Harmlessness_ and _Multilingual_.
## 5 Experiments
In this section, our primary focus is to address the following research questions: (**RQ1**) Does an LLM network with a wider and deeper structure yield improved evaluation performance? (**RQ2**) Which neuron roles does the LLM prioritize, and how do they impact the results? (**RQ3**) To what extent can our LLM evaluator accelerate manual annotation in real LLM business settings?
### Experimental Settings
**Datasets.** We conduct evaluations on three benchmarks, consisting of two existing datasets, FairEval [33] and PandaLM [34], along with our newly constructed dataset, LLMEval\({}^{2}\). FairEval comprises a total of 80 samples, and the candidate responses are generated by Vicuna-13b and ChatGPT. Meanwhile, PandaLM consists of 999 samples, which were drawn from the diverse human evaluation dataset of self-instruct [35]. The paired responses in PandaLM are generated by LLaMA-7B, Bloom-7B, Cerebras-GPT-6.7B, OPT-7B, and Pythia-6.9B.
**Implementation Details.** We use accuracy (Acc), Macro-F1, and the kappa correlation coefficient (Kap.) as our evaluation metrics. For reporting the main results, we utilize gpt-3.5-turbo as the LLM neuron on the full dataset due to cost constraints. Additionally, we construct a smaller version called LLMEval\({}^{2}\) mini, which consists of 20 samples drawn from each of the 15 datasets, resulting in a total of 300 samples. These samples are used for analytical experiments.
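For reference, a minimal sketch of computing these three metrics is shown below; we assume the kappa correlation coefficient refers to Cohen's kappa, and the label arrays are hypothetical placeholders rather than benchmark data.

```python
# Sketch of the reported metrics over the three preference classes.
from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score

gold = ["response_1", "tie", "response_2", "response_1"]   # hypothetical human labels
pred = ["response_1", "response_2", "response_2", "response_1"]  # hypothetical evaluator outputs

acc = accuracy_score(gold, pred)
macro_f1 = f1_score(gold, pred, average="macro")
kappa = cohen_kappa_score(gold, pred)
print(f"Acc={acc:.4f}, Macro-F1={macro_f1:.4f}, Kap.={kappa:.4f}")
```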
### Experimental Results
Table 1 shows the main results of our multi-layer wide LLM network WideDeep compared with the prior single-layer network with a fixed number of neurons, FairEval [33]. We implement four variants: WideDeep \(c_{1}^{*}\), WideDeep \(c_{2}^{*}(l_{1})\), WideDeep \(c_{2}^{*}(l_{2})\), and WideDeep \(c_{2}^{*}(all)\). WideDeep \(c_{1}^{*}\) averages the scores from all neurons in the LLM network and chooses the response with the higher score (\(c_{1}^{*}\) in Equation 3). For the latter three, we aggregate the results based on \(c_{2}^{*}\) in Equation 3: WideDeep \(c_{2}^{*}(l_{1})\) votes over the evaluation results of the first layer only, WideDeep \(c_{2}^{*}(l_{2})\) votes over the second layer only, and voting over the evaluation results of all layers is
denoted as WideDeep \(c_{2}^{*}(all)\). The best results for each evaluation metric are in bold. Note that we have attempted to use deeper LLM networks (more than 2 layers), but this resulted in a decrease in performance. Therefore, in our main experiment, we do not restrict the number of neurons in each layer, but we limit the network depth to 2 layers. We discuss the impact of network depth on the results in the analysis experiments.
We can observe that our multi-layer wide LLM network outperforms FairEval significantly, with increases in accuracy of 3.2, 4.4, and 3 points, and improvements in kappa correlation of 3.7, 8.4, and 6.3 points on the three respective benchmarks. Compared with voting within a single layer of the LLM network (WideDeep \(c_{2}^{*}(l_{1})\) and WideDeep \(c_{2}^{*}(l_{2})\)), WideDeep \(c_{2}^{*}(all)\), which votes over the evaluation results from all layers, achieves the best overall performance. Meanwhile, in comparison with WideDeep \(c_{2}^{*}(l_{1})\), WideDeep \(c_{2}^{*}(l_{2})\) reaches higher performance, which demonstrates the effectiveness of deepening the LLM network.
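To make the aggregation rules concrete, a minimal sketch is given below; the function names and input format are ours, and it reflects our reading of Equation 3, with \(c_{1}^{*}\) averaging per-response scores over all neurons and \(c_{2}^{*}\) taking a majority vote over per-neuron preferences.

```python
from collections import Counter

def aggregate_c1(scores):
    """c1*-style aggregation: average the two response scores over all neurons
    (scores is a list of (s1, s2) pairs, one per LLM neuron) and pick the winner."""
    mean1 = sum(s1 for s1, _ in scores) / len(scores)
    mean2 = sum(s2 for _, s2 in scores) / len(scores)
    if mean1 > mean2:
        return "response_1"
    return "response_2" if mean2 > mean1 else "tie"

def aggregate_c2(preferences):
    """c2*-style aggregation: majority vote over per-neuron preferences,
    e.g. from one layer only or from all layers pooled together."""
    winner, _ = Counter(preferences).most_common(1)[0]
    return winner

# toy usage with hypothetical neuron outputs
print(aggregate_c1([(8.0, 7.5), (6.0, 9.0), (7.0, 7.0)]))
print(aggregate_c2(["response_1", "response_2", "response_1"]))
```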
### Experimental Analyses
Due to cost constraints, we extract 20 samples from each of the 15 datasets included in LLMEval\({}^{2}\), resulting in a total of 300 testing samples, namely LLMEval\({}^{2}\) mini. This mini dataset allows us to easily assess the impact of network width, depth and neuron roles.
**Wider LLM network is a Fairer Evaluator.** Table 2 illustrates the performance improvement as the number of neurons in each layer of the LLM network (\(n\)) increases. When the number of layers \(l\) is limited to one or two, we observe a consistent upward trend in performance. This demonstrates the effectiveness of widening the LLM network, fully unleashing the potential of a group of neurons.
**Slightly deeper LLM network is a Fairer Evaluator.** From Table 2, we can also observe that increasing the number of layers (\(l\)) in the network from 1 to 2 while keeping the number of neurons
| Method | FairEval Acc | FairEval Macro-F1 | FairEval Kap. | PandaLM Acc | PandaLM Macro-F1 | PandaLM Kap. | LLMEval\({}^{2}\) Acc | LLMEval\({}^{2}\) Macro-F1 | LLMEval\({}^{2}\) Kap. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FairEval [33] | 0.587 | – | 0.31 | 0.7147 | 0.5531 | 0.4891 | 0.5735 | 0.4663 | 0.2807 |
| WideDeep \(c_{1}^{*}\) | 0.6063 | 0.4457 | 0.3336 | 0.7447 | 0.5834 | 0.5371 | 0.5946 | 0.4446 | 0.3197 |
| WideDeep \(c_{2}^{*}(l_{1})\) | 0.6125 | 0.4394 | 0.3215 | 0.7467 | 0.6481 | 0.5524 | 0.5895 | 0.4622 | 0.3155 |
| WideDeep \(c_{2}^{*}(l_{2})\) | **0.6188** | **0.4479** | **0.3472** | 0.7447 | 0.6295 | 0.5504 | 0.5962 | 0.5028 | 0.3345 |
| WideDeep \(c_{2}^{*}(all)\) | **0.6188** | 0.4465 | 0.3462 | **0.7568** | **0.6545** | **0.5726** | **0.6036** | **0.5041** | **0.3440** |

Table 1: Main Results on FairEval, PandaLM and LLMEval\({}^{2}\) benchmarks.
Figure 4: Comparison of accuracy between WideDeep and FairEval under eight abilities.
per layer fixed resulted in significant performance improvements. However, further deepening the network led to a slight decline in performance. The reason for this could be that deeper LLM networks tend to hold more homogeneous information, similar to overfitting in deep neural networks.
**Neuron roles are diverse and effective.** To mimic the way different neurons in a neural network are responsible for detecting different concepts, we require the LLM to generate potential evaluation dimensions before assessing the samples. In the network, each LLM neuron in every layer is responsible for evaluating one specific dimension. To elucidate the roles that the LLM assigns to neurons for each task, we present word clouds for four tasks in Figure 5: dialogue, harmlessness QA, story generation, and programming. Note that we did not explicitly provide task names or definitions to the LLM when generating the roles. Remarkably, these assigned roles appear to be logical and adaptable, changing dynamically based on the specific task characteristics. For harmlessness QA,
| Setting | Acc | Macro-F1 |
| --- | --- | --- |
| WideDeep (\(l=2,n=2\)) | 0.6400 | 0.5187 |
| WideDeep (\(l=2,n=2\)) W/O Neuron Roles | 0.6267 | 0.4992 |
| WideDeep (\(l=2,n=\)NL) | 0.6567 | 0.5666 |
| WideDeep (\(l=2,n=\)NL) W/O Neuron Roles | 0.6400 | 0.5086 |

Table 3: Effectiveness of neuron roles. NL indicates no limit on the number of neurons in each layer.
Figure 5: Word clouds of neuron roles on **(a)**_Dialogue_ **(b)**_Harmlessness QA_ **(c)**_Story Generation_ **(d)**_Programming_ task.
| Layers | Metric | \(n=1\) | \(n=2\) | \(n=3\) | \(n=4\) | \(n=\)NL |
| --- | --- | --- | --- | --- | --- | --- |
| \(l=1\) | Acc | 0.6033 | 0.6333 | 0.6300 | 0.6267 | 0.6300 |
| | Macro-F1 | 0.4709 | 0.4704 | 0.4793 | 0.4885 | 0.5116 |
| \(l=2\) | Acc | 0.6333 | 0.6400 | 0.6433 | 0.6500 | 0.6567 |
| | Macro-F1 | 0.4819 | 0.5187 | 0.4772 | 0.5159 | 0.5666 |
| \(l=3\) | Acc | 0.6533 | 0.6400 | 0.6433 | 0.6300 | 0.6500 |
| | Macro-F1 | 0.5076 | 0.5084 | 0.4764 | 0.4798 | 0.5053 |

Table 2: Performance on wider and deeper networks. NL indicates no limit on the number of neurons.
the LLM generates roles related to security, including Safety, Legal, and Ethical. In story generation, the LLM assigns roles like Coherence, Relevance, and Character. Meanwhile, the programming task involves algorithm-related roles, such as Correctness and Efficiency. Having reliable and diverse neuron roles allows the LLM network to effectively exploit the value of multiple neurons as the network becomes wider. As illustrated in Table 3, we conduct two groups of experiments with \(l=2\) layers, setting the number of neurons \(n\) to 2 and to no limit (NL), respectively. The results show that without neuron roles the accuracy decreases by 1.33 and 1.67 points and Macro-F1 by 1.95 and 5.80 points in the two settings.
**WideDeep can consume more neurons than baselines.** With a wider and deeper architecture and diverse neuron roles, our WideDeep network can utilize an unlimited number of LLM neurons. Previous methods, such as FairEval [33], can also harness a large number of LLM neurons by integrating multiple independent LLM evaluations. In Figure 6, we demonstrate that WideDeep leverages LLM neurons more efficiently than FairEval, achieving significantly improved accuracy under almost all neuron quantity constraints. Moreover, as the number of neurons increases, the performance continues to improve. For these experiments, we opted for a two-layer WideDeep network, where, under an odd-numbered neuron constraint, the number of neurons in the second layer is reduced by one. On the other hand, FairEval's performance saturates when the number of neurons reaches five, and any further increase leads to a decline in performance. This observation aligns with the conclusions of the original research, further confirming the positive impact of our deeper network and diversified neuron roles.
### Application in Chinese LLM Evaluation
We also utilize WideDeep to assess the performance of the Chinese LLMs by determining which of the three responses under the same prompt is better. Due to variations in evaluation data and tasks, the traditional manual annotation process involves multiple steps such as annotator training, small-scale trial annotation, selection of official annotators, and cross-annotation by multiple individuals. However, with the assistance of WideDeep, this process has been simplified to involve only a fixed team of professional annotators who perform sampling checks on the results generated by WideDeep.
| Method | Acc | Macro-F1 | Kap. |
| --- | --- | --- | --- |
| GPT-4 | 0.6700 | 0.6261 | 0.4587 |
| FairEval | 0.6800 | 0.6692 | 0.5074 |
| WideDeep (Ours) | 0.7400 | 0.7245 | 0.5965 |

Table 4: Performance on Chinese LLM evaluation with gpt-4 as the neurons.
Figure 6: Performance under different neuron quantity constraints.
In Table 4, we present a comparison of the effectiveness of WideDeep, FairEval, and a standalone gpt-4 evaluator in Chinese LLM evaluation. WideDeep's advantages have further expanded compared to the English benchmarks, with improvements of 6 pts, 5.5 pts, and 8.9 pts in accuracy, F1 score, and kappa correlation coefficient, respectively, achieving a labeling accuracy of 74%. The agreement among humans during the Chinese LLM evaluation stands at 80%, which indicates that WideDeep reaches roughly 93% of the human agreement level. In fact, each point of accuracy gained saves a significant amount of manual annotation time. Assuming the LLM evaluator's accuracy is \(x\), the annotators only need to review a fraction \((0.8-x)/(1-x)\) of the data annotated by the LLM evaluator to correct the labeling errors and reach the 80% accuracy level of manual annotation: reviewing a random fraction \(f\) of the samples and correcting all errors within them raises the accuracy to \(x+f(1-x)\), which equals 0.8 when \(f=(0.8-x)/(1-x)\). Therefore, the annotators only need to inspect 23% of the predicted results from WideDeep, while they would have to inspect 37.5% from FairEval and 39.3% from GPT-4. Overall, WideDeep has accelerated the LLM evaluation process by 4.6 times, saving a significant amount of time for human annotators. Furthermore, the average annotation cost per sample has decreased by 60%.
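A quick numerical check (a sketch using the accuracies from Table 4) reproduces the quoted fractions up to rounding:

```python
# Review fraction needed to reach 80% accuracy, given evaluator accuracy x.
for name, x in [("WideDeep", 0.74), ("FairEval", 0.68), ("GPT-4", 0.67)]:
    frac = (0.8 - x) / (1 - x)
    print(f"{name}: inspect {frac:.1%} of LLM-labelled samples")
```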
## 6 Conclusion
In this paper, we explore whether evaluation performance can be improved in deeper and wider LLM networks. Specifically, each neuron within the LLM network assumes a distinct evaluation role, and multiple neurons interact and collaborate, much like the interaction observed in deep neural networks. The evaluation process follows a feedforward approach, with each layer of neurons receiving inputs from the previous layer, facilitating a thorough and comprehensive assessment. An intuitive analogy for our designed LLM network can be drawn to the process of academic paper reviewing. Additionally, we present LLMEval\({}^{2}\), the largest and most diverse evaluation benchmark developed to date for the LLM Evaluator. Through extensive experiments, we demonstrate that a two-layer wider LLM network yields the best results, significantly enhancing the ability of LLMs to evaluate the quality of generated text. Furthermore, we apply our evaluator to assess the performance of Chinese LLMs, where it proves to speed up LLM evaluation process by 4.6 times and decrease the average annotation cost per sample by 60%.
|
2307.00321 | Algorithms for Euclidean-regularised Optimal Transport | This paper addresses the Optimal Transport problem, which is regularized by
the square of Euclidean $\ell_2$-norm. It offers theoretical guarantees
regarding the iteration complexities of the Sinkhorn--Knopp algorithm,
Accelerated Gradient Descent, Accelerated Alternating Minimisation, and
Coordinate Linear Variance Reduction algorithms. Furthermore, the paper
compares the practical efficiency of these methods and their counterparts when
applied to the entropy-regularized Optimal Transport problem. This comparison
is conducted through numerical experiments carried out on the MNIST dataset. | Dmitry A. Pasechnyuk, Michael Persiianov, Pavel Dvurechensky, Alexander Gasnikov | 2023-07-01T12:14:18Z | http://arxiv.org/abs/2307.00321v2 | # Algorithms for Euclidean-regularised Optimal Transport+
###### Abstract
This paper addresses the Optimal Transport problem, which is regularized by the square of Euclidean \(\ell_{2}\)-norm. It offers theoretical guarantees regarding the iteration complexities of the Sinkhorn-Knopp algorithm, Accelerated Gradient Descent, Accelerated Alternating Minimisation, and Coordinate Linear Variance Reduction algorithms. Furthermore, the paper compares the practical efficiency of these methods and their counterparts when applied to the entropy-regularized Optimal Transport problem. This comparison is conducted through numerical experiments carried out on the MNIST dataset.
Keywords: Optimal transport · Euclidean regularisation · Sinkhorn algorithm · Primal-dual algorithm · Alternating optimisation.
## 1 Introduction
The Optimal Transport (OT) problem has a long history [9, 15], has been extensively studied [17, 20], and continues to pique interest in the modern statistical learning community [2, 10]. This paper focuses on the discrete OT problem statement and the numerical optimisation methods applied to it. Formally, the original problem to solve is:
\[\min_{\begin{subarray}{c}X\mathbf{1}_{m}=a\\ X^{\top}\mathbf{1}_{n}=b\\ x_{ij}\geq 0\end{subarray}}\langle C,X\rangle, \tag{1}\]
where \(a\in\mathcal{S}_{n}\) and \(b\in\mathcal{S}_{m}\) are the source and destination distributions (measures), the unit simplex \(\mathcal{S}_{d}\equiv\{x\in\mathbb{R}_{+}^{d}\ |\ \sum_{i=1}^{d}x_{i}=1\}\), \(X\in\mathbb{R}_{+}^{n\times m}\) is a transportation plan such that \(x_{ij}\) is the mass to transport from the \(i\)-th source to the \(j\)-th destination, and \(C\in\mathbb{R}_{+}^{n\times m}\) is the cost of the transportation matrix.
An algorithm applied to the OT problem must derive an \(\varepsilon\)-optimal transportation plan, denoted by \(X_{\varepsilon}\) and defined as one that meets the following condition:
\[\langle C,X_{\varepsilon}\rangle-\varepsilon\leq\langle C,X^{*}\rangle\equiv \min_{\begin{subarray}{c}X\mathbf{1}_{m}=a\\ X^{\top}\mathbf{1}_{\mathbf{n}}=b\\ x_{ij}\geq 0\end{subarray}}\langle C,X\rangle,\]
and strictly adheres to constraints \(X_{\varepsilon}\mathbf{1}_{m}=a\), \(X_{\varepsilon}^{\top}\mathbf{1}_{n}=b\), and \(X_{\varepsilon}\in\mathbb{R}_{+}^{n\times m}\). To obtain such a solution, we consider the Euclidean-regularised OT problem:
\[\min_{\begin{subarray}{c}X\mathbf{1}_{m}=a\\ X^{\top}\mathbf{1}_{\mathbf{n}}=b\\ x_{ij}\geq 0\end{subarray}}\{f(X)\equiv\langle C,X\rangle+\tfrac{\gamma}{2}\|X \|_{2}^{2}\}, \tag{2}\]
where \(\|X\|_{2}^{2}\equiv\sum_{i=1,j=1}^{n,m}x_{ij}^{2}\), and apply convex optimisation methods to solve it. It is noteworthy that if \(\gamma\propto\varepsilon\), then the \(\varepsilon\)-optimum of this optimisation problem is a \((\propto\varepsilon)\)-optimal transportation plan for the original problem (1). Unlike (1), problem statement (2) allows one to leverage convex optimisation tools like duality and acceleration.
**Contribution**. We provide the first arithmetic complexity bounds for Euclidean-regularised OT. The results of this paper are summarised in Table 1 below. Each cell contains an estimate of the number of arithmetic operations needed for the algorithm in the leftmost column to achieve target accuracy \(\varepsilon\) for problem (1) with given \(n\), \(m\) (we assume without loss of generality that \(n>m\)), and \(C\) in the worst case. Constant factors are omitted, and \(\varepsilon\) is assumed to be sufficiently small. The arithmetic complexities of the original algorithms applied to entropy-regularised OT [4] are known and are presented in the right column. The left column contains the estimates obtained in this paper.
The organisation of this paper is as follows. Section 2 provides a short literature review, highlighting the works that underpin the proofs presented in this paper
| # a.o. | Euclidean-reg. OT | entropy-reg. OT |
| --- | --- | --- |
| Sinkhorn–Knopp, Alg. 1 | \(\frac{n^{7/2}\|C\|_{\infty}^{2}}{\varepsilon^{2}}\), Thm 2 | \(\frac{n^{2}\|C\|_{\infty}^{2}\log n}{\varepsilon^{2}}\), [6] |
| Accelerated Gradient Descent, Alg. 3 | \(\frac{n^{3}\|C\|_{\infty}}{\varepsilon}\), Thm 4 | \(\frac{n^{5/2}\|C\|_{\infty}\sqrt{\log n}}{\varepsilon}\), [6] |
| Accelerated Alternating Minimisation, Alg. 5 | \(\frac{n^{3}\|C\|_{\infty}}{\varepsilon}\), Thm 6 | \(\frac{n^{5/2}\|C\|_{\infty}\sqrt{\log n}}{\varepsilon}\), [8] |

Table 1: Worst-case arithmetic complexity (number of arithmetic operations) of the considered algorithms for Euclidean-regularised and entropy-regularised OT; constant factors are omitted.
and tracing the history of applying quadratic regularisation in OT. Section 3 encompasses all the theoretical results of this paper. Subsections 3.2, 3.3, 3.4, and 3.5 delve into the details of the Sinkhorn, Accelerated Gradient, Alternating Minimisation, and Coordinate Linear Variance Reduction algorithms, respectively. Finally, Section 4 contains results of numerical experiments that compare the practical performance of the proposed algorithms and their counterparts applied to entropy-regularised OT.
## 2 Background
The Sinkhorn-Knopp algorithm [4, 18] stands out as the most widely known method for solving the OT problem. The works [1, 6] justify its worst-case arithmetic complexity in terms of \(\varepsilon\) and \(n\). Our analysis of the arithmetic complexity of the Sinkhorn-Knopp algorithm applied to Euclidean-regularised OT draws from the framework outlined in [6] as well. As an alternative to the Sinkhorn-Knopp algorithm, the works [6, 12] show that accelerated gradient descent applied to the entropy-regularised OT problem improves the iteration complexity with respect to \(\varepsilon\). On the other hand, acceleration can be applied directly to the Sinkhorn-Knopp algorithm by viewing it as an alternating minimisation procedure, as proposed in [8]. Both approaches yield similar iteration complexities and require only minor adjustments of the proofs to apply to Euclidean-regularised OT.
The standard approach for effectively applying convex optimisation methods to the OT is entropy regularisation [4]. Recently, there has been a growing interest in Euclidean regularisation [7, 11, 14]. A practically valuable property of Euclidean-regularised OT is the sparsity of the optimal plan [3], which holds significance in various applications, such as image colour transfer. Additionally, algorithms used for Euclidean-regularised OT are anticipated to be more computationally stable and more robust for small regularisation parameter. For instance, the Sinkhorn-Knopp algorithm for entropy-regularised OT requires computing the exponent with \(\gamma\) in the denominator. Besides, none of the aforementioned papers that study Euclidean regularisation provide arithmetic complexity estimates for particular algorithms applied to Euclidean-regularised OT.
## 3 Theoretical guarantees for various approaches
### Common reasoning
We have two discrete probability measures, \(a\in\mathcal{S}_{n}\) and \(b\in\mathcal{S}_{m}\) from the unit simplex, such that \(a^{\top}\mathbf{1}_{n}=1,b^{\top}\mathbf{1}_{m}=1\), along with the cost matrix \(C\in\mathbb{R}_{+}^{n\times m}\). Our objective is to find the transport plan \(X\in\mathbb{R}_{+}^{n\times m}\) determined by optimisation problem (2), which represents the Euclidean-regularised version of the classical problem (1).
The problems under consideration are in the generalised linear form and allow for the use of convex duality to eliminate linear constraints. Let us consider the
Lagrange saddle-point problem \(\max_{\lambda\in\mathbb{R}^{n},\mu\in\mathbb{R}^{m}}\min_{X\in\mathbb{R}_{+}^{n \times m}}\mathcal{L}(X,\lambda,\mu)\), where the Lagrangian function is defined as follows:
\[\mathcal{L}(X,\lambda,\mu)\equiv\langle C,X\rangle+\tfrac{\gamma}{2}\|X\|_{2}^{ 2}+\lambda^{\top}(X\mathbf{1}_{m}-a)+\mu^{\top}(X^{\top}\mathbf{1}_{n}-b).\]
The first-order optimality condition for this problem implies
\[\tfrac{\partial\mathcal{L}(X,\lambda,\mu)}{\partial x_{ij}}=0=c_{ij}+\gamma x _{ij}+\lambda_{i}+\mu_{j},\]
yielding the following closed-form expression for the optimal transport plan \(X(\lambda,\mu)=\left[-C-\lambda\mathbf{1}_{m}^{\top}-\mathbf{1}_{n}\mu^{\top} \right]_{+}/\gamma\), given the dual multipliers \(\lambda\) and \(\mu\), where \([x]_{+}\equiv\max\{0,x\}\). Upon substituting \(X(\lambda,\mu)\) into the formula for \(\mathcal{L}\), we derive the following dual problem:
\[\max_{\lambda\in\mathbb{R}^{n},\mu\in\mathbb{R}^{m}}\{\varphi(\lambda,\mu) \equiv-\tfrac{1}{2\gamma}\sum_{j=1}^{m}\|\left[-C_{j}-\lambda-\mu_{j}\mathbf{1 }_{n}\right]_{+}\|_{2}^{2}-\lambda^{\top}a-\mu^{\top}b\}, \tag{3}\]
where \(C_{j}\) is the \(j\)-th row of matrix \(C\).
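For illustration, the dual objective (3) and its gradient can be evaluated with a few lines of NumPy; the following sketch is ours (not part of the paper's reproduction package), and the gradient components are simply the marginal residuals of the candidate plan \(X(\lambda,\mu)\).

```python
import numpy as np

def dual_objective_and_grad(lam, mu, C, a, b, gamma):
    S = -C - lam[:, None] - mu[None, :]
    P = np.maximum(S, 0.0)                  # [ -C - lam 1^T - 1 mu^T ]_+
    phi = -(P ** 2).sum() / (2.0 * gamma) - lam @ a - mu @ b
    X = P / gamma                           # candidate primal plan X(lam, mu)
    grad_lam = X.sum(axis=1) - a            # d(phi)/d(lambda)
    grad_mu = X.sum(axis=0) - b             # d(phi)/d(mu)
    return phi, grad_lam, grad_mu
```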
### The Sinkhorn-Knopp Algorithm
Following the reasoning of [4] regarding the justification of the Sinkhorn-Knopp algorithm for the entropy-regularised OT problem, we come to an analogous Sinkhorn-Knopp method for the Euclidean-regularised OT problem.
The first-order optimality conditions for the dual problem (3) with respect to \(\lambda\) and \(\mu\) are, respectively,
\[\begin{cases}f_{i}(\lambda_{i})-\gamma a_{i}=0,\;i=1,...,n\\ g_{j}(\mu_{j})-\gamma b_{j}=0,\;j=1,...,m,\end{cases} \tag{4}\] \[f_{i}(\lambda)=\sum_{j=1}^{m}\left[-c_{ij}-\lambda-\mu_{j}\right] _{+},\quad g_{j}(\mu)=\sum_{i=1}^{n}\left[-c_{ij}-\lambda_{i}-\mu\right]_{+}.\]
Let us denote the \(i\)-th order statistic of the elements of the vector \(x\) as \(x_{(i)}\), and choose \(l\) as the largest index \(j\) such that \(f_{i}(-(C_{i}^{\top}+\mu)_{(j)})\leq\gamma a_{i}\), and \(k\) as the largest index \(i\) such that \(g_{j}(-(C_{j}+\lambda)_{(i)})\leq\gamma b_{j}\), respectively [14]. Then, holding \(\mu\) fixed in the first equation and \(\lambda\) fixed in the second, the explicit solutions of (4) are
\[\begin{cases}\lambda_{i}=-\left(\gamma a_{i}+\sum_{j=1}^{l}(C_{i}^{\top}+\mu) _{(j)}\right)/\;l,\;i=1,...,n,\\ \mu_{j}=-\left(\gamma b_{j}+\sum_{i=1}^{k}(C_{j}+\lambda)_{(i)}\right)/\;k,\;j =1,...,m.\end{cases} \tag{5}\]
The alternating updates of \(\lambda\) and \(\mu\) according to the formulas above yield the Sinkhorn-Knopp algorithm applied to Euclidean-regularised OT. Its pseudocode is listed in Algorithm 1. The following proposition estimates the algorithmic complexity of each iteration of Algorithm 1.
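As an additional illustration of updates (5), the following NumPy sketch (ours, with hypothetical names, no stopping criterion, and assuming strictly positive marginals) implements one possible realisation of these alternating updates.

```python
import numpy as np

def _threshold(v, s):
    """Return tau such that sum(max(v - tau, 0)) == s, i.e. the water-filling
    step behind the closed-form updates (5); assumes s > 0."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ell = np.arange(1, v.size + 1)
    taus = (css - s) / ell
    l_star = np.nonzero(u > taus)[0].max() + 1
    return (css[l_star - 1] - s) / l_star

def sinkhorn_euclidean(C, a, b, gamma, n_iter=100):
    """Alternating dual updates for Euclidean-regularised OT (a sketch)."""
    n, m = C.shape
    lam, mu = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        for i in range(n):                       # lambda-update: mu held fixed
            lam[i] = _threshold(-C[i, :] - mu, gamma * a[i])
        for j in range(m):                       # mu-update: lambda held fixed
            mu[j] = _threshold(-C[:, j] - lam, gamma * b[j])
    X = np.maximum(-C - lam[:, None] - mu[None, :], 0.0) / gamma
    return X, lam, mu
```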
**Proposition 1**.: _One iteration of Algorithm 1 requires \(\mathcal{O}((n+m)^{2})\) amortised arithmetic operations (using only \(+\), \(-\), \(*\), and \(\leq\) comparisons, \(\mathcal{O}(n+m)\) divisions, and no evaluations of built-in functions)._
The following Lemmas 1 and 2 and Theorem 1 correspond to Lemmas 1 and 2 and Theorem 1 from [6], but the proofs are significantly different from those of their analogues due to the use of specific properties of Euclidean regularisation.
Lemma 1: _For \(R=\|C\|_{\infty}+\frac{\gamma}{\min\{n,m\}}(1-\max\limits_{\begin{subarray}{c}i=1,...,n\\ j=1,...,m\end{subarray}}\{a_{i},b_{j}\})\), it holds that_
\[\max_{j=1,...,m}\mu_{j}-\min_{j=1,...,m}\mu_{j}\leq R,\quad\max_{i=1,...,n}\lambda_{i}-\min_{i=1,...,n}\lambda_{i}\leq R,\] \[\max_{j=1,...,m}\mu_{j}^{*}-\min_{j=1,...,m}\mu_{j}^{*}\leq R,\quad \max_{i=1,...,n}\lambda_{i}^{*}-\min_{i=1,...,n}\lambda_{i}^{*}\leq R.\]
Proof: Firstly, thanks to the form of updates (5), we can guarantee the non-positivity of dual variables. Indeed, initial values of \(\mu\) and \(\lambda\) are zero, so non-positive. Then, for all \(j=1,...,m\),
\[\frac{n-1}{\gamma}\mu_{j}+b_{j}=\frac{1}{\gamma}\sum_{i=1}^{n}(-c_{ij}- \lambda_{i}-\mu_{j})\leq\frac{1}{\gamma}\sum_{i=1}^{n}[-c_{ij}-\lambda_{i}- \mu_{j}]_{+}=X^{\top}\mathbf{1}_{n}=b_{j},\]
that implies \(\mu_{j}\leq 0\). Similarly, one can prove \(\lambda_{i}\leq 0\) for all \(i=1,...,n\).
Further, let's relate dual variables with corresponding marginal distributions of \(X\). Here we consider only \(\mu\), assuming that we just updated it. Similar reasoning can be applied to just updated \(\lambda\) as well, that gives the right column of statements from Lemma.
\[-\mu_{j}-\|C\|_{\infty}-\frac{1}{n}\mathbf{1}_{n}^{\top}\lambda \leq\frac{\gamma}{n}[X^{\top}\mathbf{1}_{n}]_{i}=\frac{\gamma}{n}b_{j}\leq \frac{\gamma}{n}\] \[-\mu_{j}-\frac{1}{n}\mathbf{1}_{n}^{\top}\lambda \geq\frac{\gamma}{n}[X^{\top}\mathbf{1}_{n}]_{i}=\frac{\gamma}{n}b_{j}, \quad\forall j=1,...,m.\]
This implies
\[\mu_{j}\geq-\|C\|_{\infty}-\frac{1}{n}(\mathbf{1}_{n}^{\top}\lambda+\gamma), \quad\mu_{j}\leq-\frac{1}{n}(\mathbf{1}_{n}^{\top}\lambda+\gamma b_{j}),\quad \forall j=1,...,m.\]
Finally,
\[\max_{j=1,...,m}\mu_{j}-\min_{j=1,...,m}\mu_{j} \leq-\frac{1}{n}\left(\mathbf{1}_{n}^{\top}\lambda+\gamma\max_{j =1,...,m}b_{j}\right)+\|C\|_{\infty}+\frac{1}{n}\left(\mathbf{1}_{n}^{\top} \lambda+\gamma\right)\] \[=\|C\|_{\infty}+\frac{\gamma}{n}\left(1-\max_{j=1,...,m}b_{j} \right).\]
Reasoning for \(\mu^{*}\) and \(\lambda^{*}\) is similar, since the gradient of objective in (3) vanishes, so \(X^{\top}\mathbf{1}_{n}=b\) and \(X\mathbf{1}_{m}=a\), correspondingly.
Lemma 2: _For \(\lambda\), \(\mu\), and \(X\) taken from each iteration of Algorithm 1 it holds that_
\[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)\leq 4R\sqrt{n+m}(\|X \mathbf{1}_{m}-a\|_{2}+\|X^{\top}\mathbf{1}_{n}-b\|_{2}).\]
Proof: Due to concavity of \(\varphi\), we have
\[\varphi(\lambda^{*},\mu^{*})\leq\varphi(\lambda,\mu)+\langle\nabla\varphi( \lambda,\mu),(\lambda^{*},\mu^{*})-(\lambda,\mu)\rangle.\]
Then, by Holder inequality and Lemma 1,
\[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)\leq\sqrt{n+m} \|\nabla\varphi(\lambda,\mu)\|_{2}\|(\lambda^{*},\mu^{*})-(\lambda,\mu)\|_{\infty}\] \[\qquad\leq 4R\sqrt{n+m}\|\nabla\varphi(\lambda,\mu)\|_{2}\leq 4R \sqrt{n+m}(\|X\mathbf{1}_{m}-a\|_{2}+\|X^{\top}\mathbf{1}_{n}-b\|_{2}).\]
Theorem 1: _To obtain an \(\varepsilon\)-solution of problem (2), it is sufficient to perform \(2+\frac{8\max\{n,m\}^{3/2}R}{\gamma\varepsilon}\) iterations of Algorithm 1._
Proof: Below, \(\lambda_{+}\) and \(\mu_{+}\) will denote values of \(\lambda\) and \(\mu\) after the current iteration, and \(\lambda_{+k}\) and \(\mu_{+k}\) denote values of \(\lambda\) and \(\mu\) after \(k\) iterations. Let current update relate to \(\lambda\). Denoting \(S=-C-\mathbf{1}_{n}\mu^{\top}-\lambda\mathbf{1}_{m}^{\top}\) and \(\delta=\lambda-\lambda_{+}\), we have
\[\varphi(\lambda_{+},\mu_{+})-\varphi(\lambda,\mu) =\tfrac{1}{2\gamma}\sum_{i,j=0,0}^{n,m}(\max\{0,S_{ij}+\delta_{i} \}^{2}-\max\{0,S_{ij}\}^{2})+\delta^{\top}a\] \[\geq\tfrac{1}{2\gamma}\sum_{S_{ij}>0,\delta_{i}<0}(\max\{0,S_{ij }+\delta_{i}\}^{2}-S_{ij}^{2})+\delta^{\top}a\] \[\geq\delta^{\top}(a+[\delta]_{-}-2\gamma X\mathbf{1}_{m})\geq\| \delta\|_{2}^{2}+\delta^{\top}(a-2\gamma X\mathbf{1}_{m})\] \[\geq\delta^{\top}(a-X\mathbf{1}_{m})\geq\tfrac{\gamma}{n}\|a-X \mathbf{1}_{m}\|_{2}^{2},\]
due to \(\lambda_{i}-[\lambda_{+}]_{i}=\tfrac{\gamma}{l}a_{i}-\tfrac{1}{l}\sum_{j=1}^{l} (-C_{i}^{\top}-\mu-\lambda_{i})_{(j)}\geq\tfrac{\gamma}{l}a-\tfrac{\gamma}{l} X\mathbf{1}_{m}\) and for small enough \(\gamma\). Then, by Lemma 2, we have
\[\varphi(\lambda_{+},\mu_{+})-\varphi(\lambda,\mu)\geq\max\left\{\tfrac{\gamma }{16n^{2}}\tfrac{[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]^{2}}{R^{ 2}},\tfrac{\gamma}{n}\varepsilon^{2}\right\}, \tag{6}\]
which implies, similarly to SS2.1.5 from [16], that
\[k\leq 1+\tfrac{16n^{2}R^{2}}{\gamma}\tfrac{1}{[\varphi(\lambda^{*},\mu^{*})- \varphi(\lambda_{+},\mu_{+})]}-\tfrac{16n^{2}R^{2}}{\gamma}\tfrac{1}{[\varphi (\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]}. \tag{7}\]
In the other case of (6), we have
\[[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda_{+k},\mu_{+k})]\leq[\varphi( \lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]-\tfrac{k\gamma\varepsilon^{2}}{n}. \tag{8}\]
To combine bounds on \(k\) from (7) and (8), we take minimum of their sum over all options for current objective function value
\[k \leq\min_{0\leq s\leq[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]}\left\{2+\tfrac{16n^{2}R^{2}}{\gamma s}-\tfrac{16n^{2}R^{2}}{\gamma} \tfrac{1}{[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]}+\tfrac{sn}{ \gamma\varepsilon^{2}}\right\}\] \[=\begin{cases}2+\tfrac{n}{\gamma}(\tfrac{8\sqrt{n}R}{\varepsilon} -\tfrac{16nR^{2}}{[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]})&[ \varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]\geq 4\varepsilon\sqrt{n}R^{2},\\ 2+\tfrac{n}{\gamma}\tfrac{[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]} {\varepsilon^{2}}&[\varphi(\lambda^{*},\mu^{*})-\varphi(\lambda,\mu)]<4 \varepsilon\sqrt{n}R^{2},\end{cases}\]
which implies the statement of Theorem 1.
We have not set \(R\) and \(\gamma\) in the bound above. By Lemma 1, \(R\leq\|C\|_{\infty}+\frac{\gamma}{n}\), so \(k\leq 2+\frac{8n^{3/2}\|C\|_{\infty}}{\gamma\varepsilon}+\frac{8n^{1/2}}{\varepsilon}\), and one can take \(\gamma=\varepsilon/2\), such that solving regularised problem with accuracy \(\varepsilon/4\) will give \((\varepsilon/2)\)-solution of original problem. Besides, by Lemma 7 from [1] we have
\[\langle C,X\rangle\leq\langle C,X^{*}\rangle+\tfrac{\gamma}{2}\|X\|_{2}^{2}+2( \|a-X\mathbf{1}_{m}\|_{1}+\|b-X^{\top}\mathbf{1}_{n}\|_{1})\|C\|_{\infty},\]
so one should set target accuracy to \(\varepsilon/(4\|C\|_{\infty})\). This proves the following result.
Theorem 2: _The number of iterations of Algorithm 1 sufficient for Algorithm 2 to return an \(\varepsilon\)-optimal transport plan \(X\) such that \(X\mathbf{1}_{m}=a,X^{\top}\mathbf{1}_{n}=b\) is_
\[\mathcal{O}\left(\tfrac{(n+m)^{3/2}\|C\|_{\infty}^{2}}{\varepsilon^{2}}\right).\]
```
0:\(a,b,C,\varepsilon\)
1: Find \(X^{\prime}\) for given \(C,a,b,\gamma=\varepsilon/2\), with accuracy \(\varepsilon/(4\|C\|_{\infty})\) using Algorithm 1
2: Find projection \(X\) of \(X^{\prime}\) onto the feasible set using Algorithm 2[1]
```
**Algorithm 2** Approximate OT by Algorithm 1
Note that the correction \(a^{\prime}=(1-\varepsilon/8)\left(a+\mathbf{1}_{n}\varepsilon/(n(8-\varepsilon))\right)\) of the target marginal distributions \(a\) and \(b\), which is required for the original Sinkhorn-Knopp algorithm [6], is not necessary in Algorithms 2 and 4, since the formula for \(R\) from Lemma 1 makes sense even if \(a_{i}=0\) and \(b_{j}=0\) for some \(i\) and \(j\).
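For completeness, the projection onto the feasible set used in step 2 of Algorithms 2 and 4 can be sketched as follows; this is our recollection of the rounding procedure of Algorithm 2 in [1], not code from the paper's repository, so the details should be checked against [1].

```python
import numpy as np

def round_to_feasible(F, a, b):
    """Round a nonnegative, almost-feasible plan F onto {X >= 0, X 1 = a, X^T 1 = b}."""
    F = F.copy()
    r = F.sum(axis=1)
    F *= np.minimum(a / np.where(r > 0, r, 1.0), 1.0)[:, None]   # scale rows down
    c = F.sum(axis=0)
    F *= np.minimum(b / np.where(c > 0, c, 1.0), 1.0)[None, :]   # scale columns down
    err_a = a - F.sum(axis=1)
    err_b = b - F.sum(axis=0)
    if err_a.sum() > 0:                                          # rank-one correction
        F += np.outer(err_a, err_b) / err_a.sum()
    return F
```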
### Adaptive Accelerated Gradient Descent
To apply an accelerated gradient method to problem (2), let us consider it as a problem of convex optimisation with linear constraints:
\[\min_{\begin{subarray}{c}A[X]=B\\ x_{ij}\geq 0\end{subarray}}f(X), \tag{9}\]
where the operator \(A:\mathbb{R}^{n\times m}\rightarrow\mathbb{R}^{n+m}\) is defined by \(A[X]=(X\mathbf{1}_{m},X^{\top}\mathbf{1}_{n})\), \(B=(a,b)\in\mathbb{R}_{+}^{n+m}\), \(f\) is defined in (2), and the corresponding dual problem is equivalent to (3). The following theorem gives the iteration complexity of the primal-dual Algorithm 3, which will be further applied to obtain the solution of problem (2). Note that for the given operator \(A\) it holds that
\[\|A\|_{2,2}\equiv\sup_{\|X\|_{2}=1}\|A[X]\|_{2}=\sqrt{n+m}. \tag{10}\]
Theorem 3 (Theorem 3.2 [6]): _Assume that the optimal dual multipliers satisfy \(\|(\lambda^{*},\mu^{*})\|_{2}\leq R_{2}\). Then, Algorithm 3 generates a sequence of approximate solutions for the primal and dual problems (9) and (3), which satisfy_
\[f(X_{k})-f(X^{*})\leq f(X_{k})-\varphi(\lambda_{k},\mu_{k})\leq \tfrac{16\|A\|_{2,2}^{2}R^{2}}{\gamma k^{2}},\] \[\|A[X_{k}]-B\|_{2}\leq\tfrac{16\|A\|_{2,2}^{2}R}{\gamma k^{2}}, \quad\|X_{k}-X^{*}\|_{2}\leq\tfrac{8\|A\|_{2,2}R}{\gamma k}.\]
Following the proof scheme chosen in [6], we estimate the error of solution \(X\) for the original problem (1):
\[\langle C,X\rangle =\langle C,X^{*}\rangle+\langle C,X_{\text{reg.}}^{*}-X^{*}\rangle+ \langle C,X_{k}-X_{\text{reg.}}^{*}\rangle+\langle C,X-X_{k}\rangle\] \[\leq\langle C,X^{*}\rangle+\langle C,X_{\text{reg.}}^{*}-X^{*} \rangle+\langle C,X-X_{k}\rangle+f(X_{k})+\varphi(\lambda_{k},\mu_{k})+\gamma,\]
where \(X_{\text{reg.}}^{*}\) is the exact solution of problem (2). By choosing \(\gamma\leq\varepsilon/3\), obtaining \(X_{k}\) such that \(f(X_{k})-\varphi(\lambda_{k},\mu_{k})\leq\varepsilon/3\) by Algorithm 3, and making \(\langle C,X-X_{k}\rangle\leq\varepsilon/3\), we guarantee an arbitrarily good approximate solution \(X\). Let us consider the latter condition in more detail. By Lemma 7 [1] and Theorem 3 one has
\[\langle C,X-X_{k}\rangle \leq\|C\|_{\infty}\|X-X_{k}\|_{1}\leq 2\|C\|_{\infty}(\|X_{k}\mathbf{1 }_{m}-a\|_{1}+\|X_{k}^{\top}\mathbf{1}_{n}-b\|_{1})\] \[\leq_{1}2\sqrt{n+m}\|C\|_{\infty}\|A[X_{k}]-B\|_{2}\leq\frac{32(n+ m)^{3/2}\|C\|_{\infty}R}{\gamma k^{2}}\] \[\leq_{2}2\sqrt{n+m}\|C\|_{\infty}\|X_{k}-X_{\text{reg.}}^{*}\|_{2} \leq\frac{16(n+m)\|C\|_{\infty}R}{\gamma k}.\]
To ensure the latter, it is sufficient to choose \(k\) such that
\[k=\mathcal{O}\left(\min\left\{\frac{n\|C\|_{\infty}R}{\varepsilon^{2}},\frac{ n^{3/4}\sqrt{\|C\|_{\infty}R}}{\varepsilon}\right\}\right). \tag{11}\]
On the other hand, \(f(X_{k})-\varphi(\lambda_{k},\mu_{k})\leq\varepsilon/3\) together with Theorem 3 implies
\[k=\mathcal{O}\left(\frac{\sqrt{n+m}R}{\varepsilon}\right),\]
which is dominated by (11) and does not contribute to the iteration complexity. This proves, taking into account (10), Lemma 1, and the fact that \(R_{2}\leq R\sqrt{n+m}\), the following result.
Theorem 4: _The number of iterations of Algorithm 3 sufficient for Algorithm 4 to return an \(\varepsilon\)-optimal transport plan \(X\) such that \(X\mathbf{1}_{m}=a,X^{\top}\mathbf{1}_{n}=b\) is_
\[\mathcal{O}\left(\min\left\{\frac{(n+m)^{3/2}\|C\|_{\infty}^{2}}{\varepsilon^{ 2}},\frac{(n+m)\|C\|_{\infty}}{\varepsilon}\right\}\right).\]
```
0:\(a,b,C,\varepsilon\)
1: Find \(X^{\prime}\) for given \(C,a,b,\gamma=\varepsilon/3\), with accuracy \(\varepsilon/3\) using Algorithms 3, 5, or 6
2: Find projection \(X\) of \(X^{\prime}\) onto the feasible set using Algorithm 2[1]
```
**Algorithm 4** Approximate OT by Algorithms 3, 5, or 6
### Accelerated Alternating Minimisation
Note that the Sinkhorn-Knopp algorithm is based on the simplest alternating optimisation scheme: the dual function \(\varphi\) is explicitly optimised with respect to \(\lambda\) and \(\mu\) alternately. Thus, if there is a way to accelerate an alternating optimisation algorithm, a similar technique can be applied to the Sinkhorn-Knopp algorithm. Moreover, the iteration complexity will correspond to that of the chosen accelerated alternating optimisation method, while the arithmetic complexity of the optimisation with respect to one variable will be the same as for the Sinkhorn algorithm.
The following theorem gives the iteration complexity of the general primal-dual alternating minimisation Algorithm 5, which can be used similarly to Algorithm 3 to obtain the solution of problem (2). Note that \(b\), which denotes the number of independent variable blocks in [8], can be set to \(b=2\) in our case, because \(\|\nabla_{\lambda}\varphi(\lambda,\mu)\|_{2}>\|\nabla_{\mu}\varphi(\lambda,\mu)\|_{2}\) implies \(\|\nabla_{\lambda}\varphi(\lambda,\mu)\|_{2}^{2}>\frac{1}{2}\|\nabla\varphi(\lambda,\mu)\|_{2}^{2}\). However, since the dimensionalities of \(\lambda\) and \(\mu\) are different, the variable with the larger dimensionality will a priori be updated more often.
Theorem 5 (Theorem 3 [8] for \(b=2\)): _Assume that the optimal dual multipliers satisfy \(\|(\lambda^{*},\mu^{*})\|_{2}\leq R_{2}\). Then, Algorithm 5 generates a sequence of approximate solutions for the primal and dual problems (9) and (3), which satisfy_
\[f(X_{k})-f(X^{*})\leq f(X_{k})-\varphi(\lambda_{k},\mu_{k})\leq \frac{16\|A\|_{2,2}^{2}R^{2}}{\gamma k^{2}},\] \[\|A[X_{k}]-B\|_{2}\leq\frac{16\|A\|_{2,2}^{2}R}{\gamma k^{2}}, \quad\|X_{k}-X^{*}\|_{2}\leq\frac{8\|A\|_{2,2}R}{\gamma k},\]
Instead of the \(\arg\max\) operator appearing in the listing of the general Algorithm 5, one should use formulas (5). The advantage of this approach lies in the simplicity of solving these auxiliary problems. It is expected that, while the accelerated gradient descent considered before makes one gradient step at each iteration, this algorithm makes an optimal step with respect to half of the
dual variables, so the expected progress per iteration is larger, while the number of iterations is the same up to a small \(\mathcal{O}(1)\) factor. Using a proof scheme similar to the one provided in Section 3.3 and the same pre- and post-processing Algorithm 4, one can guarantee, taking into account (10) and Lemma 1, that the following result holds.
Theorem 6: _The number of iterations of Algorithm 5 sufficient for Algorithm 4 to return an \(\varepsilon\)-optimal transport plan \(X\) such that \(X\mathbf{1}_{m}=a,X^{\top}\mathbf{1}_{n}=b\) is_
\[\mathcal{O}\left(\min\left\{\frac{(n+m)^{3/2}\|C\|_{\infty}^{2}}{\varepsilon^{ 2}},\frac{(n+m)\|C\|_{\infty}}{\varepsilon}\right\}\right).\]
### Coordinate Linear Variance Reduction
One can also consider problem (2) as a generalised linear problem with a strongly convex regulariser and sparse constraints. Using the property that the dual variables of problem (3) are separable into two groups (\(\lambda\) and \(\mu\)), one can apply primal-dual incremental coordinate methods. One of the modern algorithms of this kind, based on dual averaging and possessing an implicit variance reduction effect, was proposed in [19]. The following theorem presents a simplified form of the iteration complexity estimate for Algorithm 6 adapted to our particular problem.
Theorem 7 (Corollary 1 [19] for \(b=2\)): _Assume that the optimal dual multipliers satisfy \(\|(\lambda^{*},\mu^{*})\|_{2}\leq R_{2}\). Then, Algorithm 6 generates a sequence of approximate
solutions for primal and dual problems (9) and (3), which satisfy_
\[\mathbb{E}[f(\widetilde{X}_{k})-f(X^{*})]=\mathcal{O}\left(\tfrac{\|A\|_{2,2}^{2}R^{2}}{\gamma k^{2}}\right),\quad\mathbb{E}[\|A[\widetilde{X}_{k}]-B\|_{2}]=\mathcal{O}\left(\tfrac{\|A\|_{2,2}^{2}R}{\gamma k^{2}}\right).\]
Taking into account (10) and Lemma 1, and using the same reasoning as for Theorem 4, one has
Theorem 8: _The number of iterations of Algorithm 6 sufficient for Algorithm 4 to return an expected \(\varepsilon\)-optimal transport plan \(X\) such that \(X\mathbf{1}_{m}=a,X^{\top}\mathbf{1}_{n}=b\) is_
\[\mathcal{O}\left(\min\left\{\tfrac{(n+m)^{3/2}\|C\|_{\infty}^{2}}{\varepsilon^ {2}},\tfrac{(n+m)\|C\|_{\infty}}{\varepsilon}\right\}\right),\]
_where "expected \(\varepsilon\)-optimal" means that \(\mathbb{E}[\langle C,X\rangle]-\varepsilon\leq\langle C,X^{*}\rangle\)._
One can see that the asymptotic iteration complexity is the same as that of Algorithms 3 and 5. This allows one to use the same pre- and post-processing Algorithm 4 to apply this algorithm to the OT problem. The advantage of this algorithm is the simplicity of its iterations. It is expected that, despite the same \(\mathcal{O}(nm)\) arithmetic complexity per iteration, the constant in practice is significantly smaller than for the accelerated methods considered before.
## 4 Numerical experiments
All the optimisation algorithms described in the previous section are implemented in the Python 3 programming language. A reproduction package including the source code of the algorithms and the experiment settings is hosted on GitHub1. We consider the OT problem for a pair of images from the MNIST dataset [5], where the distributions are represented by vectorised pixel intensities and the cost matrix contains pairwise Euclidean distances between pixels.
Footnote 1: Repository is available at [https://github.com/MuXauJl11110/Euclidean-Regularised-Optimal-Transport](https://github.com/MuXauJl11110/Euclidean-Regularised-Optimal-Transport)
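As an illustration (ours, not taken from the reproduction package), such a problem instance can be built from two images as follows; the small smoothing term is our own choice to keep the marginals strictly positive.

```python
import numpy as np

def mnist_ot_instance(img_a, img_b, smoothing=1e-6):
    """Build (a, b, C) from two equally sized grayscale images: normalised
    pixel intensities as marginals, pairwise Euclidean distances between
    pixel coordinates as the cost matrix."""
    a = img_a.ravel().astype(float) + smoothing
    b = img_b.ravel().astype(float) + smoothing
    a, b = a / a.sum(), b / b.sum()
    h, w = img_a.shape
    ys, xs = np.divmod(np.arange(h * w), w)       # row and column of each pixel
    coords = np.stack([ys, xs], axis=1).astype(float)
    diff = coords[:, None, :] - coords[None, :, :]
    C = np.sqrt((diff ** 2).sum(axis=-1))
    return a, b, C
```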
Firstly, an experiment comparing the algorithms applied to entropy-regularised OT was carried out. The following algorithms were compared: the Sinkhorn-Knopp algorithm (Sinkhorn) [6], Adaptive Primal-dual Accelerated Gradient Descent (APDAGD) [6], Primal-dual Accelerated Alternating Minimisation (PDAAM) [8], and its modification which uses one-dimensional optimisation to choose the step size (PDAAM-LS). The results of the experiment are shown in Figure 1. It presents the convergence curves of the methods for two progress measures: the function value of the original problem (1) and the dual gap of problem (3). The range of target accuracy values is \(\varepsilon\in\{2\cdot 10^{-2},1.85\cdot 10^{-3},5\cdot 10^{-4}\}\) (each target accuracy value
Figure 1: Practical efficiency of Sinkhorn–Knopp, Adaptive Accelerated Gradient, and Accelerated Alternating methods applied to entropy-regularised OT problem on MNIST dataset.
requires a separate experiment, because \(\varepsilon\) is a parameter of Algorithms 2 and 4 and affects the convergence from the beginning).
All the plots show that PDAAM is the leading algorithm, and the performance of APDAGD is competitive with it. On the other hand, the Sinkhorn-Knopp algorithm converges slowly, especially for small \(\varepsilon\). PDAAM-LS demonstrates unstable behaviour in our experiment.
Secondly, the same algorithms were compared when applied to the Euclidean-regularised OT problem. Figure 2 shows the convergence curves of the methods; the organisation of the plots is the same as above. One can see that the ordering of the methods' performance remains the same as in the case of entropy-regularised OT. Specifically, the PDAAM algorithm converges faster than APDAGD and Sinkhorn. On the other hand, the difference between the performance of PDAAM and APDAGD is less significant in the case of Euclidean-regularised OT (we conclude that the progress of a step that is optimal with respect to one of the dual variables is not much larger than the progress of a gradient step), and the Sinkhorn algorithm performs significantly worse than in the entropy-regularised case and is not efficient in practice. CLVR did not prove to be an efficient method in our experiment. Generally, the convergence of all of the algorithms in the case of Euclidean regularisation is more prone to slowing down in the later iterations.
Figure 2: Practical efficiency of Sinkhorn–Knopp, Adaptive Accelerated Gradient, and Accelerated Alternating methods applied to Euclidean-regularised OT problem on MNIST dataset.
The expected property of Euclidean-regularised OT that the optimal transport plan is sparse is confirmed in our experiments. Examples of transport plans can be seen in Figure 3; the fraction of (numerically) zero elements (below \(10^{-21}\)) in them is around 99.5%.
## 5 Discussion
Euclidean regularisation for OT problems has recently been explored in several papers due to its practically valuable properties, such as robustness to a small regularisation parameter and sparsity of the optimal transport plan. This paper provides a theoretical analysis of various algorithms that can be efficiently applied to Euclidean-regularised OT, and we demonstrate and compare their practical performance. Our findings reveal that these desirable properties come at a cost: namely, slower convergence of all the algorithms and a faster increase in arithmetic complexity as the dimensionality grows.
Our plans involve considering different convex optimisation algorithms applied to Euclidean-regularised OT, focusing on splitting algorithms that are expected to be more computationally stable with a small regularisation parameter [13]. Additionally, we aim to explore the application of Euclidean regularisation to the Wasserstein barycenter problem.
|
2307.06988 | An Extreme Black Hole in the Recurrent X-ray Transient XTE J2012+381 | The black hole candidate XTE J2012+381 underwent an outburst at the end of
2022. We analyzed 105 NICER observations and 2 NuSTAR observations of the
source during the outburst. The NuSTAR observations of the $M \sim10M_\odot$
black hole indicate clear signs of relativistic disk reflection, which we
modeled to measure a BH spin of $a=0.988^{+0.008}_{-0.030}$ and an inclination
of $\theta=68^{+6}_{-11}$ degrees ($1\sigma$ statistical errors). In our
analysis, we test an array of models and examine the effect of fitting NuSTAR
spectra alone versus fitting simultaneously with NICER. We find that when the
underlying continuum emission is properly accounted for, the reflected emission
is similarly characterized by multiple models. We combined 52 NICER spectra to
obtain a spectrum with an effective exposure of 190 ks in order to probe the
presence of absorption lines that would be suggestive of disk winds, but the
resulting features were not statistically significant. We discuss the
implications of this measurement in relation to the overall BH spin
distribution in X-ray binary systems. | Paul A. Draghis, Jon M. Miller, McKinley C. Brumback, Andrew C. Fabian, John A. Tomsick, Abderahmen Zoghbi | 2023-07-13T18:00:00Z | http://arxiv.org/abs/2307.06988v1 | # An Extreme Black Hole in the Recurrent X-ray Transient XTE J2012+381
###### Abstract
The black hole candidate XTE J2012+381 underwent an outburst at the end of 2022. We analyzed 105 NICER observations and 2 NuSTAR observations of the source during the outburst. The NuSTAR observations of the \(M\sim 10M_{\odot}\) black hole indicate clear signs of relativistic disk reflection, which we modeled to measure a BH spin of \(a=0.988^{+0.008}_{-0.030}\) and an inclination of \(\theta=68^{+6}_{-11}\) degrees (\(1\sigma\) statistical errors). In our analysis, we test an array of models and examine the effect of fitting NuSTAR spectra alone versus fitting simultaneously with NICER. We find that when the underlying continuum emission is properly accounted for, the reflected emission is similarly characterized by multiple models. We combined 52 NICER spectra to obtain a spectrum with an effective exposure of 190 ks in order to probe the presence of absorption lines that would be suggestive of disk winds, but the resulting features were not statistically significant. We discuss the implications of this measurement in relation to the overall BH spin distribution in X-ray binary systems.
Paul A. Draghis, Jon M. Miller, McKinley C. Brumback, Andrew C. Fabian, John A. Tomsick, Abderahmen Zoghbi
## 1 Introduction
The observed spin distribution of stellar mass black holes (BHs) in X-ray binary (XB) systems is in disagreement with the spin distribution of BHs in merging binary black hole (BBH) systems observed through gravitational waves (GW), with BHs in XB having preferentially high spins, whereas BHs in BBHs have preferentially low spins (Fishbach & Kalogera, 2022; Draghis et al., 2023). It is important however to acknowledge that while the distribution of spins in BBH accounts for selection effects and observational biases, the distribution of spins of BHs in XB is built based only on the observed spins and there may be unknown selection effects. Furthermore, the measured spin values are reported with only statistical uncertainties, as the systematic uncertainties are not yet well understood. The most pragmatic approach to quantifying the systematic uncertainties of the spin measurements of BHs in XBs and to attempt to quantify the observational biases is to measure the BH spin in as many sources as possible.
For BHs in XBs, the preferred spin measurement techniques that make use of X-ray spectroscopy are the "continuum fitting" method (see, e.g., Gou et al., 2009; Feng et al., 2023) and the "relativistic reflection" method (see, e.g., Fabian et al., 2000; Brenneman & Reynolds, 2006; Miller, 2007; Miller et al., 2010; Draghis et al., 2020). Both methods come with a series of assumptions and simplifications. For a review of the efforts in the field regarding the two methods, see Reynolds (2021). Of the currently operating X-ray missions, NuSTAR (Harrison et al., 2013) is the best suited for measuring the features of relativistic reflection, namely the broadened Fe K\(\alpha\) line, present at 6.4 keV for neutral gas and at progressively higher energies up to 6.97 keV for Fe XXVI, and the Compton hump, a broad energy excess above \(\sim 20\) keV.
XTE J2012+381 was first discovered in 1998 using the RXTE All-Sky Monitor by Remillard et al. (1998); a candidate optical counterpart was quickly identified and later confirmed by Hynes et al. (1999), who classified the outburst as that of a soft X-ray transient. White et al. (1998) analyzed an ASCA observation of the source, obtained a good fit to the spectrum using a disk blackbody plus power-law model, and claimed the source to be a black hole candidate. Later, Vasiliev et al. (2000) analyzed 24 RXTE observations of XTE J2012+381 obtained throughout the 1998 outburst
and claimed the presence of excess broadened emission around 6.4 keV. Based on the spectral features measured from five BeppoSAX observations, Campana et al. (2002) placed a lower limit on the mass of the BH in XTE J2012+381 of \(M\gtrsim 22d_{10}\ M_{\odot}\) for a maximally spinning BH, where \(d_{10}\) is the distance to the system in units of 10kpc. The Gaia (Gaia Collaboration et al., 2016) measurement of the parallax of XTE J2012+381 is \(p=0.1859\pm 0.0719\) mas, equivalent to a distance to the system of \(d=5.4\pm 2.1\) kpc. Given the Gaia distance estimate and the estimate of Campana et al. (2002), we can place a lower limit on the BH mass of \(M\gtrsim 11.8\pm 4.6M_{\odot}\).
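As a quick check of the quoted numbers, a naive inversion of the parallax with first-order error propagation (ignoring any parallax zero-point correction) reproduces the distance and mass limit:

```python
# Distance from the Gaia parallax and the corresponding mass lower limit.
p, dp = 0.1859e-3, 0.0719e-3          # parallax and uncertainty [arcsec]
d = 1.0 / p / 1e3                     # distance [kpc]
dd = d * dp / p                       # first-order uncertainty [kpc]
m = 22.0 * (d / 10.0)                 # lower limit on the BH mass [M_sun]
dm = 22.0 * (dd / 10.0)
print(f"d = {d:.1f} +/- {dd:.1f} kpc, M >~ {m:.1f} +/- {dm:.1f} M_sun")
```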
XTE J2012+381 entered an outburst phase again in late 2022. This outburst was first detected by the MAXI/GSC nova alert system on December 25, 2022 (Kawamuro et al., 2022), and confirmed using the Swift XRT instrument on December 26, 2022 (Kennea, 2022). We obtained two NuSTAR observations of XTE J2012+381, and NICER (Gendreau et al., 2016) monitored the source throughout the outburst. Motivated by the previous reports of the presence of relativistic reflection features in the spectra of this source during the previous outburst, we attempted to use the relativistic reflection method to measure the spin of the BH candidate XTE J2012+381. The summary of observations used in the analysis is presented in Section 2 and our analysis methods and results are presented in section 3. In Section 4 we discuss the implications of this result on the broader stellar mass BH spin distribution.
## 2 Observations and Data Reduction
We observed XTE J2012+381 twice using NuSTAR, obtaining an exposure on December 29, 2022 under ObsID 80802344002 and an exposure on January 18 2023 under ObsID 80802344004. We analyzed the NuSTAR observations using the routines in HEASOFT v6.29c through the NUSTARDAS pipeline v2.1.1 and CALDB v20211103. We extracted the source spectra from circular regions centered on the source position with radii of 120", and we used regions of the same size for extraction of background rates. We grouped the spectra using the "ftgrouppha" ftool, through the optimal binning scheme described by Kaastra & Bleeker (2016). We continued analyzing the NuSTAR spectra in the 3-70 keV and 3-60 keV bands, respectively, as the spectra obtained during the two observations were background-dominated at higher energies. We chose to analyze the NuSTAR observations using these versions of the calibration software in order to maintain consistency with the larger sample presented in Draghis et al. (2023). However, we note that using spectra extracted using the latest calibration software available produces fully consistent results.
NICER tracked the evolution of the outburst by taking 105 observations of the source over the first 155 days between the first detection and May 29th. We analyzed the observations using the NICERDAS v10 pipeline in HEASOFT v6.31 and CALDB xti20221001. We ran the nicerl2 pipeline by excluding the detectors 14 and 34. During many observations, the NICER detectors were dominated by optical loading at low energies, producing residuals that cannot be accounted for using physical models under 1keV, regardless of the limit placed on the "undershoots" in the NICER detector. Therefore, as the spectrum below 1keV cannot be properly constrained due to optical loading, we constrained the allowed "undershoot" rates to be as high as 500 in order to not sacrifice the quality of the data at energies above 1keV, and only fit the NICER spectra down to 1keV. We set the allowed "overshoot" rates to be as high as 1.5. We then extracted the source spectra and the associated RMF and ARF files using the nicerl3 pipeline, and we accounted for background emission using the SCORPEON model. We fit the spectra in the 1-10 keV band.
## 3 Analysis and Results
We ran the spectral analysis in XSPEC v12.12.0g (Arnaud, 1996) by minimizing the \(\chi^{2}\) statistic. We independently fit the spectra obtained from all the NICER observations and the two NuSTAR Focal Plane Modules (FPM) from the two observations of the source. The initial model that we used describes an absorbed disk black body plus a power law component, TBabs*(diskbb+powerlaw). This model includes the multiplicative component TBabs(Wilms et al., 2006) to account for the interstellar absorption using abundances computed by Wilms et al. (2000) and photoionization cross sections computed by Verner et al. (1996).
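As an illustration of this baseline setup, a hypothetical PyXspec sketch is given below; the spectrum file name is a placeholder, and the actual fits in this work were performed per observation, with the NICER background handled through the SCORPEON model rather than as shown here.

```python
# Minimal PyXspec sketch of the TBabs*(diskbb+powerlaw) continuum fit.
from xspec import AllData, Model, Fit, Xset

Xset.abund = "wilm"        # Wilms et al. (2000) abundances
Xset.xsect = "vern"        # Verner et al. (1996) cross sections

AllData("nicer_spectrum.pha")            # placeholder file name
AllData.ignore("**-1.0 10.0-**")         # restrict to the 1-10 keV band

m = Model("tbabs*(diskbb+powerlaw)")
Fit.statMethod = "chi"
Fit.query = "yes"
Fit.perform()
m.show()
```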
The top left panel in Figure 1 shows the MAXI light curve of the outburst of XTE J2012+381, in the 2-20 keV band. The two cyan vertical lines represent the dates of the two NuSTAR observations of the source. The following panels on the left show the time evolution of the measurements of the Galactic column density, the accretion disk temperature and normalization, and of the power law index and normalization in the fits to the NICER spectra. The last two panels on the left in Figure 1 show the effective exposure of the NICER observations analyzed, and the reduced \(\chi^{2}\) produced when fitting the NICER spectra. The right panels show the link between the evolution of the measured temperature of the diskbb component (top) and the 1-10 keV
flux (bottom) as a function of the hardness ratio, computed as the ratio of the fluxes in the 5-10 keV band and the 2-5 keV band. The colors of the points track the time evolution, similarly to the panels on the left. The outburst begins in an already relatively soft state, but evolves similarly to other BH outbursts, following a "Q" shape in this plot. However, Rodriguez et al. (2023) reported an INTEGRAL detection of XTE J2012+381 on December 23rd, 2022 (3 days before the first NICER observation). During this X-ray observation, the source was in a harder state and well detected up to high (150 keV) energies, suggesting that the source transitioned from a hard to soft state in the very early stages of its outburst.
When fitting NuSTAR spectra, it is often customary to allow the presence of a normalization constant to account for the difference between the spectra from the two detectors. However, we did not include a constant component in our models and instead we allowed the normalizations of the diskbb and the powerlaw components to
Figure 1: Left: The MAXI light curve in the 2-20 keV band of the 2022-2023 outburst of XTE J2012+381 (top). The following five panels show the evolution of the measured Galactic column density \(N_{\rm H}\), the inner disk temperature and normalization of the diskbb component, the power-law index \(\Gamma\), the normalization of the powerlaw component, obtained when fitting the NICER spectra of the source with the model TBabs*(diskbb+powerlaw). The seventh panel shows the exposure of the NICER observations analyzed, and the eighth panel shows the reduced \(\chi^{2}\) returned by the fits to the NICER spectra. The colors of the points represent time evolution. The vertical cyan lines show the dates of the two NuSTAR observations of XTE J2012+381 analyzed in this paper. The observations between the dashed vertical magenta lines were combined to produce the spectrum shown in Figure 5. Right: The evolution of the measured disk temperature of the source (top) and of the 1-10 keV flux (bottom) vs. the hardness defined as the ratio of the 5-10 keV flux to the 2-5 keV flux. In the top panel, we omitted the measurements which had an uncertainty larger than 0.5 keV. The colors of the points are the same as in the left panels and represent the time evolution of the source.
vary independently. This introduces an additional free parameter when compared to adding a constant component to the model, but the quality of the fits is often superior to simply allowing a constant offset between the spectra. When allowing the normalizations of the components to vary independently, they generally take values within a few percent of each other. The residuals produced when fitting the NuSTAR spectra show clear signs of relativistic reflection.
To account for the relativistic reflection features, we replaced the powerlaw component in our baseline model with different flavors of the relxill v.1.4.3 family of models (Dauser et al., 2014; Garcia et al., 2014). A complete description of the models can be found on the relxill website1, Section 3.1 in Draghis et al. (2021), or Appendix A in Draghis et al. (2023). While newer versions of the relxill models include the effect of returning radiation, works such as Dauser et al. (2022) and Riaz et al. (2023) concluded that the measured spin of the compact object in the system is unaffected by the inclusion of returning radiation. Therefore, in order to ensure consistency of our analysis with the pipeline of Draghis et al. (2023), we chose to use the same version of relxill. Similarly, for consistency with the large-scale analysis of Draghis et al. (2023), we initially explored the effects of replacing the powerlaw component in our initial fits with six different flavors of the relxill family of models: relxill, relxillCp, relxilllp, and the relxillD version with the accretion disk density fixed to \(n=10^{15}\), \(10^{17}\), and \(10^{19}\) cm\({}^{-3}\).
Footnote 1: http://www.sternwarte.uni-erlangen.de/~dauser/research/relxill/
Given the existing mass and distance estimates (presented in Section 1) and the fluxes inferred from the two NuSTAR observations, the source falls within the luminosity range, \(10^{-3}\lesssim L/L_{\rm Edd}\lesssim 0.3\), for which theoretical, numerical, and observational results (see, e.g., Reynolds & Fabian, 2008; Salvesen et al., 2013; Garcia et al., 2015; Schnittman et al., 2016) indicate that the inner edge of the accretion disk extends close to the innermost stable circular orbit (ISCO) of the BH. Here, \(L\) represents the luminosity of the source and \(L_{\rm Edd}\) the Eddington luminosity. Therefore, throughout our spectral analysis, we set the inner disk radius to be that of the ISCO, \(r_{\rm in}=r_{\rm ISCO}\). We fixed the outer disk radius at \(r_{\rm out}=990~{}r_{g}\). We allowed all other parameters in the models to vary freely.
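A quick way to check that an observation sits in this luminosity range is to convert the measured flux into an Eddington ratio. The sketch below is a back-of-the-envelope check only; the flux value is a placeholder, while the mass and distance follow the estimates quoted in Section 1.

```python
import numpy as np

KPC_CM = 3.086e21   # cm per kpc

def eddington_ratio(flux_cgs, distance_kpc, mass_msun):
    """L/L_Edd for an observed flux (erg cm^-2 s^-1), distance (kpc), and BH mass (M_sun)."""
    luminosity = 4.0 * np.pi * (distance_kpc * KPC_CM) ** 2 * flux_cgs   # erg/s, isotropic
    l_edd = 1.26e38 * mass_msun                                          # erg/s, H-dominated gas
    return luminosity / l_edd

# Placeholder flux purely for illustration; mass and distance as quoted in Section 1.
print(eddington_ratio(flux_cgs=2e-9, distance_kpc=5.2, mass_msun=11.8))
```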
We fit the NuSTAR spectra from the two observations both independently and jointly with NICER observations of the source taken close in time to the NuSTAR exposures. We fit NuSTAR obsID 80802344002 together with NICER obsID 5203600104, which overlapped with the NuSTAR observation, and NuSTAR obsID 80802344004 together with NICER obsID 5203600114, which was taken three days after the second NuSTAR observation. We chose this NICER observation over others taken closer in time to the second NuSTAR observation because it had a significantly longer exposure. We applied the array of six relxill flavors to both the NuSTAR spectra alone, and to the NICER and NuSTAR spectra together.
The spectra from the first NuSTAR observation (80802344002) are dominated by a high-energy component. Fitting the NuSTAR spectra jointly with the NICER spectrum from obsID 5203600104 with the six variants of the reflection model produces good fits. The best-performing model was TBabs*(diskbb+relxill) producing \(\chi^{2}/\nu=549.15/556=0.99\), followed closely by the relxillD variant with \(\log(n)=19\) producing \(\chi^{2}/\nu=550.09/557=0.99\) and the relxillD variant with \(\log(n)=15\), with \(\chi^{2}/\nu=569.18/557=1.02\). Given the low-energy coverage provided by the addition of NICER spectra, we also tested the effects of modeling the accretion disk with a more physically accurate component, by replacing the diskbb component in the best performing model with the kerrbb model (Li et al., 2005). In the kerrbb component, we fixed the BH mass to \(11.8M_{\odot}\), the distance to the BH to 5.2 kpc, and linked the BH spin and inner disk inclination between the kerrbb and relxill components. This returned an improved statistic of \(\chi^{2}/\nu=537.3/555=0.97\) for the TBabs*(kerrbb+relxill) model. The other models tested performed worse in terms of statistic, but produced relatively similar parameter constraints.
When using the six models to fit the NuSTAR spectra from the first observation alone, the best-performing model was TBabs*(diskbb+relxill), producing \(\chi^{2}/\nu=452.45/420=1.08\), followed closely by the relxillD variant with \(\log(n)=15\) producing \(\chi^{2}/\nu=459.1/421=1.09\) and by the relxillCp variant with \(\chi^{2}/\nu=460.32/420=1.10\). The other three models tested performed worse. Despite the lack of low-energy coverage below 3 keV when NICER spectra are not included in the fit, we tested the effects of replacing the diskbb component with the kerrbb one. This returned \(\chi^{2}/\nu=452.12/419=1.08\), which formally improves \(\chi^{2}\), but due to the extra free parameter, the improvement over the model assuming a simplistic disk treatment is not statistically significant.
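One rough way to quantify whether the single extra free parameter justifies the small decrease in \(\chi^{2}\) is a classical F-test for nested models, sketched below with the values quoted above. This is only an approximate check (the usual caveats about F-tests for added model components apply), but it illustrates why the improvement is not considered significant.

```python
from scipy.stats import f as f_dist

def ftest_pvalue(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """p-value of the classical F-test for adding free parameters to a chi^2 fit."""
    extra = dof_simple - dof_complex
    f_stat = ((chi2_simple - chi2_complex) / extra) / (chi2_complex / dof_complex)
    return 1.0 - f_dist.cdf(f_stat, extra, dof_complex)

# diskbb vs. kerrbb disk treatment for the first NuSTAR-only fit (values from the text).
print(ftest_pvalue(452.45, 420, 452.12, 419))   # ~0.6, i.e. not a significant improvement
```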
Fitting the second pair of NuSTAR and NICER spectra with the six model variants again produces reasonable fits. By far, the best-performing model
is TBabs*(diskbb+relxill), which returns \(\chi^{2}/\nu=482.14/459=1.05\). As the observations occurred while the source was in a disk-dominated state, one would naively expect the improvement from replacing the diskbb component with kerrbb to be more significant in this case. However, fitting the spectra with the TBabs*(kerrbb+relxill) model produces a worse fit, returning \(\chi^{2}/\nu=493.15/458=1.08\). This result is surprising given that the kerrbb component is more complex, with more free parameters, so one would expect the fit to converge to at least the same value of \(\chi^{2}\). However, the increased complexity of the component, which arises from multiple strongly correlated parameters that produce similar spectral features, paired with the limited data quality, makes the parameter space difficult to explore and the fit prone to converging to local \(\chi^{2}\) minima rather than the global best-fit solution. Furthermore, it is important to note that at this point, the majority of the contribution to \(\chi^{2}\) comes from instrumental residuals in the NICER spectrum and from possible differences between the FPMA and FPMB instruments on NuSTAR. Nevertheless, the reflection component remains similar regardless of the assumed disk model.
When fitting the NuSTAR spectra of the second observation without including the NICER spectrum, the fits from multiple models converge to the same solution, in the same region of the parameter space. The models TBabs*(diskbb+relxillD) with \(\log(n)=15\), TBabs*(diskbb+relxillD) with \(\log(n)=19\), TBabs*(diskbb+relxill), and TBabs*(kerrbb+relxill) produce \(\chi^{2}/\nu=370.15/349=1.06\), \(\chi^{2}/\nu=371.85/349=1.07\), \(\chi^{2}/\nu=372.62/348=1.07\), and \(\chi^{2}/\nu=372.27/346=1.08\), respectively. A peculiar aspect of these results is that the relxillD variant with \(\log(n)=15\) performs better than the relxill variant: both of these particular variants have \(\log(n)\) fixed at 15, and the only difference between the two is that relxill allows the high-energy cutoff of the incident power-law spectrum to vary, while relxillD fixes it at 300 keV. This suggests that the fit using the TBabs*(diskbb+relxill) model was indeed stuck in a local minimum, but with a fit statistic very similar to the global minimum. Similarly to the case of the first observation, when lacking low-energy coverage, replacing the diskbb component with the kerrbb one does not significantly influence the quality of the fit or the reflection parameter combination.
Following the pipeline described in Draghis et al. (2023), we ran a Markov Chain Monte Carlo (MCMC) analysis of the parameter space on the best fits produced for each observation. For specifics regarding the MCMC analysis, please refer to Section 2.2 in Draghis et al. (2023). We ran the MCMC analysis on the 3 best-performing models that describe the thermal emission from the accretion disk using the diskbb component and on the model that describes the disk emission using kerrbb and the coronal and reflected emission using relxill. We computed the Deviance Information Criterion (DIC; Spiegelhalter et al. 2002) based on all MCMC runs, and we use this number to quantify the goodness of fit and to distinguish between models that perform similarly in terms of statistic produced.
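For reference, the DIC can be computed directly from the posterior samples; a minimal sketch is given below. Here \(D=-2\log L\) is the deviance (for \(\chi^{2}\) fitting, \(\log L=-\chi^{2}/2\) up to a constant), and the numbers in the example are placeholders standing in for values extracted from the MCMC chains.

```python
import numpy as np

def deviance_information_criterion(log_likes, log_like_at_mean):
    """DIC = D_bar + p_D, where D = -2 log L and p_D = D_bar - D(theta_bar)."""
    deviances = -2.0 * np.asarray(log_likes)
    d_bar = deviances.mean()              # mean deviance over the posterior samples
    d_at_mean = -2.0 * log_like_at_mean   # deviance at the posterior-mean parameters
    p_d = d_bar - d_at_mean               # effective number of parameters
    return d_bar + p_d                    # equivalently 2*d_bar - d_at_mean

# Placeholder usage: log_likes would come from evaluating the likelihood at each MCMC
# sample, and log_like_at_mean at the posterior-mean parameter vector.
toy_log_likes = [-280.1, -279.8, -281.0, -279.5, -280.4]
print(deviance_information_criterion(toy_log_likes, -279.3))
```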
When fitting the first NuSTAR observation alone, the TBabs*(diskbb+relxill) model produces DIC=479.51, the TBabs*(diskbb+relxillD) variant with \(\log(n)=15\) produces DIC=486.84, the TBabs*(diskbb+relxillCp) model produces DIC=491.47, and the TBabs*(kerrbb+relxill) model produces DIC=576.53. When fitting this NuSTAR observation jointly with the NICER observation 5203600104, the TBabs*(diskbb+relxillD) variant with \(\log(n)=19\) produces DIC=579.53, the TBabs*(diskbb+relxill) model produces DIC=582.43, the TBabs*(kerrbb+relxill) model produces DIC=583.5, and the TBabs*(diskbb+relxillD) variant with \(\log(n)=15\) produces DIC=598.91.
When fitting the second NuSTAR observation alone, the TBabs*(diskbb+relxillD) variant with \(\log(n)=15\) produces DIC=396.45, the TBabs*(diskbb+relxillD) variant with \(\log(n)=19\) produces DIC=400.47, the TBabs*(diskbb+relxill) model produces DIC=401.58, and the TBabs*(kerrbb+relxill) model produces DIC=545.25. When including the NICER observation 5203600114 and fitting the spectra jointly, the TBabs*(diskbb+relxill) model returns DIC=518.88, the TBabs*(diskbb+relxillD) variant with \(\log(n)=19\) produces DIC=519.99, the TBabs*(diskbb+relxillD) variant with \(\log(n)=15\) produces DIC=521.36, and the TBabs*(kerrbb+relxill) model produces DIC=529.23.
The top sub-panels in Figure 2 show the unfolded spectra of the observations taken during the two epochs, which we analyzed in this paper. The right panels show only the NuSTAR FPMA and FPMB spectra through the red points, while the left panels include the NICER spectra, shown through the blue points. The solid lines represent the total best-fit models, the dashed lines represent the contribution of the diskbb component in the models, and the dotted lines represent the contributions of the best-performing reflection component. Subpanels b) and c) show the contribution to the residuals
Figure 2: Sub-panels (a) show the unfolded spectra of XTE J2012+381. The blue points represent the NICER spectra, and the different shades of red indicate the spectra from the NuSTAR FPMA and FPMB detectors. The solid lines represent the total best-fit models, while the dashed and dotted lines show the contributions to the model by the diskbb and relxill/relxillD components, respectively. Sub-panels (b) show the residuals in terms of \(\sigma\) for the TBabs*(diskbb+powerlaw) model. The residuals show clear indication of relativistic reflection for both observations. Sub-panels (c) show the residuals of the best-fit models, TBabs*(diskbb+relxill) or TBabs*(diskbb+relxillD), respectively, for each observation, together with the statistic produced by the models. Sub-panels (d) show the ratio of data to model for the best-fit models. The left panels show the spectra and residuals when fitting NICER and NuSTAR observations jointly, while the right panels show the results obtained when fitting the NuSTAR observations alone. The top panels indicate the first epoch (December 29, 2022), while the bottom panels show the second epoch (January 18 and 21, 2023 for NuSTAR and NICER respectively).
in terms of \(\sigma\) when fitting the spectra using the TBabs*(diskbb+powerlaw) model, which does not account for relativistic reflection, and with the best-performing models that do account for reflection, respectively, together with the fit statistic produced. Subpanels d) show the ratio of data to model for the best-performing reflection models.
Visually, the highest contributions to the residuals come from unaccounted-for instrumental features in the NICER spectra: around the Al edge at 1.56 keV and the Si edge at 1.84 keV, due to FPM detector features, and around the Au M edge near 2.2 keV, due to the reflectivity of gold M shells in the NICER X-ray Concentrator (XRC) optics. However, these residuals have a relatively low impact on the total statistic of the fit. We tested the effect of accounting for those residuals by adding gaussian components to the best-fit models. For the first observation, the fit prefers the addition of a gaussian component at 2.27 keV for the Au M edge at 2.2 keV, and another gaussian component at 1.7 keV, at the average of the 1.56 keV Al edge and the 1.84 keV Si edge, accounting for both features. The addition of the two components improves the quality of the fit by \(\Delta\chi^{2}=16\) for 6 additional free parameters. For the second observation, adding a gaussian at 2.26 keV improves the fit by \(\Delta\chi^{2}=13\) for three extra free parameters. We note that all the improvement comes from the NICER spectrum, and the fit to the NuSTAR spectra returns the same \(\chi^{2}\), suggesting that the continuum is constrained in the same way regardless of the correction for instrumental features. As the reflection parameters are nearly entirely determined by features that fall above 3 keV, and the underlying continuum is constrained in the same way regardless of the addition of corrective gaussian components, we chose to continue our analysis without the extra components. This choice was made in order to reduce the complexity of the models and to ensure that variations in the information criteria are driven by the ability to constrain reflection features rather than instrumental features.
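A sketch of this kind of test in PyXspec is shown below. It assumes the spectra are already loaded (as in the earlier sketch) and that the relxill table models are installed; the line energy limits and width are illustrative choices rather than the exact values used in the fits.

```python
from xspec import Model, Fit

Fit.query = "yes"

# Best reflection model without corrective lines.
base = Model("TBabs*(diskbb+relxill)")
Fit.perform()
chi2_base, dof_base = Fit.statistic, Fit.dof

# Same model plus one Gaussian near the instrumental Au M edge (~2.2 keV).
with_line = Model("TBabs*(diskbb+relxill) + gaussian")
with_line.gaussian.LineE.values = [2.26, 0.01, 2.1, 2.1, 2.4, 2.4]   # value, delta, limits
with_line.gaussian.Sigma.values = [0.05, 0.01, 0.0, 0.0, 0.3, 0.3]
Fit.perform()

print("delta chi^2 =", chi2_base - Fit.statistic,
      "for", dof_base - Fit.dof, "extra free parameters")
```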
Figure 3 shows the 1-dimensional histograms of the posterior distributions for the spin (left sub-panels) and inclination (right sub-panels) based on the MCMC runs. The left panels of Figure 3 show the posterior distributions for the two epochs, when fitting only the NuSTAR observations (first and third sub-panels, in the downward direction) and both the NuSTAR and NICER observations (second and fourth sub-panels, in the downward direction). These are grouped by observation analyzed. The right panels in Figure 3 show the posterior distributions produced by the different model variations analyzed when treating the observations from the two different epochs, when analyzing the NuSTAR observations alone, and when also fitting the NICER spectra jointly with the NuSTAR ones. These are grouped by the model used.
Similarly to the prescription of Draghis et al. (2023), we combined the posterior distributions of the best-performing models in terms of DIC for the BH spin and inner disk inclination for the two epochs. The top panels in Figure 4 show the histograms of the posterior samples for the spin (left) and inclination (right) when fitting the NuSTAR and NICER observations jointly, with the blue curves indicating the measurements based on observations taken during the first epoch and the red curves indicating the measurements based on observations taken during the second epoch. The bottom panels show the posterior distributions obtained when fitting only the NuSTAR spectra from the two epochs. The line width of the blue and red curves indicates the weighting used when combining the measurements with a beta distribution; this weighting was calculated to be proportional to the ratio of the reflected to total flux in the 3-79 keV band. The black curves in Figure 4 indicate the combined beta distribution obtained based on the mode of the posterior distributions of the parameters \(a\) and \(b\) describing the beta distribution, according to the method used in Draghis et al. (2023). The vertical solid and dashed lines represent the mode and the \(\pm 1\sigma\) credible intervals of the combined spin and inclination distribution.
Furthermore, to better encapsulate the differences between the measurements produced by the observations of the source during the two epochs owing to systematic uncertainties, we also combined the measurements using a novel method. Upon running the Bayesian algorithm used to combine the individual measurements into a single beta distribution (the black curve), we randomly selected 10000 of the posterior samples generated while running the algorithm and averaged them. These resulting averaged beta distributions are shown in Figure 4 through the solid green curves, and the modes and \(\pm 1\sigma\) credible intervals are shown through the green vertical solid and dashed lines.
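The actual combination follows the Bayesian algorithm of Draghis et al. (2023); purely as an illustration of the idea (reflection-weighted resampling of the spin posteriors, summarized by a beta distribution on a rescaled interval), a much-simplified sketch could look like the following. All numbers are placeholders, and this is not the algorithm used to produce Figure 4.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(42)

# Placeholder spin posteriors for the two epochs (in practice, the flattened MCMC chains).
spin_epoch1 = rng.normal(0.99, 0.005, 20000).clip(-0.998, 0.998)
spin_epoch2 = rng.normal(0.95, 0.05, 20000).clip(-0.998, 0.998)
weights = np.array([0.8, 0.2])   # placeholder reflected/total 3-79 keV flux ratios

# Resample each chain in proportion to its weight, then map spin onto (0, 1) for a beta fit.
n_draw = np.round(weights / weights.sum() * 50000).astype(int)
combined = np.concatenate([rng.choice(spin_epoch1, n_draw[0]),
                           rng.choice(spin_epoch2, n_draw[1])])
u = np.clip((combined + 0.998) / (2 * 0.998), 1e-6, 1 - 1e-6)
a_par, b_par, _, _ = beta.fit(u, floc=0.0, fscale=1.0)

mode_u = (a_par - 1.0) / (a_par + b_par - 2.0)        # mode of the fitted beta (a, b > 1)
lo_u, hi_u = beta.ppf([0.159, 0.841], a_par, b_par)   # ~1-sigma credible bounds
to_spin = lambda x: x * 2 * 0.998 - 0.998
print("spin mode:", to_spin(mode_u), "interval:", to_spin(lo_u), to_spin(hi_u))
```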
The insert in the bottom left panel of Figure 4 shows the complete histogram of the posterior samples for the BH spin when fitting the second NuSTAR observation alone with the relxillD model with \(\log(n)=15\), which performs best in terms of DIC among the models used to fit this spectrum. While running the MCMC analysis, the walkers discovered a second combination of parameters that is favored similarly, in terms of the statistic produced, to the one that was used to initialize the walkers. While the initial best fit favored a high BH spin, the second solution for this model favors a moderately
low spin, consistent with a non-rotating BH. While the high-spin solution also has a high inner emissivity index \(q_{1}\), the low-spin solution takes low values of \(q_{1}\). As suggested by Fabian et al. (2014), such solutions should be treated as lower limits only: for flat emissivity profiles, the inner disk radius in the models is pushed outward in order to match the flux. When the inner disk radius is linked to the size of the ISCO, this translates to a lowering in measured BH spin. Despite the two solutions producing similar \(\chi^{2}\) values, the walkers in the MCMC analysis favor the low-spin solution, as that region of the parameter space is wider and easier to explore. The likelihood space for the high-spin solution is very narrow, making it easy for the walkers to leave the high-spin solution and explore the low-spin one, but very difficult to return to the high-spin region of the parameter space. This combination of parameters only produces a good fit for the relxillD variants when fitting the second NuSTAR observation alone. Low-spin solutions are strongly disfavored when fitting the same NuSTAR observation jointly with a NICER spectrum, when fitting the NuSTAR spectra alone with other relxill variants, or when fitting the other NuSTAR observation, either alone or jointly with a NICER observation, with any relxill variant. As the second observation shows much weaker reflection than the first, it carries less weight in the combination; therefore, despite the fact that many of the posterior samples in this MCMC run prefer a low spin, the combined measurement still yields a high value. However, the lower limit of the credible interval takes a lower value, suggesting that in this case, the BH spin is poorly constrained.
Table 1 shows the modes of the posterior distributions along with the \(\pm 1\sigma\) credible intervals of the posterior distributions based on the MCMC analysis of the best-performing models for the two epochs, when including a NICER observation in the fit and when fitting the NuSTAR spectra alone. As shown in the insert in the bottom left panel in Figure 4, the preferred solution has a low spin. For comparison, we include the results produced when fitting only the spectra from the second NuSTAR observation with the default relxill flavor. In Appendix A we show the 1D and 2D parameter space of spin, inclination, and \(\chi^{2}\) based on the MCMC analysis of the joint NuSTAR-NICER fits, and also the complete
Figure 3: Histograms of the posterior distributions in the MCMC analysis for spin and inclination. The two panels on the left show the distributions grouped by observation analyzed, when using different models, with the first and third rows indicating fits to the NuSTAR observations alone, and the second and fourth rows indicating joint NuSTAR and NICER fits. The two panels on the right show the distributions grouped by the model used to fit the observations.
corner plots of the MCMC runs that produced the best DIC values for both the joint NuSTAR-NICER fits and the NuSTAR only fits.
The measured Fe abundance is high, \(A_{\rm Fe}\gtrsim 7\), and consistent in both observations, regardless of the inclusion of low-energy coverage through NICER spectra in the analysis. One possible explanation for the enhanced Fe abundance is levitation of Fe ions by radiation pressure in the innermost regions of the accretion disk, which would enhance the abundance of iron in the disk photosphere (Reynolds et al., 2012). The high-energy cutoff is high and poorly constrained in both observations. The disk component and the Galactic absorption are constrained differently when including the soft NICER coverage in the fits. However, regardless of how the underlying disk continuum is accounted for, the relativistic reflection features produce similar spin constraints. This is further supported by the consistent results obtained when replacing the diskbb component in the models with kerrbb - see Figure 3. While models that describe the contribution of the accretion disk through the kerrbb component are disfavored in terms of DIC due to the complexity of the model, the spin measurements agree well with those
Figure 4: The left panels show the posterior distributions resulting from the MCMC analysis of XTE J2012+381 for spin, while the right panels show the posterior distributions for the inclination of the inner accretion disk in the model. The width of the lines is proportional to the ratio of the reflected flux to the total flux in the 3-79 keV band, which was used as the weighting when combining the posterior distributions. The top panels indicate the results obtained from joint NICER and NuSTAR fits, while the bottom panels represent the results obtained from NuSTAR-only fits. The blue and red lines represent the posteriors obtained when fitting the two sets of observations analyzed, and the black curves represent the combined inferred distributions, derived as highlighted by Draghis et al. (2023a). The green lines represent the combined distributions obtained as highlighted in this paper, which better account for systematic variations between the results obtained from the independent observations. The solid vertical black and green lines represent the modes of the combined distributions, and the dashed vertical black and green lines represent the 1\(\sigma\) credible intervals of the respective measurements. The insert in the lower left panel shows the complete posterior distribution for the spin obtained when fitting the second NuSTAR observation alone with the TBabs*(diskbb+relxillD) model, which obtains two solutions similar in terms of statistic produced, but with very different spin constraints. Nevertheless, as the first observation (blue) carries significantly more weight in the combining algorithm due to its stronger reflection, the combined distribution still significantly favors a high spin, but with a broader lower limit on the credible interval.
from models that describe the disk contribution through the simpler diskbb component.
The measured ionization parameter is high during the first observation, \(\log(\xi)\sim 4\), but low during the second observation, \(\log(\xi)\lesssim 1\). Fixing the ionization measured from one observation in the model used to fit the other observation produces bad fits, suggesting that, given the size of the parameter space of the models, the measured ionization values are required by the data. It is important to acknowledge that the change in measured ionization between the two epochs is likely not physical, as the ionizing flux did not change significantly between the two sets of observations, and it is unlikely that the accretion disk density changed by many orders of magnitude over such a short timescale. The more likely explanation for the combination of the peculiar change in ionization parameter and the elevated Fe abundance has to do with the increased reflection fraction during the second epoch, when the ionization is lower, and, more importantly, with the relatively low accretion disk densities assumed in the models. While throughout our analysis we probed values of the disk density of \(10^{15}-10^{19}\) cm\({}^{-3}\), it is likely that much higher disk densities would, in fact, help reconcile the apparently abnormal measurements of \(\log(\xi)\) and \(A_{\rm Fe}\) (see, e.g., Tomsick et al., 2018).
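The direction of this argument follows from the standard definition of the ionization parameter, \(\xi=L_{\rm ion}/(n r^{2})\): for a fixed illuminating luminosity and radius, raising the assumed density lowers \(\xi\), and vice versa. The sketch below only illustrates this scaling; the luminosity and radius are placeholder values.

```python
import numpy as np

def log_xi(ionizing_luminosity_cgs, density_cm3, radius_cm):
    """log10 of the ionization parameter xi = L_ion / (n r^2), in erg cm s^-1."""
    return np.log10(ionizing_luminosity_cgs / (density_cm3 * radius_cm ** 2))

L_ion = 1e37    # erg/s, placeholder illuminating luminosity
radius = 1e7    # cm, roughly a few r_g for a ~10 M_sun BH, placeholder
for density in (1e15, 1e19, 1e22):
    print(f"n = {density:.0e} cm^-3  ->  log(xi) = {log_xi(L_ion, density, radius):.1f}")
```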
To probe that, we attempted fitting the data with the reflionx_HD model2 (Jiang et al., 2020; Connors et al., 2021), which allows accretion disk densities up to \(10^{22}\) cm\({}^{-3}\). In full xspec parlance, the model used was TBabs*(diskbb+nthcomp+relconv*atable{reflionx_HD_nthcomp_v2.fits}). This model does fit the data well, formally improving the value of \(\chi^{2}\), but with most of the improvement coming at low energies, in the NICER band, and the quality of the fit being essentially unchanged for the two NuSTAR spectra. In this case, when fixing the accretion disk density to \(10^{22}\) cm\({}^{-3}\), both observations produce consistent ionization measurements \(\log(\xi)\sim 3.5\) and reduced values of \(A_{\rm Fe}\sim 1.5\). However, as the model does not include a parameter that quantifies the reflection fraction, a direct comparison between the outputs of the two models is not trivial. Nevertheless, the spin
\begin{table}
\begin{tabular}{c|c c c c|c|c} \hline \hline ObsID & 80802344002 \& 5203600104 & 80802344002 & 80802344004 \& 5203600114 & 80802344004 & 80802344004 \\ \hline instrument & NuSTAR \& NICER & NuSTAR & NuSTAR \& NICER & NuSTAR & NuSTAR \\ \hline model & relxillD-19 & relxill & relxillD-15 & relxill & \multicolumn{1}{c}{} & & relxill \\ \hline \(N_{H}\) [\(\times 10^{22}\) cm\({}^{-2}\)] & \(1.83^{+0.02}_{-0.02}\) & \(0.8^{+0.3}_{-0.2}\) & \(1.750^{+0.010}_{-0.008}\) & \(0.7^{+0.2}_{-0.1}\) & \(0.8^{+0.2}_{-0.1}\) \\ \hline \(kT_{\rm in}\) [keV] & \(0.726^{+0.006}_{-0.004}\) & \(0.79^{+0.02}_{-0.01}\) & \(0.789^{+0.002}_{-0.001}\) & \(0.815^{+0.006}_{-0.004}\) & \(0.814^{+0.005}_{-0.006}\) \\ norm\({}_{d,A}\) [\(\times 10^{2}\)] & \(10.6^{+0.3}_{-0.3}\) & \(5.5^{+1.2}_{-0.2}\) & \(13.8^{+0.2}_{-0.2}\) & \(10.8^{+0.4}_{-0.4}\) & \(10.8^{+0.6}_{-0.6}\) \\ \hline \(q_{1}\) & \(7.0^{+0.9}_{-0.7}\) & \(9.9^{+0.1}_{-1.0}\) & \(9.9^{+0.1}_{-1.1}\) & \(3.5^{+2.7}_{-0.5}\) & \(3.4^{+4.3}_{-0.4}\) \\ \(q_{2}\) & \(2.4^{+0.6}_{-1.2}\) & \(1.7^{+0.3}_{-0.6}\) & \(1.93^{+0.07}_{-0.60}\) & \(1.9^{+0.2}_{-0.2}\) & \(1.9^{+0.2}_{-0.3}\) \\ \(R_{\rm w}\) [\(r_{\rm g}\)] & \(9^{+6}_{-0.003}\) & \(3.3^{+2.2}_{-0.30}\) & \(2.9^{+1.2}_{-0.02}\) & \(13^{+7}_{-0.6}\) & \(2.6^{+3.8}_{-0.6}\) \\ \(a\) & \(0.990^{+0.003}_{-0.003}\) & \(0.989^{+0.004}_{-0.004}\) & \(0.994^{+0.002}_{-0.002}\) & \(-0.2^{+0.6}_{-0.6}\) & \(0.99^{+0.4}_{-0.4}\) \\ \(\theta\) [\({}^{\circ}\)] & \(65^{+2}_{-3}\) & \(71.2^{+0.8}_{-2.9}\) & \(77^{+20}_{-1}\) & \(80^{+3}_{-6}\) & \(74^{+4}_{-3}\) \\ \(\Gamma\) & \(2.21^{+0.01}_{-0.02}\) & \(2.28^{+0.01}_{-0.02}\) & \(2.30^{+0.03}_{-0.05}\) & \(2.06^{+0.05}_{-0.03}\) & \(2.13^{+0.07}_{-0.04}\) \\ \(\log(\xi)\) & \(3.86^{+0.09}_{-0.8}\) & \(4.21^{+0.05}_{-0.4}\) & \(0.3^{+0.6}_{-0.6}\) & \(\leq 0.004\) & \(0.3^{+1.3}_{-0.3}\) \\ \(A_{\rm Fe}\) & \(9.9^{+0.2}_{-1.2}\) & \(9.6^{+0.4}_{-3.1}\) & \(9.9^{+0.1}_{-1.5}\) & \(9.8^{+0.2}_{-2.3}\) & \(8^{+1}_{-1}\) \\ E\({}_{\rm Feu}\) [keV] & \(30^{\circ}\) & \(980^{+20}_{-200}\) & \(970^{+30}_{-300}\) & \(30^{\circ}\) & \(900^{+100}_{-400}\) \\ R & \(1.3^{+0.2}_{-0.2}\) & \(1.4^{+0.3}_{-0.2}\) & \(3.8^{+0.6}_{-0.7}\) & \(1.8^{+0.6}_{-0.6}\) & \(1.5^{+0.7}_{-0.3}\) \\ norm\({}_{x,{\rm A}}\)[\(\times 10^{-3}\)] & \(5.5^{+0.4}_{-0.4}\) & \(6.0^{+0.5}_{-0.5}\) & \(1.8^{+0.1}_{-0.1}\) & \(1.23^{+0.04}_{-0.04}\) & \(1.4^{+0.1}_{-0.1}\) \\ \hline norm\({}_{d,{\rm B}}\)[\(\times 10^{2}\)] & \(10.2^{+0.3}_{-0.5}\) & \(5.2^{+1.1}_{-0.3}\) & \(13.5^{+0.2}_{-0.2}\) & \(10.7^{+0.4}_{-0.4}\) & \(10.7^{+0.5}_{-0.5}\) \\ norm\({}_{x,{\rm B}}\)[\(\times 10^{-3}\)] & \(5.4^{+0.5}_{-0.4}\) & \(6.0^{+0.7}_{-0.5}\) & \(1.8^{+0.1}_{-0.1}\) & \(1.26^{+0.06}_{-0.05}\) & \(1.4^{+0.1}_{-0.1}\) \\ \hline norm\({}_{d,{\rm N}}\)[\(\times 10^{2}\)] & \(8.5^{+0.3}_{-0.3}\) & \(13.3^{+0.1}_{-0.2}\) & \(\cdots\) & \(\cdots\) \\ norm\({}_{x,{\rm N}}\)[\(\times 10^{-3}\)] & \(4.8^{+0.4}_{-0.3}\) & \(\cdots\) & \(1.8^{+0.2}_{-0.2}\) & \(\cdots\) & \(\cdots\) \\ \hline \(\chi^{2}/\nu\) & \(564^{+5}_{-8}(550.09)/557\) & \(464^{+6}_{-4}(452.45)/420\) & \(495^{+7}_{-6}(482.14)/459\) & \(382^{+5}_{-5}(370.15)/349\) & \(384^{+6}_{-4}(372.62)/348\) \\ \hline \end{tabular} Note. – In this table, we report the modes of the posterior distributions in the MCMC analysis, along with the \(1\sigma\
measurement (coming from the relconv model, not the reflionx_HD one) is again high, consistent with the measurement derived through our analysis. In the future, large-scale studies and comparisons between models using different accretion disk densities will enable quantifying the systematic effect of the assumed disk density on the measured BH spins. For now, as the reflionx_HD model produces values consistent with our analysis using the relxill family of models, and in the interest of maintaining consistency with the rest of the sample of Draghis et al. (2023a), we report the results of our analysis using relxill and defer the comparison with other families of models to future work.
Low and intermediate values of the ionization parameter lead to a narrower Fe K line profile (Matt et al., 1993; Fabian et al., 2000). In contrast, increasing the inclination would broaden the blue wing of the line, while increasing the spin would broaden the red wing of the line. Furthermore, high inclination systems often show evidence of disk winds which produce absorption features around 7 keV (see, e.g., Miller et al., 2006; King et al., 2012; Ponti et al., 2012; King et al., 2014; Draghis et al., 2020). If present and unaccounted for, such an absorption feature could possibly lead to biased characterizations of the ionization, BH spin, and viewing inclination.
While not apparent in the residuals produced when fitting the observations treated in this work, we tested whether a wind-like feature is present in this system by combining 52 NICER observations taken while the source was in a relatively stable soft state, throughout which we do not expect the continuum and reflection features to vary significantly. The observations used when combining the spectra are highlighted between the vertical dashed magenta lines in Figure 1. We used the addspec.py code written by Johannes Buchner 3 to obtain an observation with an effective exposure of \(\sim\)190 ks, and the associated background and response files. The residuals produced when fitting the combined NICER spectrum with the best performing reflection model are shown in Figure 5. The red vertical line shows the position of the H-like Fe XXVI transition at 6.97 keV. While visually it appears that the residuals are suggestive of a narrow absorption-like feature, this is not statistically significant, similarly to the broader absorption-like feature just below 8 keV. Even if present, such a feature is unlikely to significantly impact the ability of the models to constrain the parameters, and the values obtained are likely to be impacted by degeneracies in the parameter space given the quality of the data.
Footnote 3: https://github.com/JohannesBuchner/addspec.py
Lastly, we further explored the kerrbb variants of our models with the goal of placing better constraints on the mass of the BH in XTE J2012+381. We analyzed the best-fit models using the kerrbb component to describe the thermal emission from the accretion disk and the relxill component to describe the coronal emission and the reflected component, and used the models to perform joint fits of the NuSTAR and NICER observations during the two epochs. We linked the BH spin and inner disk inclination between the two components, fixed the normalization of the kerrbb component to 1, constrained the distance to the system to be between 3.3 kpc and 7.5 kpc as suggested by the Gaia measurement, and allowed all the other parameters of kerrbb to vary freely, namely the BH mass, the accretion rate, and the spectral hardening factor. Based on the observations from the first epoch, the mass is poorly constrained, giving \(M=9.6^{+40.0}_{-1.5}\) M\({}_{\odot}\). However, the observations from the second epoch, during which the disk component dominates over the coronal and reflected emission, produce a BH mass constraint of \(M=10.0^{+3.0}_{-0.4}\) M\({}_{\odot}\). It is important to note that for this measurement, there is a strong correlation between this parameter and the other parameters of the kerrbb component, and that the spectral hardening factor during the fit takes a very low value of \(f\sim 1\). Fits with larger values of \(f\) fail to find similarly good solutions in terms of statistic for the observations during this epoch.
## 4 Discussion
We analyzed two NuSTAR and 105 NICER observations of the late 2022 outburst of XTE J2012+381. By combining the information from two sets of simultaneous
Figure 5: Residuals in terms of \(\sigma\) in the 6-8 keV band produced when fitting the spectrum obtained by combining 52 NICER observations from the soft state of XTE J2012+381 using the best-fit reflection model obtained by fitting the second NuSTAR observation. The NICER observations used to obtain this combined spectrum are indicated by the two vertical dashed magenta lines in Figure 1. The vertical red line in this plot indicates the rest energy of the H-like Fe XXVI at 6.97 keV.
NICER and NuSTAR observations taken during two epochs three weeks apart, we measured the spin of the BH in the system to be \(a=0.988^{+0.008}_{-0.030}\) and the inclination of the inner accretion disk to be \(\theta=68^{+6}_{-11}\) degrees. This measurement was conducted using the pipeline established by Draghis et al. (2023), by testing an array of models describing the effects of relativistic reflection on the spectra, distinguishing the models using the DIC computed using the posterior distribution of an MCMC analysis, and combining the posterior distributions of the spin and inclination parameters using a Bayesian framework to maximize the information provided by all the existing observations.
We ran our analysis pipeline on the two NuSTAR observations alone, and on joint fits to the NuSTAR spectra and simultaneous NICER spectra. When not including the low-energy coverage of NICER spectra, the measured Galactic column density is underestimated and the accretion disk temperature is slightly overestimated. In our analysis, we tested the effects of modeling the disk component both through the simplistic diskbb model or through the more physically accurate kerrbb model. Also, we tested the effects of including low-energy coverage through NICER spectra. Regardless of how the thermal emission from the accretion disk is modeled, as long as the continuum is well modeled, the reflection models are able to recover the shape of the relativistically broadened features and place agreeing constraints on the BH spin and viewing inclination of the inner accretion disk.
This measurement highlights the importance of obtaining multiple observations when trying to understand the systematic uncertainties of BH spin measurements using relativistic reflection. The two main sources of systematic uncertainty for BH spin measurements are peculiarities in the data and aspects of the models that are not yet fully understood and characterized. An example of the former would be phenomena more complex than what we account for with our models, but which do not contribute significantly enough to be obviously required during the analysis (e.g., weak disk winds). An example of the latter is the effect of the accretion disk density in our models: how it connects to the inferred Fe abundance, ionization, and reflection fraction, and how all of that ultimately affects our ability to constrain the underlying continuum, isolate the effects of reflection, and measure the dynamical contributions to the broadening of spectral features, which directly constrain the spin.
In this work, we take the method used in Draghis et al. (2023) to combine the posterior distributions of the spin and inclination parameters obtained from the MCMC analysis of the fits to independent NuSTAR observations and we expand it to better encapsulate and account for the systematic differences between independent measurements on different observations. As seen with the two observations of this source, obtaining stronger BH spin constraints is facilitated by having stronger reflection features during observations. Spectra taken while the sources are in harder states, where reflection is both stronger and easier to disentangle from the underlying continuum usually lead to better constraints on the BH spin than spectra taken during softer states. However, in order to understand the possible systematic differences that can lead to measurement uncertainties, multiple observations are required. Furthermore, obtaining multiple observations throughout the duration of BH XB outbursts reduces the likelihood of obtaining a single observation that does not allow placing reliable constraints on the BH spin (i.e., fitting only the NuSTAR observation 80802344004).
The high spin of the BH in XTE J2012+381 is in good agreement with the distribution of spins measured in XB systems, and inconsistent with the distribution of spins of BHs in BBH mergers observed through GW. By expanding the observed sample of BH spins in XB, we begin to better explore possible observational biases that could explain the difference between the spins of BH in XB and the spins of BH in BBH.
The distribution of BH spins can be used to construct a unified view of stellar-mass BH formation and evolution in binary systems. While high-mass X-ray binary (HMXB) systems are ideal candidates to link the population of XB to that of BBH as they contain a BH and a massive star that could also evolve to produce a secondary BH, Gallegos-Garcia et al. (2022) find that only up to 11% of HMXB that experience an accretion episode while both stars are still on the main sequence (Case-A mass transfer) can evolve to eventually form a merging BBH system, and that at most 20% of merging BBH systems originate from Case-A HMXB. Additionally, Liotine et al. (2023) find that observational selection effects can further divide the link between HMXB and BBH through the fact that only around 0.6% of detectable HMXB could produce a BBH system that would merge in a Hubble time. Therefore, independently understanding the different BH spin distributions is imperative, and the most pragmatic way to expand the spin distribution in XB is to continue to measure the spins of as many BHs as possible.
In the future, observations with XRISM (Tashiro et al., 2018) and ATHENA (Barret et al., 2018) will provide high-resolution studies of the emission from XB systems, enabling more precise studies of the relativistically reflected
radiation while better accounting for the effects of accretion disk winds, stellar companion winds, and the specifics of the physics of the accretion disk and the compact corona. Furthermore, missions such as HEX-P (Madsen et al., 2018) or AXIS (Mushotzky, 2018) will be able to detect outbursts from more, fainter XB systems, significantly expanding the sample of measured BH spins in XB.
We thank the NuSTAR director, Fiona Harrison, and the mission scheduling team for making the observations. This research has made use of data and software provided by the High-Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC, and of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (Caltech, USA). We thank the anonymous reviewer for their comments and suggestions, which have improved the quality of this paper.
_Software:_ Astropy (Astropy Collaboration et al., 2013, 2018), emcee (Foreman-Mackey et al., 2013), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), scipy (Virtanen et al., 2020), pandas (Reback et al., 2022; Wes McKinney, 2010), corner (Foreman-Mackey, 2016), iPython (Perez & Granger, 2007), Xspec (Arnaud, 1996), relxill (Dauser et al., 2014; Garcia et al., 2014).
## Appendix A Corner plot
The top-left, middle-right, and bottom-right panels in Figure 6 show the 1D histograms of the posterior distributions of the spin, inclination, and reduced \(\chi^{2}\) based on the MCMC analysis on the joint NuSTAR-NICER observations, with the blue curves representing the results from the first epoch, and the red curves representing the results from the second epoch. The width of the contours representing the histograms is proportional to the strength of reflection during the observations, which was used as weighting when comparing the independent measurements (see Figure 4 and its explanation). The solid and dashed lines represent the modes and \(1\sigma\) credible intervals of the individual posterior distributions. The histograms were normalized so that the peak of the distribution has a value of 1. The center-left and bottom-left panels show the 2D histograms of the _a-\(\theta\)_ and _a-\(\chi^{2}\)_ parameter space in the MCMC analysis, respectively. The dashed, dash-dot, and dotted contours in these panels represent the \(1\sigma\), \(2\sigma\), and \(3\sigma\) confidence intervals, respectively. The black and green points represent the modes and uncertainty of the values obtained when combining the individual posterior distributions into a single distribution using the two methods highlighted in Figure 4. We report the green point as the result of this analysis.
Figure 7 shows the complete corner plot generated from the posterior samples resulting from the MCMC analysis. The diagonal entries show the marginalized 1-dimensional probability distributions for the individual parameters in the analysis, and the rest of the panels show the 2-dimensional regions of the parameter space for combinations of parameters in the models. The bottom-left half of the plot shows the corner plot of the observations from the first epoch, with the dark blue contours representing the results from the joint NICER and NuSTAR analysis, and the light blue representing the results from the independent NuSTAR analysis. The top-right corner shows the corner plot of the analysis from the second epoch, with the yellow contours showing the results of the joint NICER and NuSTAR analysis, and the red contours showing the individual NuSTAR analysis. The contours in the plot represent the \(1\sigma\), \(2\sigma\), and \(3\sigma\) confidence intervals in the 2D posterior distribution for each parameter combination. The vertical lines in the 1D posterior distributions (the subplots on the diagonal) represent the values around which Gaussian proposal distributions were generated and used to initialize the walkers in the MCMC run. The priors for the parameters are uniform in the parameter range allowed by the model components. For simplicity, we only plot the normalizations of the diskbb and relxill components for the NuSTAR FPMA spectra.
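Corner plots of this kind can be generated directly from the flattened MCMC chains with the corner package; a minimal sketch with placeholder samples is given below (the real figure uses the full set of free parameters from the fits).

```python
import numpy as np
import corner
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Placeholder chains standing in for the flattened emcee samples of a few parameters.
samples = np.column_stack([
    rng.normal(0.99, 0.004, 20000),   # spin a
    rng.normal(70.0, 2.0, 20000),     # inclination theta [deg]
    rng.normal(2.2, 0.03, 20000),     # photon index Gamma
])

fig = corner.corner(
    samples,
    labels=[r"$a$", r"$\theta$ [deg]", r"$\Gamma$"],
    levels=(0.393, 0.865, 0.989),     # 1, 2, 3 sigma contours for 2D Gaussian posteriors
    show_titles=True,
)
fig.savefig("corner_example.png", dpi=150)
plt.close(fig)
```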
The most obvious trends are the correlations between the Galactic column density \(N_{\rm H}\) and the parameters of the diskbb model, namely the disk temperature \(kT_{\rm in}\) and the normalization of the component \(norm_{\rm disk,A}\). However, as discussed above, as long as the continuum is fully characterized, the reflection features are identified to be the same, producing very similar spin measurements. The data are often unable to properly constrain the outer emissivity index \(q_{2}\) and the breaking radius \(R_{\rm br}\), as the emissivity is often steep in the inner disk regions, strongly suppressing the contribution at larger distances. As discussed in the main text, the inner emissivity index contributes strongly to the ability to constrain the BH spin. As we can see here, the outer emissivity and the breaking radius do not impact the spin directly, but rather through the ability to constrain \(q_{1}\), which in turn influences our ability to measure the inner disk radius, which constrains the spin.
Figure 6: Two-dimensional histograms of the \(a\)-\(\theta\) (center-left panel) and of the \(a\)-\(\chi^{2}\) (bottom-left panel) parameter space based on the posterior samples in the MCMC analysis. The dashed, dash-dot, and dotted contours in these panels represent the \(1\sigma\), \(2\sigma\), and \(3\sigma\) confidence intervals, respectively. The top-left, middle-right, and bottom-right panels show the 1D histograms of the posterior distributions in the MCMC analysis for spin, inclination, and reduced \(\chi^{2}\). The width of the contours is proportional to the strength of reflection in the observation, which was used as weighting when combining the posterior distributions into a single measurement. The solid lines represent the modes of the distributions, and the dashed lines represent the \(\pm 1\sigma\) credible regions. Throughout the entire figure, the blue lines represent the results from the first epoch, and the red lines represent the results from the second epoch. In the middle-left panel, the black and green points show the values obtained by combining the posterior distributions through the two different methods highlighted in this paper. |
2306.09992 | Rewriting the Script: Adapting Text Instructions for Voice Interaction | Voice assistants have sharply risen in popularity in recent years, but their
use has been limited mostly to simple applications like music, hands-free
search, or control of internet-of-things devices. What would it take for voice
assistants to guide people through more complex tasks? In our work, we study
the limitations of the dominant approach voice assistants take to complex task
guidance: reading aloud written instructions. Using recipes as an example, we
observe twelve participants cook at home with a state-of-the-art voice
assistant. We learn that the current approach leads to nine challenges,
including obscuring the bigger picture, overwhelming users with too much
information, and failing to communicate affordances. Instructions delivered by
a voice assistant are especially difficult because they cannot be skimmed as
easily as written instructions. Alexa in particular did not surface crucial
details to the user or answer questions well. We draw on our observations to
propose eight ways in which voice assistants can ``rewrite the script'' --
summarizing, signposting, splitting, elaborating, volunteering, reordering,
redistributing, and visualizing -- to transform written sources into forms that
are readily communicated through spoken conversation. We conclude with a vision
of how modern advancements in natural language processing can be leveraged for
intelligent agents to guide users effectively through complex tasks. | Alyssa Hwang, Natasha Oza, Chris Callison-Burch, Andrew Head | 2023-06-16T17:43:00Z | http://arxiv.org/abs/2306.09992v1 | # Rewriting the Script:
###### Abstract.
Voice assistants have sharply risen in popularity in recent years, but their use has been limited mostly to simple applications like music, hands-free search, or control of internet-of-things devices. What would it take for voice assistants to guide people through more complex tasks? In our work, we study the limitations of the dominant approach voice assistants take to complex task guidance: reading aloud written instructions. Using recipes as an example, we observe twelve participants cook at home with a state-of-the-art voice assistant. We learn that the current approach leads to nine challenges, including obscuring the bigger picture, overwhelming users with too much information, and failing to communicate affordances. Instructions delivered by a voice assistant are especially difficult because they cannot be skimmed as easily as written instructions. Alexa in particular did not surface crucial details to the user or answer questions well. We draw on our observations to propose eight ways in which voice assistants can "rewrite the script"--summarizing, signposting, splitting, elaborating, volunteering, reordering, redistributing, and visualizing--to transform written sources into forms that are readily communicated through spoken conversation. We conclude with a vision of how modern advancements in natural language processing can be leveraged for intelligent agents to guide users effectively through complex tasks.
voice assistants, instructions, voice user interfaces, remixing, complex task guidance, summarization, splitting, reordering +
Footnote †: © 2023 Copyright held by the owner/author(s).
We suggest addressing these challenges by designing voice assistants to "rewrite the script": adapt written instructions into a form that is easier to follow in hands- and eyes-free settings. In our discussion, we outline a set of capabilities involved in rewriting the script: summarize, signpost, split, elaborate, volunteer, reorder, redistribute, and visualize (see Table 3). These capabilities revolve around ways that voice assistants can rearrange information to communicate more effectively with their users. Furthermore, many of these capabilities are already possible with the current state of natural language processing research, especially in task-oriented dialogue, event reasoning, and commonsense reasoning. Given the complementary advances in natural language processing and human-computer interaction, science fiction continues to become a reality at a fast pace. We conclude with a vision of what it might mean for this kind of voice assistant to become part of that reality.
## 2. Background and Related Work
In this section, we review research that offers insight on designing voice assistants for complex task guidance, including voice assistant design, instruction design, and task interfaces.
### Designing Voice Assistants
The human-AI interaction community has developed several sets of guidelines for designing good voice assistants. In their landmark paper, Amershi et al. (Amershi et al., 2018) propose eighteen heuristics for designing AI-infused systems, including that they should indicate what they can do and how well they can do it. Notably, voice assistants are known for not communicating their affordances well (Mikolov et al., 2016). Although voice assistants are relatively new, early work in the 1990s warned that voice interfaces should be "designed from scratch, rather than directly translated from their graphical counterparts" (Sherwani et al., 2017; Sherwani et al., 2017). Sherman et al. (Sherwani et al., 2017)'s later work on VoicePedia echoes this warning: this voice user interface (VUI) mimicked Wikipedia's graphical user interface (GUI) as closely as possible and was rejected in user studies. Since then, contemporary researchers have tackled voice interface design in a new way: transforming existing resources specifically for audio rather than treating VUIs as spoken GUIs (Sherwani et al., 2017; Sherwani et al., 2017). We follow Murad and Munteanu (Munta et al., 2017)'s lead in establishing usability principles for voice interaction from the ground up.
Additional guidance for designing voice assistants focuses on their abilities as conversational agents. Langevin et al. (Langevin et al., 2017)'s heuristics for conversational agents emphasize the need to guide users through the available affordances without overwhelming them. Clark et al. (Clark et al., 2017) dive even deeper into the meaning of a good conversation, suggesting that conversational agents concentrate on functional rather than social goals. Volkel et al. (Volkel et al., 2017) similarly find that envisioned conversational agents were just social enough to support highly interactive, multi-turn conversations while helping users with a task without becoming a "friend." The design space for conversational agents is large and complex since users can communicate a wide range of intents in many ways (Clark et al., 2017; Sherwani et al., 2017) and contexts, like while driving (Sherwani et al., 2017). Our work focuses on cooking with a voice assistant, reiterating the need to make affordances clearer and support long-form conversations for complex task guidance.
### The Design of Instructions
Previous findings in instruction design and cognitive science can help inspire the design of voice interfaces for complex task guidance. One classic result in cognitive science famously suggests that working memory is limited to "seven, plus or minus two," items (Sherwani et al., 2017). This implies that instructions should limit the amount of information that a user has to keep track of at a time. Following instructions delivered over audio poses unique challenges because verbal instructions are processed by the phonological loop in the brain (Sherwani et al., 2017). While the phonological loop is faster and more flexible than the structures for visual processing, the information in it decays more rapidly (Sherwani et al., 2017). Voice assistants need to be particularly strategic about the level of detail provided in any one instruction to respect the limits of our neurobiology.
Prior work suggests some techniques to offer instructions that are mindful of these limits. Simply replacing written text with spoken text--the primary approach to complex task guidance through voice assistants--is not necessarily the right approach since it has led to detrimental effects in some studies (Sherwani et al., 2017). Rather than reciting text verbatim, one approach is to present concrete, well segmented instructions to help users perform unfamiliar tasks (Sherwani et al., 2017). Regardless of delivery format, concrete procedures have been shown to improve immediate performance while abstract procedures help with learning and transfer (Sherwani et al., 2017). Instruction formats can also embrace minimalism, an approach introduced by Carroll (Carroll, 2017) that focuses on learning skills as needed rather than all ahead of time. In our study, we focus on how written text may need to be transformed for audio-first interfaces given these insights on cognitive processing, concreteness, and minimalism.
### Intelligent Cooking Support
Our paper focuses on recipes as one type of instruction that voice assistants may help users follow. The human-computer interaction community has broadly explored the design of interfaces to support cooking, many of which intersect with the goals of our work. Chang et al. (Chang et al., 2018)'s RecipeScape, for instance, helps users interactively browse a collection of recipes. Other tools help people follow along with recipes, such as Kosch et al. (Kosch et al., 2017)'s digital cooking coach, which provides _in situ_ auditory and visual feedback on a cook's tasks. In a more immersive scenario, Sato et al. (Sato et al., 2017)'s MimiCook and Chen et al. (Chen et al., 2018)'s "smart kitchen" embed step-by-step instructions and nutritional information into kitchen counters and screens. Some interfaces allow users to navigate through video recipes with their voices, which requires voice assistants to understand a range of intents (Sherwani et al., 2017). Our work explores how users navigate through audio recipes as a case study on voice interaction for any complex task.
### Complex Task Support
Along with cooking, the human-computer interaction community has envisioned many ways to help people accomplish a wide variety of tasks by augmenting their workspaces (Sherwani et al., 2017; Sherwani et al., 2017) and devices (Sherwani et al., 2017). Conversational agents that provide instructional support--like Vitro, a voice assistant that guides researchers through cell culturing procedures (Sherwani et al., 2017)--are particularly relevant to our work. Iris, on the other hand, is a text-based conversational agent that chains together simple commands to perform complex data science tasks (Sherwani et al., 2017). Prior
research has also indicated the nuance involved in helping users navigate sets of instructions with a voice interface. Abdolrahmani et al. (Abolrahmani et al., 2018) propose that voice assistants in complex environments like an airport provide support through short transactions. Other work has suggested that interfaces should support multiple kinds of pauses and jumps (Beyer and Holtzblatt, 2017), handle implicit conversation cues (Sandel et al., 2018), and support jumps according to both conventional navigation instructions and content-based anchors (Sandel et al., 2018). Our paper contributes a detailed exploration of the challenges involved in following audio-first task guidance and suggestions to overcome them.
## 3. Methods
We designed an observational study to understand how voice assistants can effectively guide people through complex tasks, using recipes as an example. We recruited participants to choose and prepare recipes at home while being guided by a voice assistant (see Figure 1). We aimed to answer the following research questions:
* **RQ1**: What challenges do users face when following instructions to perform complex tasks given by a contemporary, state-of-the-art voice assistant?
* **RQ2**: What can be done to address these challenges in future voice assistants?
Our goal was to clearly document the challenges in a way that led to concrete suggestions for solutions. To do this, we opted to perform an observational study with deep contextual elements. Even though our study is not a contextual inquiry according to the precise methodology described by Beyer and Holtzblatt (Beyer and Holtzblatt, 2017, Chapter 3)--participants were not using their own Alexa and they would not have performed the task without our intervention--we made heavy use of contextual elements in the design of the study: we observed participants in natural work settings (their homes) working on tasks they cared about (recipes of their own choice), with continual, incremental interpretation of observations during and after the task. Our hope was that this contextual approach would lead to deep, validated, actionable design inspiration while being possible to arrange in a way that a full contextual inquiry would not be.
### Technology Probe
Participants in our study interacted with Amazon Alexa to prepare their recipes. We chose to study Alexa because it was, to our knowledge, the state of the art in hands-free, eyes-free voice interaction. Furthermore, one of the authors of this paper had prior experience working with Alexa for the Alexa Prize Taskbot Challenge, which made us aware of its capabilities for similar tasks (Sandel et al., 2018). We also chose an audio-only device to focus on the design of spoken communication, which voice assistants of all kinds need to support.
We originally used a Wizard-of-Oz approach to represent an idealized version of a voice assistant, but we converged on using Alexa instead because of the challenges associated with developing a realistic, idealized voice assistant for study settings. Existing tools for changing a human's voice to sound more robotic were inappropriate for our goals because most real-time voice changers were designed for humor. Attempting to type responses fast enough to use text-to-speech technology introduced an unnatural 5-to-10-second delay. Our own tests with Alexa revealed that it already provides sophisticated support for complex task guidance, including quickly answering questions with external information, that we felt we could not rival with a WOZ'd prototype. We therefore decided to explore the challenges associated with modern devices and suggest areas for improvement, as revealed by our observational study.
### Participants
Participants were recruited from an institution-wide graduate student email newsletter at the University of Pennsylvania. We chose to scope recruiting to within our university community because we
\begin{table}
\end{table}
Table 1. Participant overview: each cook’s selected recipe, self-rated cooking skill, and prior use of voice assistants.
would be observing participants at home. This ensured that participants would be within traveling distance and have some personal connection to the research team, which we believed would make the session more comfortable for participant and researcher alike.
Readers of the newsletter were asked to complete a preliminary questionnaire to indicate their interest, background related to cooking, and prior use of voice assistants. We sampled participants according to two criteria: (1) whether they were available during daytime hours, which we anticipated would make the at-home observation more comfortable, and (2) whether they helped us achieve a wide coverage of cooking and technical experience.
The selected sample of participants varied a great deal in cooking skill and familiarity with voice assistants (see Table 1). Since participants would be completing a task while interacting with a sophisticated piece of technology, we selected for diversity in both areas to learn about a fuller range of experiences. On a 5-point Likert scale (where 5 indicated a great amount of skill), five participants reported their cooking skill at a level of 2 or below, five participants reported 4 or above, and two participants reported exactly 3. Participants also used voice assistants with varying levels of frequency, with two using them daily, four weekly, one monthly, and five less than monthly. Some were excited to experiment with voice assistants, with seven rating their excitement at 4 or 5 out of 5 on a Likert scale; four participants were less excited at a 2 or 3.
As a result of the recruiting method, most cooks were graduate students (67% Master's, 25% Ph.D.), with the exception of C5, who was a software engineer. Eight of the twelve participants answered questionnaire items about their demographic information.1 Of these eight, 63% self-identified as female and the rest (37%) as male. Ages ranged between 22 and 30 years old, with a median age of 23.5 years old. 37.5% reported their race as Caucasian/European/White, 37.5% East Asian, 12.5% South Asian, and 12.5% Southeast Asian. Except for one cook who described herself as "intermediate," all respondents described themselves as "proficient" in English.
Footnote 1: The question for race was adapted from the 2022 Computing Research Association Annual Survey. The question for ability status was adapted from the Voluntary Self-Identification of Disability provided by the United States Department of Labor.
### Procedure
Once selected, a participant was asked to complete a few steps to prepare for their cooking session. First, they downloaded the Amazon Alexa app and used it to search for the recipe they wanted to cook. They were required to use the Alexa app because it was the only way to ensure that the recipe they chose would be supported by the device. The participant then shopped for the ingredients before the research team arrived for their session. Participants were inevitably able to see the recipe ahead of time to purchase ingredients, so we asked them to minimize the amount of the recipe they read in advance to reduce the likelihood that they would come to the study with significant prior knowledge.
We met the participant at home at the scheduled time to observe them as they cooked. We briefed them on the study procedures and asked them for their consent to participate. We then set up our equipment. In most cases, we used a fourth-generation Amazon Alexa Echo Dot, which was the newest screen-less Alexa device. Occasionally, the Echo failed to connect to the internet, so we used the Amazon Alexa iOS app on an iPhone 12. Lastly, we set up a camera to record the cooking session and debrief interview.
We started the session with a brief overview of how to use Alexa: navigating to the next step, backtracking to the previous step, and jumping to a specific step. We encouraged them to ask Alexa questions and interact with it however felt natural. The participant then prepared their dish with Alexa. As the participant cooked, the research team observed and asked occasional clarification questions to understand critical incidents. We encouraged everyone to think aloud if they felt comfortable, but most seemed to think aloud only a handful of times during each session.
The participant was asked to complete two remaining activities after finishing their recipe. First, we conducted a semi-structured interview to learn more about their experience during the session, including what was easy and difficult about working with Alexa and their willingness to follow a recipe with a voice assistant again on a 5-point Likert scale. Second, the participant was asked to annotate a printed copy of their recipe, indicating changes to Alexa's audio script that could have improved their experience (see Figure 2). We
Figure 2. Instructions annotated with user suggestions. At the end of each cooking session, cooks were asked to mark up printed copies of the original recipes that they had just prepared with Alexa. Cooks indicated content they wished Alexa had changed for a better audio script, including skipping extra information (see the strikethrough and “TMI” for “too much information,” C3), providing more details (“on each side or total?” C3), splitting long steps into multiple shorter substeps (see the “/” mark, C3), and grouping ingredients into categories (see the ingredients above the line annotated with “BREADED SEASONING,” C4).
reviewed the annotated recipe and their thought process with them. Finally, we debriefed the participant and concluded the session. Participants were compensated with a gift card amounting to the cost of ingredients and an additional $100 USD.
### Analysis
Our study yielded four kinds of data: audio and video recordings, researcher notes, questionnaire data, and annotated paper recipes. We used Rev.com to transcribe audio recordings. Notes, transcripts, and annotated recipes were analyzed with a thematic analysis approach (Cowley et al., 2011, Chapter 5). One author developed a set of codes during an open coding pass, reviewing all of the data. Another author reviewed the codes and all accompanying excerpts. Then, both authors revised the set of codes into a final schema. Codes were grouped into categories roughly corresponding to the 9 challenges in Section 4. The former author applied this schema to the data in an axial coding pass, which was validated by the latter author.
To analyze counts of events (such as the number of navigation requests), transcripts were analyzed once more. The author who had originally defined the set of codes created a code book of event types, along with examples and brief written descriptions of each one. Two authors then applied this code book to every conversational turn in two transcripts. The boundaries of the conversational turns were determined by the transcribers at Rev. After showing high agreement on all codes, one author applied the code book to the remaining transcripts (see Appendix Table 4).
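As an illustration of how such an agreement check can be carried out, the following minimal Python sketch computes Cohen's kappa over per-turn event labels. The labels and the helper function are invented for demonstration purposes; this is not the script used in our analysis.

```python
# Minimal sketch: Cohen's kappa between two coders' per-turn event labels.
# The labels below are hypothetical examples, not data from the study.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two coders on the same turns, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

coder_1 = ["next", "repeat", "question", "next", "jump", "question"]
coder_2 = ["next", "repeat", "question", "next", "jump", "repeat"]
print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # prints 0.78
```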
## 4. Results
This section presents an overview of the participants' interactions with Alexa, followed by 9 key challenges they encountered while being guided through their recipes (see Table 2). To ground our results in the task at hand, we refer to participants as "cooks" with pseudonyms C1-12.2 Quotes from participants are sometimes lightly edited for brevity and clarity.
Footnote 2: Video and audio data from C6’s session are omitted due to technical issues.
### Overview
Cooks followed recipes ranging in familiarity, complexity, length, and cultural origin, adding to the richness of their experiences beyond self-reported cooking skill and frequency of using voice assistants. Most recipes were entrees, with two being baked goods (C2, C8) (see Figure 3). Of the 12 cooks, 6 reported being unfamiliar with their recipe, 1 reported moderate familiarity, and 4 reported being familiar. Cooks followed an average of 8 steps per recipe (\(\sigma\) = 2.9, min = 3, max = 11). Recipes may have included more than this number of steps: as noted in Section 4.2, cooks sometimes ended their sessions too early because they were unaware that there were additional steps that Alexa had not yet read. Cooking sessions ranged from 15 minutes to just over an hour.
Cooks typically interacted with Alexa dozens of times while completing their recipes. While cooks shared similar patterns of navigating through recipes, they diverged in patterns of information-seeking. The most common request was to advance to the next step or ingredient, with cooks requesting to advance an average of 13.6 times per session (\(\sigma\) = 7.8, min = 4, max = 31). Cooks moved backwards or jumped from step to step less often: they requested to backtrack an average of 0.5 times per session (\(\sigma\) = 0.8) and jumped to a specific instruction an average of 5.3 times (\(\sigma\) = 5.0). Cooks frequently asked Alexa to repeat itself--4 times on average--with some asking more often (\(\sigma\) = 5.0). As we discuss in Section 4.3, frequently requesting Alexa to repeat a step usually implied that cooks were feeling overwhelmed by the amount of information they were receiving. Cooks asked many questions on average (\(\mu\) = 7.4) but varied more widely on this than with any other request (\(\sigma\) = 8.3).
Figure 3. Completed dishes. Cooks prepared a variety of dishes of their choice following the guidance of a voice assistant. These dishes varied in complexity: some required interaction with the voice assistant for many steps (i.e., C2’s eggless red velvet cake), while others involved just a few (i.e., C12’s ground beef bulgogi).
Requests of all kinds occurred throughout the session, rather than exclusively at the beginning or end (see the timelines in Figure 4).
Before and after each session, we asked cooks to report on a 5-point Likert scale how likely they would be to use a voice assistant to follow a recipe in the future. Most (7 of 12) cooks responded to this item in the pre-study questionnaire, yielding a median of 4 out of 5 (\(\sigma\) = 0.9). These same cooks reported a half-point drop in willingness after their sessions (median = 3.5, \(\sigma\) = 1.2). When including all cooks who responded at the end of their session (12 of 12), their median willingness dropped a half point more (median = 3, \(\sigma\) = 0.97). Only one cook (C5) reported being more willing to use a voice assistant for recipe guidance after completing the study. These drops may have been influenced by the challenges described in the following sections.
### Missing the Big Picture
When following a recipe with Alexa, cooks often felt they were missing the "big picture." With a conventional written recipe, cooks can skim it beforehand to familiarize themselves with the steps they need to follow and the order they need to be followed in. With Alexa, there was no comparable way to skim. Cooks could listen to the recipe as a whole before beginning to follow it, but very few chose to do so because it is time-consuming. This led to an experience where, as C8 described, "everything was a surprise."
Because of this, one of the most requested features was the ability to get an overview of a recipe. One cook called this the "bigger picture" (C10). Five cooks explicitly mentioned that they would have liked some kind of overview of the contents of a recipe. Most of these cooks envisioned a summary that could be stated at the beginning of the recipe. C9, for instance, sketched out a summary she would have wanted to hear, which included a list of equipment, a distillation of the 11 steps into 3 "major steps" divided by wet
\begin{table}
\begin{tabular}{c l l l} \hline \hline \# & Challenge & Description & Representative Observation \\ \hline
1 & Missing the Big Picture & Lacking awareness of what the recipe entails or what steps remain. & “Alexa maybe could give me the bigger picture in the introduction. ‘_This is basically what we are going to do, and let me guide you through, step by step._” (C10) \\
2 & Information Overload & Too much information is provided by the voice assistant at once. & _C3 requested that this step be repeated twice._ “Step 2. Heat a large skillet over medium-high heat. Add the ground beef, breaking up with a spoon. Cook until browned, about five to seven minutes. Drain off excess grease. When you’re ready, say ‘repeat’ or ‘next step.’” \\
3 & Fragmentation & Information is broken up in a way that makes it difficult to act upon. & “All of the garnishing, [Alexa] told me them in three different steps...So I had to stand there, wait for it to say it, and then be like, ‘Alexa, next step, previous step, previous step.’” (C9) \\
4 & Time Insensitivity & Time-sensitive directions are delivered after they are needed. & “A lot of information that I think is really important is time... like pre-heating the oven...If you’re not preparing [preheating or thawing] that ahead of time, then either you’re gonna be waiting 20 minutes or you’ll just be going ahead with whatever, cold meat or something which will cook slower.” (C1) \\
5 & Missing Details & Useful details are left out of the recipe. & _C11 requested the bolded text be added to the instructions._ “In the same pan, heat the remaining 1 Tbsp olive oil **on medium heat...** Cook until the juices run clear. **If chicken is not consistent thickness, consider cutting into chunks.” \\
6 & Discarded Context & Answers are based on external resources instead of the recipe. & _C2_: Alexa, how much vinegar do I need? \\
7 & Failure to Listen & The voice assistant does not respond to requests or interruptions. & “Alexa, what should I do with the sausage?” _[11 seconds pass while C8 chops onions]_ “...She ignored me.” \\
8 & Uncommunicated Affordances & The voice assistant is not clear about what it can do. & “You cannot expect it to answer any questions you ask. You need to think, ‘Okay, I have this problem, and in what way it can assist me.’” (C7) \\
9 & Limitations of Audio & Desiring visual information or affordances. & “It would also be nice to have a visual image of what the sauce is supposed to look like, or the chicken.” (C4) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Challenges arising from instructions dictated in audio-only format.
and dry ingredients, and a list of preparatory steps to be performed in advance of the recipe. C12 described an overview as consisting of about a sentence per step, and C1 desired the ability to "scroll through the whole recipe" ahead of time.
Particularly long or involved steps could have used overviews of their own. C1, for instance, described how he would have liked a brief description of a step, and the step after, before starting the current step:
If there was a concise summary, like, "Add the cheese, add the pepper, cream the butter again. Next you will be preheating the pan and turning the oven on." A little snapshot of where you're going next. What the next turn is in the directions.
Cooks sometimes wanted a better sense of how far they had progressed in the recipe. C3 described this as understanding "where you are in the context of all the steps." Without a clear sense of progress, some cooks were confused when they eventually reached the end of the recipe. After Alexa narrated the final step and became silent in his session, C10 exclaimed, "So that's it? Alexa, is that the end of the recipe?" C10 was not the only one to experience this confusion: C3 asked for what she called a "The End message" because she "wasn't sure if the last step was the last step." Even more crucially, five of eleven cooks missed the last step without knowing there was more to be heard. Luckily, in these few cases, the final step was either able to be inferred or had little consequence toward completing the recipe, but this information still appeared in the original script and they were not made aware of it.
### Information Overload
When instructions appear in print, cooks have control over how much to read and when. When instructions are delivered by audio, this control is considerably diminished. A voice assistant necessarily makes decisions about how much of the instructions to read at a time, and these decisions may be poorly calibrated to users. Alexa's approach was to read instructions one step at a time. These steps were defined by the authors of the original recipe, so they varied a great deal in their complexity. While many steps were short, simple, and memorable, others called for cooks to perform many disparate actions, making use of many ingredients.
This led a few cooks to explicitly describe their preference to hear simpler instructions. C3 wished for shorter steps, rather than "multiple sentences within a step and having to repeat." C5 similarly desired that Alexa could "[break a longer step] down into steps like the same way they do recipe ingredients." By default, Alexa read the ingredients in pairs. It sometimes segmented long steps into a couple of sentences, but this did not seem to be enough.
Beyond these two cooks, many seemed to struggle with remembering the instructions that were read aloud. One indication that the instructions were too long to remember is that cooks frequently asked Alexa to repeat instructions. Every cook asked Alexa to repeat at least one step. Across all sessions, cooks requested that Alexa repeat a step an average of 4 times per session. Given that recipes had an average of 8 steps, a sizeable portion of requests were repetitions. Repetitions were requested for 26 different steps, with cooks requesting at least two repetitions for 13 steps and at least three for 5 steps. Some of these steps contained quite a few details, like one that C10 asked to have repeated:
Step 7. Put the avocados in a large bowl and gently toss with the tomatoes, lemon juice, shallots, two tablespoons oil, half teaspoon salt, and the reserved herbs. Transfer to a serving bowl.
This instruction refers to seven ingredients (two of them with accompanying measurements), two pieces of equipment (large bowl and serving bowl), and three separate actions (putting, tossing, and transferring). We can understand why, when read all at once, a step
Figure 4. Timelines of cooking sessions with Alexa. Dots represent requests that cooks made to Alexa; colors of dots represent the type of request they made. These timelines reveal a difference between navigation and information-seeking while using a voice assistant: although cooks usually navigated through recipes by requesting the next step, rarely going backwards or jumping directly to a specific step, they differed widely in how often they asked for additional information. C11 in particular asked many questions throughout her session while C5 asked none. 10 events are omitted from the records of C8 and C9 due to transcription issues.
like this requires repetition: it contains many individual details, some of which must be recalled precisely.
Post-hoc analysis of the repeated steps suggests that cooks were more likely to request repetitions for steps that were more complex. We observed a correlation between the number of repetitions and various aspects of complexity of an instruction, including the number of actions a cook was asked to perform, the number of ingredients they needed to use, and the number of words and sentences all in a single step. On average, steps that were repeated had 1.1 additional actions, 1.2 additional ingredients, 10 additional words, and 0.7 additional sentences compared to those that were not repeated.
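The comparison above is a simple descriptive calculation. The sketch below, which uses invented step records rather than our study data, illustrates how such averages can be computed for repeated versus non-repeated steps.

```python
# Minimal sketch of the post-hoc comparison: average complexity measures for
# steps that were repeated versus those that were not. Records are invented.
steps = [
    {"repeated": True,  "actions": 3, "ingredients": 7, "words": 42, "sentences": 3},
    {"repeated": True,  "actions": 2, "ingredients": 4, "words": 35, "sentences": 2},
    {"repeated": False, "actions": 1, "ingredients": 2, "words": 18, "sentences": 1},
    {"repeated": False, "actions": 2, "ingredients": 3, "words": 25, "sentences": 2},
]

def mean(values):
    return sum(values) / len(values)

for metric in ("actions", "ingredients", "words", "sentences"):
    repeated = mean([s[metric] for s in steps if s["repeated"]])
    not_repeated = mean([s[metric] for s in steps if not s["repeated"]])
    print(f"{metric}: repeated steps average {repeated - not_repeated:+.1f} more")
```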
Another indication that the steps were too complex is that cooks explicitly indicated many steps they would have liked to split up in their annotations of printed recipes. Six cooks indicated at least one step that they wished had been further divided. Some of these steps were truly immense, representing many individual actions, like this step that C3 would have liked to split into six component steps (as indicated by "/"):
Pound the pork chops with a meat mallet or a heavy skillet until about 1/4 inch thick; / season with salt and pepper. / Put the flour in a shallow baking dish. / Whisk the eggs, 1/2 teaspoon sesame oil and a pinch each of salt and pepper in a second dish. / Put the panko in a third dish. / Working with 1 chop at a time, coat in the flour and then dip in the egg, shaking off any excess; firmly press both sides in the panko.
A voice assistant cannot always control how recipes are written, so it may need to guide users through instructions like the one above. It can, however, control how it processes information before delivering it. Voice assistants should play an active role in chunking information into steps that are easier for users to follow.
Many sources of information overload came from too many steps being presented at the same time, but Alexa sometimes described a single step in too much detail. Some instructions stated the obvious: C5 indicated that he did not need Alexa to tell him to "place salmon in an ovenproof pan" before baking it, maybe because he could have inferred it from context. In other cases, the steps included tips that the cook felt they did not need. C6, for example, reported that she would have preferred skipping suggestions for washing clams and mussels; C3 wished Alexa had omitted a suggestion to test if a pan of oil had heated up enough by tossing a breadcrumb in and watching it sizzle. Five cooks annotated printed copies of their recipes in a way that suggested information should have been left out. Along with splitting and chunking complex steps, voice assistants can identify extraneous information--which may depend on individual preferences--and omit it from the instructions altogether.
### Fragmentation
Delivering details in the right place is especially crucial when a recipe cannot be read but only heard. In our study, information was often fragmented across the recipe. A cook reading a recipe can search for information across the page at their own pace. A cook listening to one is dependent on the voice assistant to do the same.
Information about ingredients was particularly fragmented in our study: the amount of each ingredient to use was often present in the ingredients list but missing in the step that used it. In one case, C4's recipe called for "1 tablespoon butter," "1 tablespoon garlic," and "1 tablespoon ginger," but the step that combined them simply said to "melt the butter and add the garlic and ginger." This was already the sixth step of the recipe, so C4 had probably forgotten the quantities of the ingredients--in fact, she asked Alexa to tell her some of them at this point. In total, seven of eleven cooks explicitly asked for the amount of an ingredient at some point while cooking.
Recipes are often written in this style, perhaps to save room on a page, but details are lost when voice assistants directly read the instructions without redistributing key information. This also leads to a variety of challenges because cooks may follow the same instructions in different ways: some portioned out the ingredients as they heard them in the beginning, while others completely skipped the ingredients list because they assumed, incorrectly, that Alexa would give them all the information they needed later.
Without awareness of steps to come, ambiguity about the number of ingredients sometimes led to deviations from the recipe. C9's recipe called for onions in two different steps. Unaware of this fact, she used all of her onions in the first step that called for them. When she arrived at the next step, she had no more onions to use.
Beyond providing all relevant details when they are needed, cooks also felt that some instructions could be combined to make them easier to perform. C9's recipe asked her to prepare the pan and fry an onion, then add cashew nuts and raisins, and finally add whole clove, cardamom pods, bay leaf, and cinnamon stick. She saw these three consecutive steps in her recipe as substeps of the same action, so she wished that they had been described together. In another session, C4 wished that ingredients in her recipe were grouped by the part of the dish they were used to make (i.e., seasoning, sauce, fried chicken, and toppings). Voice assistants will need to act as users' eyes when reading instructions for them, picking up information that has been fragmented across the page and presenting it together at the right time.
### Time Insensitivity
Recipes are often full of time-sensitive information that, if not properly anticipated, leads to problems with completing them correctly and on time. With written recipes, cooks commonly scan the text to discover and plan for such time-sensitive steps. However, cooks in eyes-free settings do not have the same luxury. Some cooks in our study found themselves in awkward situations where they could have benefited from information that was delivered too late.
In some cases, cooks ended up wasting time that was supposed to be used to complete multiple tasks in parallel. Alexa told C2, for instance, to "cool the cakes in the pan for fifteen minutes and then turn the cakes on a rack to cool completely." About an hour later, after waiting for the cakes to cool completely as instructed, C2 asked for the next step: "While the cakes cool, make the buttercream." C2 was supposed to start the buttercream far before she had, but she had no idea without Alexa warning her or helping her preview the next step. Similarly, C3 could have started preparing a side salad while frying her pork, but she waited to finish the current step before asking for the next.
Some cooks anticipated the need to prepare for later steps but still struggled to find time-sensitive information. C1 and C8, for example, realized early on in their sessions that they would need
to preheat the oven. When asking Alexa directly for the proper settings did not work, they settled for repeatedly asking for the next step until they found the information. Voice assistants can help users anticipate time-sensitive steps by surfacing them to the beginning when remixing instructions.
### Missing Details
In addition to providing details too often or in unhelpful places, Alexa sometimes excluded information that would have helped cooks. In particular, Alexa often excluded parenthetical information from the original written recipes. One recipe read, "In a medium bowl, whisk together All-Purpose Flour (1 1/2 cups)," but Alexa omitted "(1 1/2 cups)" when reading the "script" aloud. Alexa already seems to "rewrite the script" in some ways, but this particular approach increased fragmentation. Every cook who experienced these omissions wished Alexa had not left this information out.
Other times, Alexa excluded details about how ingredients should be prepared even though the ingredients list often included this information. For example, when one ingredients list called for "1 Tbsp Fresh Ginger, _crushed_," Alexa left out "crushed." This is doubly problematic when this information does not appear anywhere else in the recipe, like in C9's case. C9's written recipe for egg biryani called for "5 eggs, boiled," but Alexa excluded the word "boiled" when reading aloud the ingredients list. The rest of the recipe never stated to boil the eggs, either. C9 knew ahead of time to boil the eggs because she had glanced at the ingredients list online while grocery shopping: Alexa failed on all counts to inform her of the proper preparation for the eggs. Another user relying completely on Alexa may have cracked the eggs in raw or overcooked the rest of their dish while waiting to boil the eggs in the middle of the recipe, especially if they were less familiar with egg biryani.
In extreme cases, omitting parts of the original recipe led to a mistake that could not be reversed. C11, for instance, was preparing a two-part dish consisting of seasoned chicken and a yogurt sauce. Alexa told her to pour the sauce over the chicken as part of the last step but skipped an author's note at the bottom of the page that suggests storing the chicken and sauce separately if saving the dish for later. C11 was dismayed when she discovered the note while annotating the printed copy at the end of the observation session. The serving size of the dish had been much larger than expected--another case of missing the bigger picture--and she had wanted to save the leftovers for another time. Two other written recipes had similar notes at the end, which Alexa did not read aloud. All cooks who discovered omitted author's notes on paper recipes expressed that they would have liked to hear them while cooking.
Cooks also noted that they would have benefited from a voice assistant adding some details beyond the original recipe. This would be especially helpful for cooks who are unfamiliar with a dish or less experienced with cooking. C11 was not familiar with the Lebanese chicken fatteh recipe she had chosen. After cooking, she annotated her recipe with additional details she would have appreciated Alexa adding while guiding her (see **boldface** text):
In the same pan, heat the remaining 1 Tbsp olive oil **on medium heat**; add the chicken breast to the pan and season with the garlic powder, coriander, thyme, paprika, and salt and pepper to taste. Cook until the juices run clear. **If chicken is not consistent thickness, consider cutting into chunks.**
These clarifications help make assumptions about how to perform tasks more explicit. Three other cooks (C3, C11, C12) annotated their recipes with similar clarifications, including whether rice should be "al dente" or fully cooked and whether one or both sides of a pork chop should be cooked for the stated amount of time. Voice assistants with advanced general knowledge and commonsense reasoning skills can do more than just read instructions aloud: they can make them even more informative at the same time.
Cooks frequently asked questions to uncover these hidden details, including which type of grater to use for carrots (C3), whether a regular skillet could be used instead of nonstick (C3), how to achieve a certain consistency with sauce (C4), what a reduction was (C7), what deglazing a pan meant (C8), how long it takes to cook rice _al dente_ (C9), and what a saucepan was (C11). These questions represent a wide range in cooking experience, which affects whether including certain information clears up the big picture or overloads users with information. Providing the right information adapted for each user and answering questions well while delivering the recipe could make working with voice assistants even more powerful than following instructions alone.
### Failure to Listen
When cooks tried to interact with Alexa, it often failed to respond. Surprisingly, Alexa often failed to respond even when cooks addressed it in the recommended way: by prefixing their requests with the wake word, "Alexa." In fact, it failed to respond to six cooks who addressed it with the wake word, and five cooks faced this issue at least three times. This failure to respond was a stumbling block for conversation, with cooks waiting an average of 4.6 seconds (min = 2s, max = 9s) before trying the same request again. This led one cook to suggest that Alexa did not "hear properly" (C8).
Cooks faced even more trouble when addressing Alexa without the wake word. This may seem like a user error, but the real issue may be that Alexa is not well tuned to the way cooks naturally want to address it during longer-form, multi-turn interactions. The vast majority--eight of eleven--of cooks addressed Alexa without the wake word at least once. These failed requests caused a delay as well, with cooks waiting an average of 3.5 seconds (min = 1s, max = 5s, outlier = 28s) before repeating the request with the wake word. Cooks may have been particularly confused because Alexa actually did not always require the wake word at all. Rather, after dictating a step, it would "listen" for follow-up requests for a few seconds before turning the microphone off. During this period, users can interact with Alexa without the wake word, but it was not always obvious that Alexa was ready for new requests. C8 even remarked, "I don't know where or when [to use the wake word] so I just call her name every time." Alexa did light up while listening, but visual cues may be invisible or unclear during eyes-free interaction.
Alexa failed to respond to a wide variety of requests, including continue, repeat, next, start over, and answer a question, many of which were basic navigation requests that were unambiguously intended for the voice assistant. Features for interactions without wake words, like "conversation mode" on the new Amazon Alexa Echo Show 10, may help reduce the friction of following instructions
with a voice assistant if it can clearly indicate when it is listening and successfully respond when prompted.
Another source of friction in cooks' conversations with Alexa was its failure to respond to interruptions. Overall, cooks seemed hesitant to interrupt Alexa, even during a very long step. The few times a cook did try to interrupt it, Alexa did not always respond. A small minority of three cooks tried to interrupt Alexa at all during their sessions, most often to stop it in the middle of providing an obviously irrelevant answer to a question they had asked (see Section 4.8). This was particularly frustrating when Alexa's answers were verbose. In the words of C11, "When I asked her a question and she just, like, launched into a thing, I felt like there was no way to get her to stop. Maybe I'm just not loud enough, or maybe I didn't try, but [Alexa's answer] was just very, very long. And I was like, 'All right, I just have to ride this out.'"
### Discarded Context
Perplexingly, Alexa seemed to forget about the recipe when cooks asked it a clarification question. It seemed to answer questions by searching for external references instead, like in this exchange:
**C3:** Alexa, how much flour?
**Alexa:** From cooksinfo.com, if you are making bread in North America, some suggest three parts all-purpose flour to one part cake flour...
This exchange is representative of a pattern we observed for five cooks (twelve times total), in which a cook asked a question that they wished to have answered from the recipe, but Alexa answered by referring to an external reference (i.e., "From cooksinfo.com...", "According to an Alexa Answers contributor...").
**C1:** Alexa, how long do I need to bake the cake?
**Alexa:** It is recommended to bake cake in the oven for 30 minutes to 35 minutes at 350 degrees Fahrenheit and to rest for 10 minutes. Bake until cake is golden. Test the cake by inserting a toothpick. If the toothpick comes out clean, your cake is done.
This exchange clearly deviates from the original recipe. It calls for the oven to be heated to 350F (rather than 325F, as instructed by the original recipe), recommends cooking for 30-35 minutes (rather than 35-40 minutes), and suggests baking until the cake is golden (the participant was baking a red velvet cake, which would not have turned golden no matter how long it was cooked). Neglecting to indicate that it is deviating from the recipe is especially risky when the response sounds plausible. Cooks who experienced this issue were asking about a variety of details from the original recipe, including quantities of ingredients, types of ingredients, substitutions, baking temperature, seasoning options, and planning (i.e., when to start preheating an oven). Future voice assistants could default to answering questions by extracting information from the source recipe before turning to external sources.
Regardless of context, Alexa struggled to answer questions in general. Of the 71 questions asked across sessions, two thirds received answers that we believe were obviously unsuitable. Along with mistakenly turning to external resources, Alexa sometimes misunderstood the intent of a cook's question, like when it attempted to set a reminder when C10 asked it to "remind" him of how many tomatoes he needed. Greater awareness of the kinds of questions users tend to ask during the task completion process can help voice assistants answer them more helpfully.
### Uncommunicated Affordances
Even though best practices in human-AI interaction recommend that AI-infused interfaces be clear about what they can do and how well they can do it (Bogman et al., 2017), Alexa did not seem designed to make its affordances for following recipes easily understood. Cooks learned that Alexa, in contrast to a human partner, required a certain way of making requests and asking for help. C7 initially communicated with Alexa as though it were a "real person" but ended up with a much more restricted perception of Alexa's capabilities later on:
I have to know how it processes my information, like, to talk with it as it can understand... Sometimes, instead of asking direct questions, I may ask it to repeat the instructions and figure it out myself.
Despite Alexa's efforts to communicate its numerous affordances for helping people follow step-by-step instructions, cooks were still unaware of many of them. This may have happened because Alexa did not inform the cooks of its relevant affordances at the right time. Furthermore, communicating affordances in an eyes-free setting is not as simple as ambiently displaying them on a screen for users to discover on their own. When cooks were unaware of an affordance, they usually worked around it instead of experimenting through trial and error or trying to find it through documentation. In one instance, C4 did not try to ask Alexa questions about the recipe that required external information because she "thought that Alexa could only tell [her] what was in the recipe." The discoverability of affordances on audio-first interfaces may rely on efficiently informing users right when they need them.
Another affordance that was not communicated well to users was Alexa's behavior when reading ingredients. Rather than reading them all at once, Alexa read them in pairs with pauses in between. It advanced to the next pair of ingredients when requested. Two cooks were unaware of this behavior, which led to two different issues. C11 skipped the ingredients list altogether, anticipating that Alexa would read "a whole long list" without pauses. This made matters more difficult for her because she had to collect ingredients later as they were mentioned in the recipe. C9, in contrast, tried to listen to the ingredients list. When Alexa paused after the first two ingredients, she seemed surprised and asked, "Is that all the ingredients it's gonna give me?"
Complicating matters, Alexa gave mixed signals about the availability of affordances, perhaps because of speech recognition issues. For instance, C8 asked Alexa for the instructions at the beginning of a recipe. Alexa responded with the statement, "That command is not supported right now," even though it does in fact have this ability (and several other cooks used it successfully after a similar request). This cook understandably responded by asking, "Oh no, do I have to start again?" and then searched through Alexa's recipe library for the same recipe to start over instead of retrying the request. Providing clearer error messages and suggestions for working through them could have helped C8 recover from this error,
as recommended in prior research (Jessica and Lon Binder on Flickr, 2017). Altogether, these cooks' experiences revealed that affordances will likely need to be more explicitly communicated by future voice assistants.
### Limitations of Audio
We purposefully used an audio-only device for our study because we wanted to learn more about how voice assistants of all kinds can better communicate with their users. Although audio-only guidance has shown great promise in our observations, cooks sometimes wanted visual information as well.
Cooks wanted visual information to help them assess if they had achieved the intended outcome of a step, like the proper consistency of cake batter (C2) or the doneness of fried chicken (C4). Cooks also described a number of questions that could be answered with visual information, like what size equipment to use (C10), how to execute a technique in a recipe (C8, C12), how finely to chop an ingredient (C2), or the proportions of different ingredients in a mixture (C7). Some voice assistants can deliver this visual information through images or videos, but visual information does not necessarily need to be provided through visual output modalities. Verbally describing visual elements--like cooking chicken until it is no longer pink--can help voice assistants of all kinds communicate more effectively.
Beyond information to help visualize their tasks, cooks sometimes wished they had the ability to skim through their recipes. Skimming written recipes would have been useful to plan for upcoming steps (C11) or quickly recall details scattered throughout the instructions (C1, C4). Providing similarly efficient ways of "skimming" through audio-first content is not as obvious as delivering more content at a faster pace, at the risk of information overload. Displaying the instructions on a screen for users to scroll through should not be the final solution either, at least for eyes- or hands-free settings. Cooks who wanted to skim through recipes usually verbalized this as _reading_, but the core of their request may be quickly absorbing information in some way, not necessarily using their eyes to do it. Out of all the challenges we discuss, this limitation of audio may require the most creativity to address.
## 5. Discussion
In this section, we propose eight ways in which voice assistants can "rewrite the script" to transform written sources into more usable voice-based instructions. We conclude by considering the future role of voice assistants and relevant advances in natural language processing research for complex task guidance.
### Voice Assistants as Rewriters of Scripts
We propose eight key capabilities that a "rewriter of scripts" should have, which are grounded in our observational study (Sec. 4). We believe these capabilities are especially suited to the current era
Figure 5. A sample transformation after “rewriting the script.” Among other changes, a set of instructions should be _split_ into easier-to-follow chunks; information should be _redistributed_, with details appearing where they would be most useful; and the voice assistant should _summarize_ and _signpost_ to help users understand where they are in a procedure. An effective voice assistant for providing instructions will have to perform all of these tasks in a coordinated way to effectively provide task support. Example from Tuscan Butter Salmon recipe (Salamon et al., 2017). Photo of spinach by Jessica and Lon Binder on Flickr (2017).
of computing given the recent advances in natural language processing research (see Table 3). We offer a concrete vision of what rewriting the script might look like in Figure 5, which includes:
**Summarize.** Because listening does not currently afford skimming as easily as reading, voice assistants should help users familiarize themselves with the instructions by providing overviews at different levels (Sec. 4.2). Summarizing instructions as a whole, and particularly complex steps within them, would help users develop a sense of what the instructions entail and how to prepare for upcoming tasks. Furthermore, advanced users could use these summaries instead of the original steps if they do not need detailed guidance.
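One very rough way to prototype such an overview is sketched below: keep only the first clause of each step. The steps shown are illustrative, and a practical system would instead rely on the summarization models listed in Table 3.

```python
# Naive sketch of the "Summarize" capability: one short phrase per step,
# taken as the first clause. Steps are illustrative, not from the study.
def overview(steps):
    phrases = []
    for step in steps:
        first_clause = step.split(".")[0].split(";")[0].split(",")[0]
        phrases.append(first_clause.strip())
    return f"This recipe has {len(steps)} steps: " + "; ".join(phrases) + "."

steps = [
    "Preheat the oven to 325 degrees F, then grease two cake pans.",
    "Whisk together the flour and sugar in a large bowl.",
    "Bake for 35 to 40 minutes; cool the cakes in the pan for 15 minutes.",
]
print(overview(steps))
```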
**Signpost.** Voice assistants can also provide more direct guidance by signposting. Contrary to a summary, a signpost tells the user where specific information is or what commands they can use. Telling a user, for instance, that they are on "step 2 of 12" as opposed to just "step 2" can help them keep track of their progress within the big picture (Sec. 4.2). Alerting users of time-sensitive steps like preheating an oven can help them anticipate actions that need to be executed in parallel (Sec. 4.5). Finally, simply telling users what they can say to their voice assistant would go a long way in communicating affordances (Sec. 4.9).
**Split.** To avoid burdening users with information overload, voice assistants can reduce the amount of information in each step of the procedure. Our study implied that simpler steps--containing fewer actions, materials, words, and sentences--were less likely to be repeated by users (Sec. 4.3). As a preliminary rule of thumb, we suggest splitting complex steps so that each step contains one main action. Additional actions within the same step should be small or tightly related to it. Voice assistants may need to insert more pauses when an instruction contains many parts, such as many different materials or additional implied substeps.
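As a rough illustration, the sketch below breaks a long written step at sentence and clause boundaries. It is a deliberately naive heuristic rather than a proposed implementation; the NLP techniques in Table 3 would be needed for robust splitting.

```python
# Naive sketch of the "Split" capability: break a complex step into shorter
# substeps at sentence and clause boundaries. Example text is illustrative.
import re

def split_step(step_text):
    parts = re.split(r"(?<=[.;])\s+", step_text.strip())
    return [p.strip(" ;") for p in parts if p.strip(" ;")]

step = ("Pound the pork chops with a meat mallet until about 1/4 inch thick; "
        "season with salt and pepper. Put the flour in a shallow baking dish.")
for i, substep in enumerate(split_step(step), start=1):
    print(f"Substep {i}: {substep}")
```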
**Elaborate.** Sometimes, voice assistants need to elaborate on small details. Voice assistants should also ensure that they do not omit important information (like crucial details in parentheses; Sec. 4.6). Some cooks in our study appreciated when implicit details were made explicit, like that tomatoes should be cooked until they are _beginning to burst_. Voice assistants should anticipate when additional details would benefit particular users, preferences, or levels of experience and provide them while delivering the instructions.
**Volunteer.** Cooks in our study sometimes implied that they would appreciate more proactive voice assistants as opposed to strictly reactive ones. Proactively volunteering information can help users anticipate the currently uncommunicated affordances of voice assistants (Sec. 4.9). Voice assistants can continue offering information about their affordances directly after a relevant interaction--i.e., telling a user they can say "repeat" to hear the current step again--and dive deeper into the content of the instructions--i.e., volunteering to set a timer or elaborate on an obscure technique.
**Reorder.** Order matters in instructions. It is especially important for time-sensitive tasks (Sec. 4.5). Instructions that depend on each other should be detected and stated far enough in advance that users can act upon them before it is too late. This may require splitting steps into multiple substeps or even alerting the user well before they begin the main part of the instructions--so they can thaw frozen ingredients before cooking, for example.
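A simple keyword heuristic, sketched below with illustrative steps and keywords, hints at how time-sensitive instructions could be detected and surfaced early; a deployed assistant would need the event and duration reasoning referenced in Table 3.

```python
# Naive sketch of the "Reorder" capability: flag steps that should be started
# early. Keywords and steps are illustrative, not an exhaustive list.
TIME_SENSITIVE_KEYWORDS = ("preheat", "marinate", "chill", "thaw", "cool", "rest")

def find_time_sensitive(steps):
    hits = []
    for i, step in enumerate(steps, start=1):
        if any(keyword in step.lower() for keyword in TIME_SENSITIVE_KEYWORDS):
            hits.append((i, step))
    return hits

steps = [
    "Thaw the salmon in the refrigerator overnight.",
    "Preheat the oven to 400 degrees F.",
    "Season the salmon with salt and pepper.",
]
for index, step in find_time_sensitive(steps):
    print(f"Heads up: step {index} is time-sensitive: {step}")
```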
**Redistribute.** When information is fragmented across a written procedure, voice assistants should group it back together. Information in our study was particularly fragmented across the ingredients list and main instructions (Sec. 4.4). The ingredients list often contained crucial information about the amount and preparation of an ingredient ("3 cloves garlic, minced") without repeating it when it was needed ("When butter has melted, stir in garlic and cook until fragrant, about 1 minute").3 Redistributing this information, even if that means repeating it, would help users access information in a modality that is hard to search through.
Footnote 3: Examples from Tuscan Butter Salmon recipe (Tuscan Butter Salmon, 2018).
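The sketch below illustrates this idea on the garlic example above: quantity and preparation details from the ingredients list are re-attached to the step that mentions the ingredient. The matching is intentionally naive and the snippets are illustrative.

```python
# Naive sketch of the "Redistribute" capability: re-attach ingredient-list
# details to steps that mention the ingredient by name only. Snippets are
# illustrative; robust matching would need the NLP techniques in Table 3.
ingredients = {
    "garlic": "3 cloves garlic, minced",
    "butter": "2 Tbsp butter",
}

def redistribute(step, ingredients):
    enriched = step
    for name, full_entry in ingredients.items():
        if name in enriched.lower():
            # Case-sensitive, first-occurrence replacement keeps this simple.
            enriched = enriched.replace(name, f"{name} ({full_entry})", 1)
    return enriched

step = "When butter has melted, stir in garlic and cook until fragrant."
print(redistribute(step, ingredients))
```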
**Visualize.** Visual information can help offset some of the limitations of audio (Sec. 4.10). Voice assistants can provide this information in two ways. First and foremost, voice assistants should verbalize visual representations, such as by suggesting that the user "cut potatoes into slices as thick as a pencil." Multi-modal voice assistants can display an analogous visual on their screens after generating or querying for it. These multi-modal assistants should still take care to verbalize visual information because the screen is meant to complement voice interaction, not replace it.
### Limitations
Our conclusions are limited in several ways. First, the challenges we identified may not represent the full range of experiences of a broader population. The participants in our study were primarily college-educated, English-speaking young adults who were likely already aware of voice assistant technology. Second, our findings may not apply to all voice assistants since we used one type of device. Finally, the challenges associated with recipes may manifest differently in different types of instructions. Recipes tend to include many actions and materials (i.e., ingredients) in a single step, so our study may overrepresent issues of information overload. Examples of fragmentation related to ingredients lists are likely unique to recipes as well. Furthermore, recipes have lower stakes compared to safety-critical procedures like driving and surgery. Because taste is subjective, the outcome of a recipe is also more flexible, unlike building furniture or submitting legal documents.
### Future Work
Our conclusions suggest directions in which the fields of human-computer interaction (HCI) and natural language processing (NLP) can together provide more effective guidance for complex tasks.
Within HCI, additional studies can further clarify what it means for a voice assistant to effectively rewrite the script by replicating our _in situ_ methods with other voice assistants and types of instructions. These studies should take care to include participants who represent a greater range of ability status, language proficiency, cultural origin, and age. Wizard-of-Oz studies would be especially informative for testing aspirational variants of voice assistants that can execute our suggestions as well. We also recognize that voice interaction can go beyond voice itself. Future studies can clarify the role of external displays and augmented reality in showing effective visuals at the right times to complement audio-first instructions.
In a cyclical fashion, our findings resonate with and can further inspire research efforts within NLP. Many of the goals we describe
in Section 5.1 can already be achieved by leveraging current advances in well known NLP tasks, especially task-oriented dialogue, summarization, event reasoning, commonsense reasoning, question generation, and text-to-image generation. To help unite the two fields, we summarize relevant NLP research in Table 3. Our observational study method is effective for more than just identifying user needs: it can be a robust, user-centered way of evaluating NLP contributions. Bringing many of these techniques together into a single system capable of producing coherent, easy-to-follow text can help voice assistants develop to maturity.
### Futures with Voice Assistants that Rewrite the Script
Our work explores the design of voice assistants that guide users through complex tasks, even when the tasks are unfamiliar. Many solutions we propose for the challenges revealed by our observational study are already possible with current progress in natural language processing (NLP), as we discuss in Section 5.3. In this section, we consider what it would mean for voice assistants to be able to guide users through complex tasks as fluidly as we imagine.
In a future filled with voice assistants that are skilled at complex task guidance, we may fear that people's ability to learn new procedures will become diminished. As Eiriksdottir and Catrambone (Eiriksdottir and Catrambone, 2018) describe in their review of research on instruction design, concrete instructions that are easy to follow right away often lose their potency in transferring to new tasks. The NLP community has been grappling with a similar fear fueled by the recent release of ChatGPT, an immensely powerful language model (Kolmogorov, 2018). One concerned researcher wrote that ChatGPT is a "plague upon education" and a "threat to human intelligence and academic authority" because of its ability to automate many writing tasks. Duckworth and Ungar (2018), on the other hand, argue that ChatGPT has the power to "accelerate the trend toward valuing critical thinking" because users need to carefully evaluate its output. In our view, voice assistants that make procedures easy to follow may remove the incentive to internalize those procedures, but they also raise the baseline of the procedures we are able to learn at all.
| # | HCI Goal | Description | Related NLP Tasks | Selected Research |
| --- | --- | --- | --- | --- |
|  | Rewrite | Adapt written instructions into a form more easily consumed over audio. | Task-Oriented Dialogue, Text Simplification, Style Transfer | Budzianowski et al. (2018); Reif et al. (2018); Wu et al. (2018); Zhang et al. (2018) |
| 1 | Summarize | Provide overviews of entire procedures and complex steps. | Summarization (especially for procedural text) | Gao et al. (2018); Zhong et al. (2018) |
| 2 | Signpost | Convey a user's progress and how to navigate to desired information. | Information Extraction, Event Reasoning | Dalvi et al. (2018) |
| 3 | Split | Segment complex steps into easy-to-follow substeps. | Procedural Text, Event Reasoning | Kim et al. (2018); Lyu et al. (2018); Zhang et al. (2018); Zhou et al. (2018) |
| 4 | Elaborate | Anticipate details the user wants without requiring them to ask. | Information Extraction, Commonsense Reasoning | Druck and Pang (2018); Zhang et al. (2018) |
| 5 | Volunteer | Proactively tell the user what affordances are available. | Question Generation | Tu et al. (2018) |
| 6 | Reorder | Move time-sensitive steps to the point in instructions where users should begin to follow them. | Event Reasoning, Event Duration Prediction, Goal-Step Reasoning, Temporal Ordering | Kiddon et al. (2018); Zhang et al. (2018) |
| 7 | Redistribute | Repeat information that was fragmented in the written instructions whenever it is needed over audio. | Hierarchical Event Reasoning, Semantic Similarity, Relational Knowledge | Chandrasekaran and Mago (2018); Speer et al. (2018) |
| 8 | Visualize | Describe or show visual information to clarify techniques, materials, and intended results. | Visual Goal-Step Inference, Text-to-Image Generation | Ramesh et al. (2018); Rombach et al. (2018); Yang et al. (2018) |
Table 3. HCI goals and relevant work from NLP. We reference relevant work in Natural Language Processing that can help the Human-Computer Interaction research community achieve the 8 goals we describe in Section 5.1.
We may also fear that the advancement of voice assistant technology threatens the social benefits of instruction-following. We often learn procedures by following the guidance of other people, whether we are cooking new recipes (Kraemer et al., 2017; Wang et al., 2018), administering CPR (Bahdan et al., 2019), or tackling any number of other tasks. We also value exchanging additional insight and building relationships beyond the procedure itself. In today's digital age, instructions have become more diverse and accessible than ever, but they have also become less personal now that we have the option of going online instead of the necessity of seeking out experts in person.
Like other digital resources, voice assistants can add to diversity and accessibility, without necessarily detracting from human life and relationships. We see the future role of voice assistants as increasing access to information rather than replacing human guidance. Whether they are guiding us quickly through complex instructions or leaving out details to help us practice procedural knowledge (e.g., (Kraemer et al., 2017)), voice assistants can be designed for both learning and executing at the same time. Working with a voice assistant does not have to be a solitary activity, either: voice assistants can help us collaborate with each other (e.g., (Kraemer et al., 2017)). No matter how well they rewrite the script, voice assistants are still _assistants_, and we have the power to choose how they assist us.
## 6. Conclusion
In this paper, we studied how voice assistants should be designed to guide users through complex instructions. Focusing on recipes as an example, we observed 12 people as they cooked at home while being guided by Amazon Alexa. This led us to nine key challenges that users face when modern voice assistant technology for complex task guidance falls short. Many challenges, like information overload, fragmentation, and time-insensitivity, arose from voice assistants reciting written recipes as though they were scripts. We propose eight ways for voice assistants to "rewrite the script" into a form that is easier to follow in hands- and eyes-free settings. Rewriting the script is crucial for any intelligent agent that communicates through spoken conversation, even devices that incorporate visual output. Future voice assistants can solve these problems by bringing together insights from human-computer interaction and natural language processing research, one step at a time.
###### Acknowledgements.
We would like to thank the participants of our official and pilot studies. We are especially grateful to Liam Dugan for his invaluable suggestions throughout our work, Daphne Ippolito and Artemis Panagopoulou for their insight, and Hita Kambhamettu for her assistance on Figure 4 on short notice. Finally, we thank the anonymous reviewers for their feedback. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1845298.
|
2310.14943 | Gradient Bounds and Liouville theorems for Quasi-linear equations on
compact Manifolds with nonnegative Ricci curvature | In this work we establish a gradient bound and Liouville-type theorems for
solutions to Quasi-linear elliptic equations on compact Riemannian Manifolds
with nonnegative Ricci curvature. Also, we provide a local splitting theorem
when the inequality in the gradient bound becomes equality at some point.
Moreover, we prove a Harnack-type inequality and an ABP estimate for the
gradient of solutions in domains contained in the manifold. | Dimitrios Gazoulis, George Zacharopoulos | 2023-10-23T13:41:31Z | http://arxiv.org/abs/2310.14943v2 | Gradient Bounds and Liouville theorems for Quasi-linear equations on compact Manifolds with nonnegative Ricci curvature
###### Abstract
In this work we establish a gradient bound and Liouville-type theorems for solutions to Quasi-linear elliptic equations on compact Riemannian Manifolds with nonnegative Ricci curvature. Also, we provide a local splitting theorem when the inequality in the gradient bound becomes equality at some point. Moreover, we prove a Harnack-type inequality and an ABP estimate for the gradient of solutions in domains contained in the manifold.
## 1 Introduction
Let \((\mathcal{M},g)\) be a smooth Riemannian manifold. Throughout this paper we shall assume that \((\mathcal{M},g)\) or simply \(\mathcal{M}\) is a compact, connected, smooth and boundaryless Riemannian manifold of dimension \(n\geq 2\) with nonnegative Ricci curvature. We will indicate otherwise if some of these assumptions are dropped. Also we denote by \(\nabla\) the Levi-Civita connection with respect to the Riemannian metric \(g\).
Consider the equation
\[\begin{split} div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{ \prime}(u)\\ \text{where}\,\,\,u\,:\mathcal{M}\to\mathbb{R}\,\,\,\text{and}\, \,F\,,\,\Phi\,:\mathbb{R}\to\mathbb{R}\end{split} \tag{1}\]
This equation arises as the Euler-Lagrange equation of the energy functional
\[J(u)=\int_{\mathcal{M}}\left(\frac{1}{2}\Phi(|\nabla u|^{2})+F(u)\right)d\mu_{g} \tag{2}\]
In this paper we apply the "\(P-\)function technique" to obtain a pointwise gradient estimate for the equation (1). The method of proof is based on Maximum Principles and it has been introduced in [14, 15, 18]. In addition, we establish Liouville-type theorems, one of which generalizes the constancy of bounded harmonic functions for Quasi-linear equations when a stability condition is assumed. Other applications are Harnack-type estimates and Alexandrov-Bekelman-Pucci type estimates for the gradient of solutions. In the last section, we also determine a local splitting result as an extension of [8] for Quasi-linear equations.
The idea of obtaining gradient bounds via the Maximum Principle turned out to be very effective and it found several applications in many topics including Riemannian geometry. To be more precise, relevant works can be found in [7, 8, 9, 10, 12, 17] to cite a few. Furthermore, a novel approach to the Maximum Principle method has been recently exploited in a very successful way in [1, 2, 3], in order to obtain oscillation and modulus of continuity estimates.
Let now
\[P(u;x)=2\Phi^{\prime}(|\nabla u(x)|^{2})|\nabla u(x)|^{2}-\Phi(|\nabla u(x)|^{2})- 2F(u(x))\ \ \,\ x\in{\cal M}. \tag{3}\]
The above quantity is the related \(P-\)function of equation (1). An abstract definition of \(P-\)functions can be found in [11] and is similarly formulated for Riemannian Manifolds.
We assume that \(\Phi\in C^{3}([0,+\infty))\), \(F\geq 0\) and \(\Phi(0)=0\) and we define
\[a_{ij}(\sigma):=2\Phi^{\prime\prime}(|\sigma|^{2})\sigma_{i}\sigma_{j}+\Phi^{ \prime}(|\sigma|^{2})\delta_{ij} \tag{4}\]
and we suppose that one of the following conditions is satisfied:
**Assumption (A)** There exist \(p>1\), \(a\geq 0\) and \(c_{1},c_{2}>0\) such that for any \(\sigma,\ \xi\in\mathbb{R}^{n}\setminus\{0\}\),
\[c_{1}(a+|\sigma|)^{p-2}\leq\Phi^{\prime}(|\sigma|^{2})\leq c_{2}(a+|\sigma|)^{ p-2} \tag{5}\]
and
\[c_{1}(a+|\sigma|)^{p-2}|\xi|^{2}\leq\sum_{i,j=1}^{n}a_{ij}(\sigma)\xi_{i}\xi_ {j}\leq c_{2}(a+|\sigma|)^{p-2}|\xi|^{2} \tag{6}\]
**Assumption (B)** There exist \(c_{1},\,c_{2}>0\) such that for any \(\sigma\in\mathbb{R}^{n}\)
\[c_{1}(1+|\sigma|)^{-1}\leq\Phi^{\prime}(|\sigma|^{2})\leq c_{2}(1+|\sigma|)^{-1} \tag{7}\]
and
\[c_{1}(1+|\sigma|)^{-1}|\xi^{\prime}|^{2}\leq\sum_{i,j=1}^{n}a_{ij}(\sigma)\xi_ {i}\xi_{j}\leq c_{2}(1+|\sigma|)^{-1}|\xi^{\prime}|^{2} \tag{8}\]
for any \(\xi^{\prime}=(\xi,\xi_{n+1})\in\mathbb{R}^{n+1}\) which is orthogonal to \((-\sigma,1)\in\mathbb{R}^{n+1}\).
The above assumptions (A) and (B) are classical; they agree, for instance, with the ones of [6]. Examples of functionals satisfying the above conditions are the Allen-Cahn equation, the \(p-\)Laplacian (with \(p>1\)) and the mean curvature operator, which correspond to the cases
\[\begin{array}{c}(i)\ \Phi(t)=t\ \,,\ \Delta u=F^{\prime}(u)\\ \mbox{where}\ \,\Delta u=\frac{1}{\sqrt{det(g_{ij})}}\partial_{k}\left( \sqrt{det(g_{ij})}g^{kl}\partial_{l}u\right),\end{array} \tag{9}\]
\[\begin{array}{c}(ii)\ \Phi(t)=\frac{2}{p}t^{p/2}\ \,\ \Delta_{p}u=F^{ \prime}(u)\\ \mbox{where}\ \,\Delta_{p}u=\frac{1}{\sqrt{det(g_{ij})}}\partial_{k} \left(\sqrt{det(g_{ij})}g^{kl}\partial_{l}(|\nabla u|^{p-2}\nabla u)\right), \end{array} \tag{10}\]
\[\begin{array}{c}(iii)\ \Phi(t)=2\sqrt{1+t}-2\ \,\ div(\frac{\nabla u}{\sqrt{1+| \nabla u|^{2}}})=F^{\prime}(u)\\ \mbox{where}\ \,div(\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}})=\frac{1}{ \sqrt{det(g_{ij})}}\partial_{k}\left(\sqrt{det(g_{ij})}g^{kl}\partial_{l}( \frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}})\right),\end{array} \tag{11}\]
written in local coordinates respectively.
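As a quick consistency check (ours, not part of the original exposition), the \(p-\)Laplacian case (10) fits Assumption (A) with \(a=0\): for \(\Phi(t)=\frac{2}{p}t^{p/2}\) one has \(\Phi^{\prime}(t)=t^{\frac{p}{2}-1}\) and \(\Phi^{\prime\prime}(t)=(\frac{p}{2}-1)t^{\frac{p}{2}-2}\), so (4) becomes

\[a_{ij}(\sigma)=(p-2)|\sigma|^{p-4}\sigma_{i}\sigma_{j}+|\sigma|^{p-2}\delta_{ij},\]

whose eigenvalues are \((p-1)|\sigma|^{p-2}\) in the direction of \(\sigma\) and \(|\sigma|^{p-2}\) on the orthogonal complement. Hence (5) and (6) hold with \(c_{1}=\min\{1,p-1\}\) and \(c_{2}=\max\{1,p-1\}\).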
In this work we prove the following gradient bound for solutions of (1).
**Theorem 1.1**.: _Let \(\mathcal{M}\) be a smooth and compact Riemannian manifold with nonnegative Ricci curvature. Let \(u\in C^{3}(\mathcal{M})\cap C^{2,\alpha}(\mathcal{M})\) be a solution of_
\[div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u) \tag{12}\]
_such that \(F\geq 0\) and \(\alpha\in(0,1)\)._
_Then_
\[2\Phi^{\prime}(|\nabla u(x)|^{2})|\nabla u(x)|^{2}-\Phi(|\nabla u(x)|^{2})\leq 2 F(u(x)) \tag{13}\]
_for any \(x\in\mathcal{M}\)._
When \(\Phi^{\prime}\equiv 1\) in (1), the gradient bound becomes
\[|\nabla u(x)|^{2}\leq 2F(u(x))\;\;,\;\forall\,x\in\mathcal{M}\]
as we see in [9] and the respective \(P-\) function is
\[P(u;x)=|\nabla u(x)|^{2}-2F(u(x))\]
In this case, \(P\) satisfies the following elliptic inequality
\[|\nabla u|^{2}\Delta P-2F^{\prime}\langle\nabla u,\nabla P\rangle\geq\frac{| \nabla P|^{2}}{2}+2|\nabla u|^{2}Ric(\nabla u,\nabla u)\]
as proved in [9].
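Similarly, a short computation (ours) in the \(p-\)Laplacian case \(\Phi(t)=\frac{2}{p}t^{p/2}\) gives \(\Phi^{\prime}(|\nabla u|^{2})=|\nabla u|^{p-2}\), so the \(P-\)function (3) reads

\[P(u;x)=2|\nabla u|^{p}-\frac{2}{p}|\nabla u|^{p}-2F(u)=\frac{2(p-1)}{p}|\nabla u|^{p}-2F(u),\]

and the bound (13) of Theorem 1.1 takes the form \(\frac{p-1}{p}|\nabla u(x)|^{p}\leq F(u(x))\), which recovers the above estimate when \(p=2\).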
For the proof of Theorem 1.1 we follow both [6] and [9].
Our second main result is a Liouville-type theorem for solutions of (1) when \(F^{\prime\prime}\geq 0\). This assumption on \(F\) guarantees stability for any solution; more precisely, the second variation of the energy functional \(J(u)=\int_{\mathcal{M}}\frac{1}{2}\Phi(|\nabla u|^{2})+F(u)\) is nonnegative.
**Theorem 1.2**.: _Let \(\mathcal{M}\) be a smooth and compact Riemannian manifold with nonnegative Ricci curvature. Let \(u\in C^{3}(\mathcal{M})\cap C^{2,\alpha}(\mathcal{M})\) be a solution of_
\[div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u) \tag{14}\]
_such that \(F^{\prime\prime}\geq 0\) and \(\alpha\in(0,1)\)._
_Then \(u\) is a constant._
Note that if \(F\) is convex and \(\Phi(t)=\frac{2}{p}t^{p/2}\), we obtain a Liouville-type result for the \(p-\)Laplacian that generalizes the classical result that the only bounded harmonic functions on compact manifolds are the constant functions.
Another Liouville-type theorem is the following
**Theorem 1.3**.: _Let \(u\) be a solution of (1) and suppose assumptions of Theorem 1.1 are satisfied. If there exists \(x_{0}\in\mathcal{M}\) such that \(F(u(x_{0}))=0\), then \(u\) is a constant in \(\mathcal{M}\)._
Also, we establish some gradient estimates for the solutions of (1). First, a Harnack inequality for the gradient of solutions
**Theorem 1.4**.: _Let \(\mathcal{M}\) be a smooth Riemannian manifold and \(u\in C^{3}(\mathcal{M})\) be a solution of_
\[div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u) \tag{15}\]
_such that \(F^{\prime\prime}\leq 0\) and assume that the Hessian of \(|\nabla u|^{2}\) is bounded from above._
_Then_
\[\begin{array}{c}\frac{1}{|B_{R}|^{1/p}}(\int_{B_{R}}|\nabla u|^{2p}d\mu_{g})^ {1/p}\leq C(\inf_{B_{R}}|\nabla u|^{2}+\frac{R^{2}}{|B_{2R}|^{1/n}}||Hes\,u||_{L ^{2n}(B_{2R})}^{2}\\ +\frac{R^{2}}{|B_{2R}|^{1/n}}||\Phi^{\prime}(|\nabla u|^{2})Ric(\nabla u, \nabla u))||_{L^{n}(B_{2R})})\end{array} \tag{16}\]
_In particular, if \(\mathcal{M}\) has non positive Ricci curvature it holds_
\[\frac{1}{|B_{R}|^{1/p}}(\int_{B_{R}}|\nabla u|^{2p}d\mu_{g})^{1/p}\leq C(\inf _{B_{R}}|\nabla u|^{2}+\frac{R^{2}}{|B_{2R}|^{1/n}}||Hes\,u||_{L^{2n}(B_{2R}) }^{2}) \tag{17}\]
Additionally, an Alexandrov-Bakelman-Pucci type estimate (ABP estimate) for the gradient of solutions is obtained. We assume the following property for a given bounded domain (bounded, open and connected set) \(\Omega\subset\mathcal{M}\):
\[\text{Given }\,R>0\,\text{ and }\,\theta\in(0,1),\,\text{ it holds }\,|B_{R}(x)\setminus\Omega|\geq\theta|B_{R}(x)|\quad\forall x\in\Omega \tag{18}\]
**Theorem 1.5**.: _Let \(\Omega\subset\mathcal{M}\) be a bounded domain and assume that (18) holds for some constants \(R>0\) and \(\theta\in(0,1)\) and \(\mathcal{M}\) has nonnegative Ricci curvature. Let \(u\in C^{3}(\mathcal{M})\) be a solution of (1) that satisfy \(\limsup_{x\to\partial\Omega}|\nabla u|=0\)._
_Then, for some \(z_{0}\in\overline{\Omega}\),_
\[\sup_{\Omega}|\nabla u|^{2}\leq C_{\theta}\frac{R^{2}}{|B_{2R}(z_{0})|^{1/n}} ||F^{\prime\prime}(u)|\nabla u|^{2}||_{L^{n}(\Omega\cap B_{2R}(z_{0}))} \tag{19}\]
Finally, in the last section we will prove that the existence of a nonconstant bounded solution \(u\) for which the gradient bound (13) becomes equality at some point \(x_{0}\in\mathcal{M}\), leads to a local splitting theorem as well as to a classification of such solution \(u\). This result is motivated by the work in [8], in which they proved local and global splitting theorems for the Allen-Cahn equations when the equipartition of the energy holds at some point. We extend some of these results for Quasi-linear equations on compact manifolds.
## 2 Proof of Theorem 1.1
First we prove that \(P\) defined in (3) is a \(P-\)function of (1).
**Lemma 2.1**.: _Let \(\mathcal{M}\) be a smooth Riemannian manifold and let \(u\in C^{3}(\mathcal{M})\) be a solution of (1)._
_Then_
\[|\nabla u|^{2}\sum_{i,j}\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}P)+\sum_{i}B_{i} \nabla_{i}P\geq\frac{|\nabla P|^{2}}{2\Lambda(|\nabla u|^{2})}+2\Phi^{\prime} (|\nabla u|^{2})Ric(\nabla u,\nabla u) \tag{20}\]
_where \(P\) defined in (3)._
Proof.: Throughout this proof we will use abstract index notation for the calculations. We consider the function (3) and take its covariant derivative.
\[\nabla_{i}P=\nabla_{i}(2\Phi^{\prime}(|\nabla u|^{2})|\nabla u|^{2}-\Phi(|\nabla u |^{2})-2F(u))\]
\[=4\Phi^{\prime\prime}(|\nabla u|^{2})g(\nabla_{i}\nabla u,\nabla u)|\nabla u|^{2 }+4\Phi^{\prime}(|\nabla u|^{2})g(\nabla_{i}\nabla u,\nabla u)-2\Phi^{\prime}(| \nabla u|^{2})g(\nabla_{i}\nabla u,\nabla u)-2F^{\prime}(u)\nabla_{i}u\]
\[=4\Phi^{\prime\prime}(|\nabla u|^{2})|\nabla u|^{2}g(\nabla_{i}\nabla u,\nabla u )+2\Phi^{\prime}(|\nabla u|^{2})g(\nabla_{i}\nabla u,\nabla u)-2f(u)\nabla_{i}u\]
where \(F^{\prime}=f\). By denoting
\[\Lambda(|\nabla u|^{2})=2\Phi^{\prime\prime}(|\nabla u|^{2})|\nabla u|^{2}+ \Phi^{\prime}(|\nabla u|^{2}) \tag{21}\]
we get
\[\nabla_{i}P=2\Lambda(|\nabla u|^{2})g(\nabla_{i}\nabla u,\nabla u)-2f(u) \nabla_{i}u \tag{22}\]
Next we will multiply \(\nabla_{i}P\) with
\[d_{ij}(\nabla u)=\frac{a_{ij}(\nabla u)}{\Lambda(|\nabla u|^{2})}\]
and take the covariant derivative of \(d_{ij}(\nabla u)\nabla_{i}P\).
\[\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}P)= \nabla_{j}(2a_{ij}(\nabla u)g(\nabla_{i}\nabla u,\nabla u)-2f(u)d _{ij}(\nabla u)\nabla_{i}u) \tag{23}\] \[= \nabla_{j}(2a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u)\nabla_{k}u+2a_ {ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\] \[-\nabla_{j}(2d_{ij}(\nabla u)\nabla_{i}u)f(u)-2f^{\prime}(u)d_{ij }(\nabla u)\nabla_{j}u\nabla_{i}u\]
First we will compute \(\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}u)\). Using (12) we get
\[\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}u) =\nabla_{j}d_{ij}(\nabla u)\nabla_{i}u+d_{ij}(\nabla u)\nabla_{j} \nabla_{i}u \tag{24}\] \[=\nabla_{j}d_{ij}(\nabla u)\nabla_{i}u+\frac{f(u)}{\Lambda(|\nabla u |^{2})}\]
Claim: The following identity holds
\[\nabla_{j}d_{ij}(\nabla u)\nabla_{i}u=\frac{2\Phi^{\prime\prime}(|\nabla u|^{2 })}{\Lambda(|\nabla u|^{2})}(|\nabla u|^{2}\Delta u-g(\nabla_{i}\nabla u, \nabla u)\nabla_{i}u)\]
Proof of the claim:
We have
\[\nabla_{j}d_{ij}(\nabla u)\nabla_{i}u=\frac{\nabla_{j}(a_{ij}(\nabla u))\Lambda (|\nabla u|^{2})\nabla_{i}u-a_{ij}(\nabla u)\nabla_{j}(\Lambda(|\nabla u|^{2} ))\nabla_{i}u}{\Lambda(|\nabla u|^{2})^{2}}\]
The numerator equals to
\[\nabla_{j}(a_{ij}(\nabla u))\Lambda(|\nabla u|^{2})\nabla_{i}u- a_{ij}(\nabla u)\nabla_{j}(\Lambda(|\nabla u|^{2}))\nabla_{i}u\] \[=8\Phi^{\prime\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u, \nabla u)|\nabla u|^{4}\nabla_{j}u\Phi^{\prime\prime}(|\nabla u|^{2})\] \[+4\Phi^{\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u,\nabla u )|\nabla u|^{2}\nabla_{j}u\Phi^{\prime}(|\nabla u|^{2})\] \[+4\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{i}u\nabla_ {j}u\nabla_{i}u|\nabla u|^{2}\Phi^{\prime\prime}(|\nabla u|^{2})\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{i}u\nabla_ {j}u\nabla_{i}u\Phi^{\prime}(|\nabla u|^{2})\] \[+4\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j}u\Delta u \nabla u|^{2}\Phi^{\prime\prime}(|\nabla u|^{2})\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{i}u\Delta u \Phi^{\prime}(|\nabla u|^{2})\] \[+4\Phi^{\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u,\nabla u )\nabla_{j}u|\nabla u|^{2}\Phi^{\prime\prime}(|\nabla u|^{2})\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u,\nabla u )\nabla_{j}u\Phi^{\prime}(|\nabla u|^{2})\] \[-12g(\nabla_{j}\nabla u,\nabla u)\Phi^{\prime\prime}(|\nabla u|^{2} )\nabla_{i}u\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j}u\] \[-6g(\nabla_{i}\nabla u,\nabla u)\Phi^{\prime\prime}(|\nabla u|^{2 })\nabla_{i}u\Phi^{\prime}(|\nabla u|^{2})\] \[-8g(\nabla_{j}\nabla u,\nabla u)|\nabla u|^{4}\Phi^{\prime \prime\prime}(|\nabla u|^{2})\nabla_{j}u\Phi^{\prime\prime}(|\nabla u|^{2})\] \[-4g(\nabla_{i}\nabla u,\nabla u)|\nabla u|^{2}\Phi^{\prime\prime}(| \nabla u|^{2})\nabla_{i}u\Phi^{\prime}(|\nabla u|^{2})\]
after we cancel some terms we obtain.
\[\begin{array}{c}\nabla_{j}(a_{ij}(\nabla u))\Lambda(|\nabla u|^{2})\nabla_{i}u-a_ {ij}(\nabla u)\nabla_{j}(\Lambda(|\nabla u|^{2}))\nabla_{i}u=4\Phi^{\prime\prime }(|\nabla u|^{2})^{2}\nabla_{i}u\nabla_{i}u\Delta|\nabla u|^{2}\\ \hskip 14.226378pt+2\Phi^{\prime\prime}(|\nabla u|^{2})\Phi^{\prime}(|\nabla u |^{2})\nabla_{i}\nabla_{i}u\Delta u-4\Phi^{\prime\prime}(|\nabla u|^{2})^{2}g( \nabla_{j}\nabla u,\nabla u)\nabla_{i}u\nabla_{i}u\nabla_{j}u\\ \hskip 14.226378pt-2\Phi^{\prime\prime}(|\nabla u|^{2})\Phi^{\prime}(|\nabla u |^{2})g(\nabla_{i}u\nabla u,\nabla u)\nabla_{i}u\\ \hskip 14.226378pt=2\Phi^{\prime\prime}(|\nabla u|^{2})\Delta u|\nabla u|^{2}( 2|\nabla u|^{2}\Phi^{\prime\prime}(|\nabla u|^{2})+\Phi^{\prime}(|\nabla u|^{2}) )\\ \hskip 14.226378pt-2\Phi^{\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u, \nabla u)\nabla_{j}u(2|\nabla u|^{2}\Phi^{\prime\prime}(|\nabla u|^{2})+\Phi^ {\prime}(|\nabla u|^{2}))\\ \hskip 14.226378pt=2\Phi^{\prime\prime}(|\nabla u|^{2})\Delta u|\nabla u|^{2} \Lambda(|\nabla u|^{2})-2\Phi^{\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u,\nabla u)\nabla_{j}u\Lambda(|\nabla u|^{2})\\ \hskip 14.226378pt=2\Phi^{\prime\prime}(|\nabla u|^{2})\Lambda(|\nabla u|^{2})( \Delta u|\nabla u|^{2}-g(\nabla_{j}\nabla u,\nabla u)\nabla_{j}u)\end{array} \tag{25}\]
Therefore,
\[\nabla_{j}d_{ij}(\nabla u)\nabla_{i}u=\frac{2\Phi^{\prime\prime}(|\nabla u|^{2} )(\Delta u|\nabla u|^{2}-g(\nabla_{j}\nabla u,\nabla u)\nabla_{j}u)}{\Lambda( |\nabla u|^{2})} \tag{26}\]
This finishes the proof of the claim.
Using (26) and (24) we get,
\[\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}u)=\frac{2\Phi^{\prime\prime}(|\nabla u|^ {2})}{\Lambda(|\nabla u|^{2})}(|\nabla u|^{2}\Delta u-g(\nabla_{i}\nabla u, \nabla u)\nabla_{i}u)+\frac{f(u)}{\Lambda(|\nabla u|^{2})} \tag{27}\]
Next we will calculate the term \(\nabla_{j}(2a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u)\nabla_{k}u\). We will prove the following
\[\nabla_{j}(a_{ij}(\nabla u)\nabla_{k}\nabla_{i}u)\nabla_{k}u=\nabla_{k}(a_{ij }(\nabla u)\nabla_{i}\nabla_{j}u)\nabla_{k}u+\Phi^{\prime}(|\nabla u|^{2})R_{ ki}\nabla_{i}u\nabla_{k}u \tag{28}\]
For the proof of (28) we are going to compute \(\nabla_{j}(a_{ij}(\nabla u)\nabla_{k}\nabla_{i}u)\) and then use the formula for commuting covariant derivatives in order for the curvature tensor to appear.
Proof of (28):
\[\begin{array}{c}\nabla_{j}(a_{ij}(\nabla u)\nabla_{k}\nabla_{i}u)=\nabla_{j} (2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j}u\nabla_{k}\nabla_{i }u+\Phi^{\prime}(|\nabla u|^{2})\nabla_{k}\nabla_{j}u)\\ =4\Phi^{\prime\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u,\nabla u)\nabla _{i}u\nabla_{j}u\nabla_{k}\nabla_{i}u+2\Phi^{\prime\prime}(|\nabla u|^{2}) \nabla_{j}\nabla_{i}u\nabla_{j}u\nabla_{k}\nabla_{i}u\\ \hskip 14.226378pt+2\Phi^{\prime\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla _{j}\nabla_{j}u\nabla_{k}\nabla_{i}u+2\Phi^{\prime\prime}(|\nabla u|^{2}) \nabla_{i}u\nabla_{j}u\nabla_{j}u\nabla_{k}\nabla_{i}u\\ \hskip 14.226378pt+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}\nabla_{l}u \nabla_{l}u\nabla_{k}\nabla_{i}u+\Phi^{\prime}(|\nabla u|^{2})\nabla_{j}\nabla _{k}\nabla_{j}u\\ \hskip 14.226378pt=4\Phi^{\prime\prime\prime}(|\nabla u|^{2})g(\nabla_{j}\nabla u,\nabla u)\nabla_{i}u\nabla_{j}u\nabla_{k}\nabla_{i}u+2\Phi^{\prime\prime}(| \nabla u|^{2})\nabla_{j}\nabla_{i}u\nabla_{j}u\nabla_{k}\nabla_{i}u\\ \hskip 14.226378pt+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j} \nabla_{k}\nabla_{i}u+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j} \nabla_{k}\nabla_{j}\nabla_{i}u-2\Phi^{\prime\prime}\nabla_{i}u\nabla_{j}u\nabla _{k}j_{ip}u\\ \hskip 14.226378pt+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}\nabla_{l}u \nabla_{l}u\nabla_{k}\nabla_{i}u+\Phi^{\prime}(|\nabla u|^{2})\nabla_{k}\nabla_{ j}\nabla_{j}u+\Phi^{\prime}(|\nabla u|^{2})R_{kp}\nabla_{p}u\\ \hskip 14.226378pt=\nabla_{k}(a_{ij}(\nabla u)\nabla_{j}\nabla_{i}u)+\Phi^{ \prime}(|\nabla u|^{2})R_{kp}\nabla_{p}u\end{array} \tag{29}\]
where \(R_{ijkl}\) are the components of the curvature tensor while \(R_{ij}=R_{ippj}\) are the components of the Ricci tensor. Note that from the skew-symmetry of the Riemann tensor the term \(-2\Phi^{\prime\prime}\nabla_{i}u\nabla_{j}uR_{jkip}\nabla_{p}u=0\) and so we showed (28). Also by taking the covariant derivative of \(a_{ij}(\nabla u)\nabla_{i}\nabla_{j}u=f(u)\) along the direction \(k\) we get \(\nabla_{k}(a_{ij}(\nabla u)\nabla_{i}\nabla_{j}u)=f^{\prime}(u)\nabla_{k}u\). As a result (28) becomes
\[\nabla_{j}(a_{ij}(\nabla u)\nabla_{k}\nabla_{i}u)\nabla_{k}u=f^{\prime}(u)|\nabla u |^{2}+\Phi^{\prime}(|\nabla u|^{2})R_{ki}\nabla_{i}u\nabla_{k}u \tag{30}\]
Combining (23), (27) and (30) we obtain
\[\begin{array}{c}\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}P)=& 2a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u+2\Phi^{\prime}(|\nabla u |^{2})R_{ij}\nabla_{i}u\nabla_{j}u\\ &+2\frac{\Phi^{\prime\prime}(|\nabla u|^{2})f(u)}{\Lambda(|\nabla u|^{2})\Phi^{ \prime}(|\nabla u|^{2})}\nabla_{i}u\nabla_{i}P-2\frac{f^{2}(u)}{\Lambda(|\nabla u| ^{2})}\end{array}\]
It follows directly from (1) and (4) that
\[\Delta u=\frac{f(u)}{\Phi^{\prime}(|\nabla u|^{2})}-\frac{\Phi^{\prime\prime}(| \nabla u|^{2})}{\Phi^{\prime}(|\nabla u|^{2})}\nabla_{i}u\nabla_{j}u\nabla_{i} \nabla_{j}u\]
and thus we get
\[\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}P)-\frac{2\Phi^{\prime\prime}(| \nabla u|^{2})f(u)}{\Lambda(|\nabla u|^{2})\Phi^{\prime}(|\nabla u|^{2})}\nabla_ {i}u\nabla_{i}P=2a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\] \[+2\Phi^{\prime}(|\nabla u|^{2})R_{ij}\nabla_{i}u\nabla_{i}u-2 \frac{f^{2}(u)}{\Lambda(|\nabla u|^{2})}\]
From the Cauchy-Schwarz inequality we have
\[|\nabla u|^{2}\nabla_{i}\nabla_{k}u\nabla_{i}\nabla_{k}u\geq\nabla_{i}\nabla_ {k}u\nabla_{i}u\nabla_{j}\nabla_{k}u\nabla_{j}u\]
and we apply this to the term \(a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\) so we get
\[a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\geq\frac{\Lambda(| \nabla u|^{2})}{|\nabla u|^{2}}\nabla_{i}\nabla_{k}u\nabla_{i}u\nabla_{j} \nabla_{k}u\nabla_{j}u\]
Also directly from (22) we obtain
\[\nabla_{i}\nabla_{k}u\nabla_{i}u\nabla_{j}\nabla_{k}u\nabla_{j}u=\frac{( \nabla_{k}P+2f(u)\nabla_{k}u)(\nabla_{k}P+2f(u)\nabla_{k}u)}{4\Lambda(|\nabla u |^{2})}\]
This concludes the proof of Lemma 2.1.
We now complete the proof of Theorem 1.1. We argue as in [6] and [9] with the appropriate modifications.
Let \(u\) be a solution of (12) and consider the set
\[\mathcal{E}:=\{v\in C^{2}(\mathcal{M})\;\;\mbox{solution of}\;\;(12)\;\;\mbox{such that}\;\;||v||_{C^{2,\alpha}(\mathcal{M})}\leq||u||_{C^{2,\alpha}(\mathcal{M})}\} \tag{31}\]
Let \(P=P(u;x)\) defined in (3) and consider
\[P_{0}=\sup\{P(v;x)\;|\;v\in\mathcal{E}\;,\;x\in\mathcal{M}\} \tag{32}\]
For proving the bound (13) it suffices to prove that
\[P_{0}\leq 0 \tag{33}\]
We argue by contradiction and suppose that
\[P_{0}>0 \tag{34}\]
So, there exist two sequences \(v_{k}\in\mathcal{E}\) and \(x_{k}\in\mathcal{M}\) such that
\[P_{0}-\frac{1}{k}\leq P(v_{k};x_{k})\leq P_{0} \tag{35}\]
By the compactness of \(\mathcal{M}\), \(x_{k}\) converges to some \(x_{0}\in\mathcal{M}\) up to a subsequence that we still denote as \(x_{k}\).
In addition, by the uniform bound
\[||v_{k}||_{C^{2,\alpha}(\mathcal{M})}\leq||u||_{C^{2,\alpha}(\mathcal{M})} \tag{36}\]
and the Ascoli-Arzela theorem for compact manifolds, we have that \(v_{k}\) converges uniformly in \(C^{2}(\mathcal{M})\) to some \(v_{0}\in\mathcal{E}\), up to a subsequence.
Thus,
\[P_{0}=\lim_{k\rightarrow+\infty}P(v_{k};x_{k})=P(v_{0};x_{0}) \tag{37}\]
by (35).
This gives
\[0<P_{0}=2\Phi^{\prime}(|\nabla v_{0}(x_{0})|^{2})|\nabla v_{0}(x_{0})|^{2}-\Phi(| \nabla v_{0}(x_{0})|^{2})-2F(v_{0}(x_{0})) \tag{38}\]
and since \(F\geq 0\) and \(2t\Phi^{\prime\prime}(t)+\Phi^{\prime}(t)>0\;,\;\forall t\), it holds that
\[\nabla v_{0}(x_{0})\neq 0 \tag{39}\]
Now, utilizing Lemma 2.1 and the Strong Maximum Principle together with (39), we obtain
\[P(v_{0};x)=P_{0}\quad\mbox{for all}\;\;x\in\mathcal{M} \tag{40}\]
On the other hand, since \(\mathcal{M}\) is compact and \(v_{0}\in C^{2}(\mathcal{M})\) there exists \(y_{0}\in\mathcal{M}\) at which \(v_{0}\) attains its minimum, and thus
\[\nabla v_{0}(y_{0})=0 \tag{41}\]
but then
\[P_{0}=P(v_{0};y_{0})=-2F(v_{0}(y_{0}))\leq 0 \tag{42}\]
and contradicts (34).
Therefore \(P_{0}\leq 0\) and we conclude.
## 3 Proof of Theorems 1.2 and 1.3
In this section we will prove the Liouville-type theorems. We begin with an appropriate elliptic inequality. However, in this case, this inequality is satisfied by the quantity \(P=|\nabla u|^{2}\). We observe that for proving Lemma 3.1, the ellipticity conditions in Assumptions (A) and (B) can be relaxed, in the sense that we need only assume the ellipticity condition \(c_{0}|\xi|^{2}\leq a_{ij}\xi_{i}\xi_{j}\leq C_{0}|\xi|^{2}\).
**Lemma 3.1**.: _Let \(u\in C^{3}(\mathcal{M})\) be a solution of (1) and assume that \(F^{\prime\prime}\geq 0\)._
_Then_
\[\sum_{i,j}\nabla_{j}(a_{ij}(\nabla u)\nabla_{i}P)\geq 2C_{0}|Hes\;u|^{2}+2F^{ \prime\prime}(u)|\nabla u|^{2}+2\Phi^{\prime}(|\nabla u|^{2})Ric(\nabla u, \nabla u) \tag{43}\]
_where \(P=|\nabla u|^{2}\)._
Proof.: We have the equation \(div(\Phi^{\prime}(|\nabla u|^{2})\nabla u)=F^{\prime}(u)\) which can be written as \(a_{ij}(\nabla u)\nabla_{i}\nabla_{j}u=F^{\prime}(u)\). Here
\[a_{ij}(\sigma)=2\Phi^{\prime\prime}(|\sigma|^{2})\sigma_{i}\sigma_{j}+\Phi^{\prime}(|\sigma|^{2})\delta_{ij}\]
Denote \(P=|\nabla u|^{2}\). Then
\[\nabla_{i}P= \nabla_{i}(g(\nabla u,\nabla u))=2g(\nabla_{i}\nabla u,\nabla u )=2\nabla_{i}\nabla_{k}u\nabla_{k}u\]
and
\[\begin{split}\nabla_{j}\nabla_{i}P=\nabla_{j}(2\nabla_{i}\nabla_{k}u\nabla_{k}u)\\ =2\nabla_{j}\nabla_{i}\nabla_{k}u\nabla_{k}u+2\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\end{split} \tag{44}\]
We claim that
\[\nabla_{j}(a_{ij}(\nabla u)\nabla_{k}\nabla_{i}u)-\nabla_{k}(a_{ij}(\nabla u) \nabla_{i}\nabla_{j}u)=\Phi^{\prime}(|\nabla u|^{2})R_{ik}\nabla_{k}u\]
Proof of the claim
\[\nabla_{j}(a_{ij}(\nabla u)\nabla_{k}\nabla_{i}u)= \nabla_{j}(2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j }u\nabla_{k}\nabla_{i}u+\Phi^{\prime}(|\nabla u|^{2})\nabla_{k}\nabla_{j}u)\] \[= 4\Phi^{\prime\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{l}u \nabla_{i}u\nabla_{j}u\nabla_{k}\nabla_{i}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{i}u\nabla_ {j}u\nabla_{k}\nabla_{i}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j}u\nabla_ {j}\nabla_{k}\nabla_{i}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{l}u\nabla_ {l}u\nabla_{k}\nabla_{j}u\] \[+\Phi^{\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{k}\nabla_{j}u\] \[= 4\Phi^{\prime\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{l}u \nabla_{l}u\nabla_{i}u\nabla_{j}u\nabla_{k}\nabla_{i}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{i}u\nabla_ {j}u\nabla_{k}\nabla_{i}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j}\nabla_ {j}u\nabla_{k}\nabla_{i}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u\nabla_{j}u\nabla _{k}\nabla_{j}\nabla_{i}u+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{i}u \nabla_{j}uR_{jkil}\nabla_{l}u\] \[+2\Phi^{\prime\prime}(|\nabla u|^{2})\nabla_{j}\nabla_{l}u\nabla_ {l}u\nabla_{k}\nabla_{j}u\] \[+\Phi^{\prime}(|\nabla u|^{2})\nabla_{k}\nabla_{j}\nabla_{j}u+ \Phi^{\prime}(|\nabla u|^{2})R_{jk}\nabla_{j}u\] \[= \nabla_{k}(a_{ij}(\nabla u)\nabla_{i}\nabla_{j}u)+\Phi^{\prime}(| \nabla u|^{2})R_{ik}\nabla_{i}u\]
where again \(R_{ijkl}\) are the components of the Riemann curvature tensor and \(R_{ij}\) are the components of the Ricci tensor.
We will now show that (43) holds. We utilize that
\[a_{ij}\nabla_{j}\nabla_{i}P=2a_{ij}(\nabla u)\nabla_{j}\nabla_{i}\nabla_{k}u \nabla_{k}u+2a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\]
So,
\[\nabla_{j}(a_{ij}\nabla_{i}P)= 2\nabla_{j}(a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u)\nabla_{k}u+2 a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\] \[= 2\nabla_{k}(a_{ij}(\nabla u)\nabla_{i}\nabla_{j}u)\nabla_{k}u+2 \Phi^{\prime}(|\nabla u|^{2})R_{ik}\nabla_{i}u\nabla_{k}u+2a_{ij}(\nabla u) \nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\] \[= 2F^{\prime\prime}(u)\nabla_{k}u\nabla_{k}u+2a_{ij}\nabla_{i} \nabla_{k}u\nabla_{j}\nabla_{k}u+2\Phi^{\prime}(|\nabla u|^{2})R_{ik}\nabla_ {i}u\nabla_{k}u\] \[= 2F^{\prime\prime}(u)|\nabla u|^{2}+2a_{ij}\nabla_{i}\nabla_{k}u \nabla_{j}\nabla_{k}u+2\Phi^{\prime}(|\nabla u|^{2})R_{ik}\nabla_{i}u\nabla_{k}u\]
From the ellipticity condition of \(a_{ij}\) we have
\[2a_{ij}(\nabla u)\nabla_{i}\nabla_{k}u\nabla_{j}\nabla_{k}u\geq C_{0}|Hesu|^{2}\]
and thus we get the result.
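As a sanity check (our remark, not needed for the argument), in the semilinear case \(\Phi(t)=t\) we have \(a_{ij}=\delta_{ij}\) and the computation above reduces to the Bochner formula combined with the equation \(\Delta u=F^{\prime}(u)\):

\[\Delta|\nabla u|^{2}=2|Hes\,u|^{2}+2\langle\nabla u,\nabla\Delta u\rangle+2Ric(\nabla u,\nabla u)=2|Hes\,u|^{2}+2F^{\prime\prime}(u)|\nabla u|^{2}+2Ric(\nabla u,\nabla u),\]

so in this case (43) holds with equality and \(C_{0}=1\).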
To complete the proof of Theorem 1.2, we argue as in the proof of Theorem 1.1 in the previous section, utilizing Lemma 3.1 in this case, to obtain \(P\equiv 0\) and we conclude.
For the proof of Theorem 1.3 we argue as in the proof of Theorem 1.8 in [6], utilizing the gradient bound obtained in the Theorem 1.1. The only difference is that we define
\[\phi(t)=u(x_{1}+t\,exp(v))-u(x_{0})\]
where \(exp:T_{x_{1}}\mathcal{M}\rightarrow\mathcal{M}\) is the exponential map and similarly conclude that \(\phi\) is identically zero for every \(x_{1}\in\mathcal{M}\).
## 4 Proof of Theorems 1.4 and 1.5
In this section, arguing similarly to [11], we will prove the gradient estimates, that is, a Harnack-type inequality and an ABP estimate for the gradient of solutions. Note that for proving these theorems we do not assume that the manifold is compact.
Proof.: We begin as follows. First we obtain an elliptic inequality for the quantity \(P=|\nabla u|^{2}\) similar to that of Lemma 3.1. In particular, arguing as in the proof of Lemma 3.1 we have
\[\sum_{i,j}\nabla_{j}(a_{ij}(\nabla u)\nabla_{i}P)\leq 2\tilde{C}|Hes\,u|^{2}+2F ^{\prime\prime}(u)|\nabla u|^{2}+2\Phi^{\prime}(|\nabla u|^{2})Ric(\nabla u, \nabla u) \tag{45}\]
where \(P=|\nabla u|^{2}\).
Since \(F^{\prime\prime}\leq 0\) we obtain
\[\sum_{i,j}\nabla_{j}(a_{ij}(\nabla u)\nabla_{i}P)\leq 2\tilde{C}|Hes\,u|^{2}+2 \Phi^{\prime}(|\nabla u|^{2})Ric(\nabla u,\nabla u) \tag{46}\]
So, utilizing the Harnack inequality on manifolds for \(P=|\nabla u|^{2}\), in particular Theorem 8.1 in [4], we conclude.
In the case where \(\mathcal{M}\) has non positive Ricci curvature (46) becomes
\[\sum_{i,j}\nabla_{j}(a_{ij}(\nabla u)\nabla_{i}P)\leq 2\tilde{C}|Hes\,u|^{2} \tag{47}\]
and similarly we obtain (17).
The proof of Theorem 1.5 is established similarly, by Lemma 3.1 and by Theorem 2.3 in [4].
## 5 A Local Splitting Theorem
In this last section we will prove a local splitting theorem of the manifold in a neighborhood of \(x_{0}\) together with a precise description of the solution in this neighborhood. The intuition behind these types of results when dealing with \(\mathbb{R}^{n}\), arises from the fact that if the equipartition of the energy of the Allen-Cahn functional \(\int\frac{1}{2}|\nabla u|^{2}+F(u)\) holds true at a single point (i.e. \(\frac{1}{2}|\nabla u|^{2}=F(u)\)), then the solutions are one dimensional. This has been generalized in [8] for complete manifolds where local and global splitting theorems are proved when the equipartition of the energy holds at some point. We will prove the local splitting analog for Quasi-linear equations and we point out the main obstruction for extending the global splitting theorem in our case in Remark 5.1.
The local splitting theorem is the following
**Theorem 5.1**.: _Let \(u\in C^{3}(\mathcal{M})\) and assume that equality is achieved in (13) at a regular point \(x_{0}\), i.e. \(\nabla u(x_{0})\neq 0\). Then, (i) equality in (13) holds in the connected component of \(\mathcal{M}\cap\{\nabla u\neq 0\}\) that contains \(x_{0}\). (ii) \(Ric(\nabla u,\nabla u)\) vanishes on the connected component of \(\mathcal{M}\cap\{\nabla u\neq 0\}\) that contains \(x_{0}\). (iii) there is a neighborhood \(U_{x_{0}}\subset\mathcal{M}\) of \(x_{0}\) that splits as the Riemannian product \(\mathcal{N}\times I\) where \(\mathcal{N}\subset\mathcal{M}\) is a totally geodesic and isoparametric hypersurface with \(Ric(\mathcal{N})\geq 0\) and \(I\subset\mathbb{R}\) is an interval, (iv) the solution \(u\) restricted to the neighborhood \(U_{x_{0}}\) is equal to \(u(p,s)=\phi(s)\) where \(\phi\) is a bounded and strictly monotone solution of \(\phi^{\prime\prime}=\dfrac{F^{\prime}(\phi)}{\Lambda((\phi^{\prime})^{2})}\) and \(\Lambda(t)=2t\Phi^{\prime\prime}(t)+\Phi^{\prime}(t)\)._
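For orientation (our remark), in the semilinear case \(\Phi(t)=t\) one has \(\Lambda\equiv 1\), so the ODE in (iv) is \(\phi^{\prime\prime}=F^{\prime}(\phi)\); multiplying by \(\phi^{\prime}\) gives the first integral

\[\frac{d}{ds}\Big(\frac{1}{2}(\phi^{\prime}(s))^{2}-F(\phi(s))\Big)=0,\]

so \(\frac{1}{2}(\phi^{\prime})^{2}-F(\phi)\) is constant along \(I\); by (i) this constant is zero, which is exactly the equipartition \(\frac{1}{2}|\nabla u|^{2}=F(u)\) of the Allen-Cahn energy discussed above.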
For the proof of Theorem 5.1 we utilize the techniques in [8] with some modifications. For the convenience of the reader we provide the details.
Proof.: Assume that \(u\) is a non constant solution of (1) and set
\[P(u;x)=2\Phi^{\prime}(|\nabla u(x)|^{2})|\nabla u(x)|^{2}-\Phi(|\nabla u(x)|^{2} )-2F(u(x))\ \,\ x\in\mathcal{M}.\]
By Lemma 2.1 the following inequality holds
\[|\nabla u|^{2}\nabla_{j}(d_{ij}(\nabla u)\nabla_{i}P)+B_{i}\nabla_{i}P\geq \frac{|\nabla P|^{2}}{2\Lambda(|\nabla u|^{2})}+2\Phi^{\prime}(|\nabla u|^{2} )Ric(\nabla u,\nabla u) \tag{48}\]
Theorem 1.1 gives that \(P\leq 0\) on \(\mathcal{M}\) and therefore by (48) and the strong maximum principle it holds that
\[P(u;x)=P(u;x_{0})=0\ \ \,\ \text{in the connected component of}\ \mathcal{M}\cap\{\nabla u\neq 0\}\,\ \text{that contains}\ \ x_{0}.\]
since \(|\nabla u(x_{0})|>0\) by assumption.
Also, from (48) we have that \(Ric(\nabla u,\nabla u)=0\) in the connected component of \(\mathcal{M}\cap\{\nabla_{g}u\neq 0\}\) that contains \(x_{0}\).
We now proceed to the remaining statements of Theorem 5.1. Define
\[w=Q(u)=\int_{u_{0}}^{u}G(s)^{-\frac{1}{2}}ds\]
where \(G(s)=\Psi^{-1}(2F(s))\).
A direct calculation gives \(|\nabla w|=1\) and \(\Delta w=0\) (see also Theorem 5.1 in [6]). That is, the function \(w\) is harmonic and its gradient has constant length one, so it generates a parallel vector field. From local splitting theorems (see [16] or section 3 in [8]) we conclude.
The one dimensionality and monotonicity of the solutions restricted to this neighborhood is straightforward.
**Remark 5.1**.: _Note that the extension of the above result to a global splitting theorem for compact manifolds would require an analogue of the Cheeger-Gromoll splitting theorem for complete noncompact manifolds. However, such a result is not known in general, and so it may be a motivation for future research._
|
2307.00447 | Strongly exceptional Legendrian connected sum of two Hopf links | In this paper, we give a complete coarse classification of strongly
exceptional Legendrian realizations of connected sum of two Hopf links in
contact 3-spheres. This is the first classification result about exceptional
Legendrian representatives for connected sums of link families. | Youlin Li, Sinem Onaran | 2023-07-02T00:55:03Z | http://arxiv.org/abs/2307.00447v2 | # Strongly exceptional Legendrian connected sum of two Hopf links
###### Abstract.
In this paper, we give a complete coarse classification of strongly exceptional Legendrian realizations of connected sum of two Hopf links in contact 3-spheres. These are the first classification results about exceptional Legendrian representatives for connected sums of link families.
## 1. Introduction
A Legendrian link in an overtwisted contact 3-manifold is exceptional (a.k.a. non-loose) if its complement is tight. There have been several classification results for exceptional Legendrian knots and links in overtwisted contact 3-spheres, including unknots [6], [5], torus knots [11], [15], [8] and Hopf links [10]. While there has been very little progress in the classification of Legendrian links with two or more components in either tight or overtwisted contact 3-spheres, a few papers, [1], [2], [3], [10], have tackled the problem.
In this paper, we study the classification of Legendrian realizations of connected sum of two Hopf links up to coarse equivalence in any contact \(3\)-sphere. This is one of the first families of connected sum of links for which a classification is known. Two Legendrian realizations \(K_{0}\cup K_{1}\cup K_{2}\) and \(K_{0}^{\prime}\cup K_{1}^{\prime}\cup K_{2}^{\prime}\) of the connected sum of two Hopf links in some contact \(3\)-sphere \(S^{3}\) are coarsely equivalent if there is a contactomorphism of \(S^{3}\) sending \(K_{0}\cup K_{1}\cup K_{2}\) to \(K_{0}^{\prime}\cup K_{1}^{\prime}\cup K_{2}^{\prime}\) as an ordered, oriented link.
Let \(A_{3}=K_{0}\cup K_{1}\cup K_{2}\subset S^{3}\) be the oriented connected sum of two Hopf links, where \(K_{0}\) is the central component. It is shown in Figure 1. The orientations of the components are also indicated. We think of \(K_{1}\) and \(K_{2}\) as two oriented meridians of \(K_{0}\).
We consider the Legendrian realizations of \(A_{3}\) in all contact 3-spheres. For \(i=0,1,2\), denote the Thurston-Bennequin invariant of \(K_{i}\) by \(t_{i}\), and the rotation number of \(K_{i}\) by \(r_{i}\).
Let \((M,\xi)\) be a contact 3-manifold and \([T]\) an isotopy class of embedded tori in \(M\). The Giroux torsion of \((M,\xi)\) is the supremum of \(n\in\mathbb{N}_{0}\) for which there is a contact embedding of
\[(T^{2}\times[0,1],\ker(\sin(n\pi z)dx+\cos(n\pi z)dy))\]
Figure 1. The link \(A_{3}=K_{0}\cup K_{1}\cup K_{2}\) in \(S^{3}\).
**Theorem 1.4**.: _Suppose \(t_{1}>1\) and \(t_{2}>1\), then the number of strongly exceptional Legendrian \(A_{3}\) links is_
\[\left\{\begin{array}{ll}18,&\mbox{if $t_{0}\geq 4$ and $t_{1}=t_{2}=2$},\\ 14,&\mbox{if $t_{0}=3$ and $t_{1}=t_{2}=2$},\\ 10,&\mbox{if $t_{0}=2$ and $t_{1}=t_{2}=2$},\\ 24,&\mbox{if $t_{0}\geq 4,t_{1}\geq 3$ and $t_{2}=2$},\\ 20,&\mbox{if $t_{0}=3,t_{1}\geq 3$ and $t_{2}=2$},\\ 16,&\mbox{if $t_{0}=2,t_{1}\geq 3$ and $t_{2}=2$},\\ 32,&\mbox{if $t_{0}\geq 4,t_{1}\geq 3$ and $t_{2}\geq 3$},\\ 28,&\mbox{if $t_{0}=3,t_{1}\geq 3$ and $t_{2}\geq 3$},\\ 24,&\mbox{if $t_{0}=2,t_{1}\geq 3$ and $t_{2}\geq 3$},\\ 8-4t_{0},&\mbox{if $t_{0}\leq 1$}.\end{array}\right.\]
**Theorem 1.5**.: _Suppose \(t_{1}<0\) and \(t_{2}=1\), then the number of strongly exceptional Legendrian \(A_{3}\) links is_
\[\left\{\begin{array}{ll}4-4t_{1},&\mbox{if $t_{0}\geq 4$},\\ 4-3t_{1},&\mbox{if $t_{0}=3$},\\ 4-2t_{1},&\mbox{if $t_{0}=2$},\\ t_{0}t_{1}-2t_{1},&\mbox{if $t_{0}\leq 1$}.\end{array}\right.\]
**Theorem 1.6**.: _Suppose \(t_{1}<0\) and \(t_{2}>1\), then the number of strongly exceptional Legendrian \(A_{3}\) links is_
\[\left\{\begin{array}{ll}6-6t_{1},&\mbox{if $t_{0}\geq 3,t_{2}=2$},\\ 6-4t_{1},&\mbox{if $t_{0}=2,t_{2}=2$},\\ 6-2t_{1},&\mbox{if $t_{0}=1,t_{2}=2$},\\ 8-8t_{1},&\mbox{if $t_{0}\geq 3,t_{2}\geq 3$},\\ 8-6t_{1},&\mbox{if $t_{0}=2,t_{2}\geq 3$},\\ 8-4t_{1},&\mbox{if $t_{0}=1,t_{2}\geq 4$},\\ 8-3t_{1},&\mbox{if $t_{0}=1,t_{2}=3$},\\ 2t_{0}t_{1}-2t_{1},&\mbox{if $t_{0}\leq 0$}.\end{array}\right.\]
**Theorem 1.7**.: _Suppose \(t_{1}=0\), then the number of strongly exceptional Legendrian \(A_{3}\) links is_
\[\left\{\begin{array}{ll}2-2t_{2},&\mbox{if $t_{2}\leq 0$},\\ 4,&\mbox{if $t_{2}=1$},\\ 6,&\mbox{if $t_{2}=2$},\\ 8,&\mbox{if $t_{2}\geq 3$}.\end{array}\right.\]
By exchanging the roles of \(K_{1}\) and \(K_{2}\) as necessary, we have covered all cases. Therefore, we have completely classified strongly exceptional Legendrian \(A_{3}\) links. The reader can look up the explicit rotation numbers and corresponding \(d_{3}\)-invariants in Lemmas 4.3-4.6, 4.8-4.28, 4.30-4.40, 4.43-4.46 of Section 4. In particular, we have:
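For the reader's convenience, the piecewise counts in Theorems 1.4-1.7 can be tabulated mechanically. The short helper below is ours and is not part of the paper; it covers only the ranges of \((t_{1},t_{2})\) treated in Theorems 1.4-1.7, uses the symmetry between \(K_{1}\) and \(K_{2}\) just noted, and returns `None` in the remaining cases.

```python
# Tabulation (ours) of the counts in Theorems 1.4-1.7 for Thurston-Bennequin
# invariants (t0, t1, t2); cases covered by other results of the paper give None.

def count_strongly_exceptional(t0: int, t1: int, t2: int):
    # Theorem 1.7 (t1 = 0); the case t2 = 0 follows by the K1 <-> K2 symmetry.
    if t1 == 0 or t2 == 0:
        s = t2 if t1 == 0 else t1
        if s <= 0:
            return 2 - 2 * s
        return {1: 4, 2: 6}.get(s, 8)          # 8 whenever s >= 3

    # Theorem 1.4 (t1 > 1 and t2 > 1).
    if t1 > 1 and t2 > 1:
        if t0 <= 1:
            return 8 - 4 * t0
        a, b = max(t1, t2), min(t1, t2)        # symmetry: order the meridians
        col = 0 if a == 2 else (1 if b == 2 else 2)
        row = {2: 0, 3: 1}.get(t0, 2)          # t0 = 2, t0 = 3, t0 >= 4
        return [[10, 16, 24], [14, 20, 28], [18, 24, 32]][row][col]

    # Theorems 1.5 and 1.6 (one meridian has negative tb); swap if needed.
    if t1 > 0 and t2 < 0:
        t1, t2 = t2, t1
    if t1 < 0 and t2 == 1:                     # Theorem 1.5
        if t0 >= 4:
            return 4 - 4 * t1
        if t0 == 3:
            return 4 - 3 * t1
        if t0 == 2:
            return 4 - 2 * t1
        return t0 * t1 - 2 * t1                # t0 <= 1
    if t1 < 0 and t2 > 1:                      # Theorem 1.6
        if t0 <= 0:
            return 2 * t0 * t1 - 2 * t1
        if t2 == 2:
            return {1: 6 - 2 * t1, 2: 6 - 4 * t1}.get(t0, 6 - 6 * t1)
        if t0 >= 3:
            return 8 - 8 * t1
        if t0 == 2:
            return 8 - 6 * t1
        return 8 - 3 * t1 if t2 == 3 else 8 - 4 * t1   # t0 = 1
    return None                                # handled by other results

# Example: count_strongly_exceptional(4, 2, 2) == 18, as in Theorem 1.4.
```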
**Theorem 1.8**.: _The strongly exceptional Legendrian \(A_{3}\) links are determined up to coarse equivalence by their Thurston-Bennequin invariants and rotation numbers._
**Remark 1.9**.: Strongly exceptional Legendrian \(A_{3}\) links exist only in overtwisted contact 3-spheres with \(d_{3}=-\frac{3}{2},-\frac{1}{2},\frac{1}{2},\frac{3}{2},\frac{5}{2}\).
**Remark 1.10**.: Suppose \(t_{1},t_{2}\neq 0\). If \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil\geq 2\), then any strongly exceptional Legendrian \(A_{3}\) link can be destabilized at the component \(K_{0}\) to another strongly exceptional one. If \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil<1\), then any strongly exceptional Legendrian \(A_{3}\) link can be destabilized at the component \(K_{0}\) to a strongly exceptional Legendrian link with \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil=1\). In the case \(t_{1}=0\), any strongly exceptional Legendrian \(A_{3}\) link can be destabilized at the component \(K_{0}\) to another strongly exceptional one. On the other hand, a positive (or negative) stabilization at the component \(K_{0}\) of a strongly exceptional Legendrian \(A_{3}\) link is strongly exceptional if and only if the resulting rotation numbers are indeed the rotation numbers of a strongly exceptional Legendrian \(A_{3}\) link.
The following is the structure of this paper. Section 2 presents upper bounds for appropriate tight contact structures on \(\Sigma\times S^{1}\). In section 3, we discuss various methods to realize the strongly exceptional Legendrian \(A_{3}\) links. Section 4 focuses on the realization of the strongly exceptional Legendrian \(A_{3}\) links, including the calculation of their rotation numbers and the \(d_{3}\)-invariants of their ambient contact \(S^{3}\). In section 5, we explore the stabilizations among the strongly exceptional Legendrian \(A_{3}\) links. Finally, the last section provides a detailed computation as a sample, showcasing the calculation of rotation numbers and \(d_{3}\)-invariants.
### Acknowledgements
The authors would like to thank John Etnyre for some correspondence. The first author was partially supported by Grants No. 12271349 of the National Natural Science Foundation of China. The second author was partially supported by the Turkish Fulbright Commission, IMSA Visitor Program, TUBITAK 2219 and TUBITAK Grant No. 119F411.
## 2. Tight contact structures on \(\Sigma\times S^{1}\)
Let \(N(K_{i})\) be the standard neighborhood of \(K_{i}\). The closure of the complement \(S^{3}\setminus N(K_{i})\) is called the exterior of \(K_{i}\). The Seifert longitude and meridian of \(K_{i}\) are \(\lambda_{i}\) and \(\mu_{i}\). The exterior of \(K_{0}\cup K_{1}\cup K_{2}\), \(\overline{S^{3}\setminus(N(K_{0})\cup N(K_{1})\cup N(K_{2}))}\), is diffeomorphic to \(\Sigma\times S^{1}\), where \(\Sigma\) is a pair of pants. Suppose \(\partial\Sigma=c_{0}\cup c_{1}\cup c_{2}\) as shown in Figure 2. Let \(h\) denote the homology class of the \(S^{1}\) factor, namely the vertical circle. Then \(\lambda_{0}=c_{0}\), \(\lambda_{1}=\lambda_{2}=h\), \(\mu_{0}=h\), \(\mu_{1}=-c_{1}\), \(\mu_{2}=-c_{2}\). Suppose \(\partial(\Sigma\times S^{1})=T_{0}\cup T_{1}\cup T_{2}\), where \(T_{i}=c_{i}\times S^{1}\). Then the dividing set of \(T_{0}\) has slope \(t_{0}\), i.e., has the homology \(c_{0}+t_{0}h\), and the dividing set of \(T_{i}\) has slope \(-\frac{1}{t_{i}}\), i.e., has the homology \(-t_{i}c_{i}+h\), for \(i=1,2\).
Following [17], we say that a tight contact structure \(\xi\) on \(\Sigma\times S^{1}\) is appropriate if there is no contact embedding of
\[(T^{2}\times[0,1],\ker(\sin(\pi z)dx+\cos(\pi z)dy))\]
into \((\Sigma\times S^{1},\xi)\), with \(T^{2}\times\{0\}\) isotopic to a boundary component of \(\Sigma\times S^{1}\). A Legendrian representation of the link \(A_{3}=K_{0}\cup K_{1}\cup K_{2}\) in an overtwisted contact 3-sphere is strongly exceptional if and only if its complement is an appropriate tight contact \(\Sigma\times S^{1}\).
In this section, we study the appropriate tight contact structures on \(\Sigma\times S^{1}\) with minimal convex boundary, and the boundary slopes \(s_{0}=s(T_{0})=t_{0}\), \(s_{1}=s(T_{1})=-\frac{1}{t_{1}}\) and \(s_{2}=s(T_{2})=-\frac{1}{t_{2}}\), where \(t_{0},t_{1},t_{2}\) are integers.
**Lemma 2.1**.: _[_12_]_ _Let \(T^{2}\) be a convex surface in a contact 3-manifold with \(\#\Gamma_{T^{2}}=2\) and slope \(s\). If a bypass \(D\) is attached to \(T^{2}\) from the front (the back, resp.) along a Legendrian ruling curve of slope \(r\neq s\), then the resulting convex surface \(\tilde{T}^{2}\) will have \(\#\Gamma_{\tilde{T}^{2}}=2\) and the slope \(s^{\prime}\) which is obtained as follows: Take the arc \([r,s]\subset\partial\mathbb{H}^{2}\) obtained by starting from \(r\) and moving counterclockwise (clockwise, resp.) until we hit \(s\). On this arc, let \(s^{\prime}\) be the point that is closest to \(r\) and has an edge from \(s^{\prime}\) to \(s\)._
Every vertical circle in a contact \(\Sigma\times S^{1}\) has a canonical framing that arises from the product structure. Let \(\gamma\) be a Legendrian circle that lies in the vertical direction. The twisting number \(t(\gamma)\) of \(\gamma\) measures the amount by which the contact framing of \(\gamma\) deviates from the canonical framing. If \(t(\gamma)=0\), then we say that \(\gamma\) is a \(0\)-twisting vertical Legendrian circle.
Figure 3. Farey graph on the Poincare disk \(\mathbb{H}^{2}\).
**Lemma 2.2**.: _Suppose \(\xi\) is an appropriate tight contact structure on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\). If \(t_{1},t_{2}\neq 0\) and \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil\geq 2\), then \(\xi\) has a \(0\)-twisting vertical Legendrian circle._
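For orientation (our remark), the hypothesis \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil\geq 2\) unpacks according to the signs of \(t_{1},t_{2}\): since \(\lceil-\frac{1}{t_{i}}\rceil=1\) for \(t_{i}<0\), \(\lceil-\frac{1}{t_{i}}\rceil=-1\) for \(t_{i}=1\), and \(\lceil-\frac{1}{t_{i}}\rceil=0\) for \(t_{i}>1\), the hypothesis reads \(t_{0}\geq 0\) when \(t_{1},t_{2}<0\), \(t_{0}\geq 4\) when \(t_{1}=t_{2}=1\), \(t_{0}\geq 3\) when \(t_{1}>1,t_{2}=1\), \(t_{0}\geq 2\) when \(t_{1},t_{2}>1\) or \(t_{1}<0,t_{2}=1\), and \(t_{0}\geq 1\) when \(t_{1}<0,t_{2}>1\). These are exactly the lower bounds on \(t_{0}\) used in the cases of the proof below.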
Proof.: We assume the Legendrian rulings on \(T_{1}\) and \(T_{2}\) to have infinite slopes. Consider a convex vertical annulus \(A\) such that the boundary consists of a Legendrian ruling on \(T_{1}\) and a Legendrian ruling on \(T_{2}\). The dividing set of \(A\) intersects \(T_{i}\), \(i=1,2\), in exactly \(2|t_{i}|\) points. If every dividing curve of \(A\) is boundary parallel, then there exists a \(0\)-twisting vertical Legendrian circle in \(A\). So we assume that there exist dividing arcs on \(A\), which connect the two boundary components of \(A\). If there is a boundary parallel dividing curve on \(A\), then we perform a bypass (attached from the back of \(T_{i}\)) to eliminate it.
(1) Suppose \(t_{1}<0\) and \(t_{2}<0\). By Lemma 2.1, we can obtain a submanifold \(\tilde{\Sigma}\times S^{1}\) of \(\Sigma\times S^{1}\) whose boundary is \(T_{0}\cup\tilde{T}_{1}\cup\tilde{T}_{2}\), where both \(\tilde{T}_{1}\) and \(\tilde{T}_{2}\) have slopes \(-\frac{1}{t_{3}}\) for some integer \(t_{3}\in[\max\{t_{1},t_{2}\},-1]\). Moreover, each dividing curve on \(\tilde{A}=A\cap(\tilde{\Sigma}\times S^{1})\) connects the two boundary components. Let \(N\) be a neighborhood of \(\tilde{T}_{1}\cup\tilde{T}_{2}\cup\tilde{A}\), and \(\partial N=\tilde{T}_{1}\cup\tilde{T}_{2}\cup\tilde{T}\). Then, by edge-rounding, \(\tilde{T}\) has slope \(\frac{1}{t_{3}}+\frac{1}{t_{3}}+\frac{1}{-t_{3}}=\frac{1}{t_{3}}\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\tilde{\Sigma}\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(\frac{1}{t_{3}}\). Since \(t_{0}\geq 0>\frac{1}{t_{3}}\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
(2) Suppose \(t_{1}=1\) and \(t_{2}=1\). It follows from [13, Lemma 5.1].
(3) Suppose \(t_{1}>1\) and \(t_{2}=1\). By Lemma 2.1, we can obtain a submanifold \(\tilde{\Sigma}\times S^{1}\) of \(\Sigma\times S^{1}\) whose boundary is \(T_{0}\cup\tilde{T}_{1}\cup T_{2}\), where \(\tilde{T}_{1}\) has slope \(0\). Moreover, each dividing curve on \(\tilde{A}=A\cap(\tilde{\Sigma}\times S^{1})\) connects the two boundary components. Let \(N\) be a neighborhood of \(\tilde{T}_{1}\cup T_{2}\cup\tilde{A}\), and \(\partial N=\tilde{T}_{1}\cup T_{2}\cup\tilde{T}\). Then, by edge-rounding, \(\tilde{T}\) has slope \(0+1+1=2\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\tilde{\Sigma}\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(2\). Since \(t_{0}\geq 3>2\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
(4) Suppose \(t_{1}>1\) and \(t_{2}>1\). We divide this case into two subcases:
(i) There exist boundary parallel dividing curves on \(A\). By Lemma 2.1, we can obtain a submanifold \(\tilde{\Sigma}\times S^{1}\) of \(\Sigma\times S^{1}\) whose boundary is \(T_{0}\cup\tilde{T}_{1}\cup\tilde{T}_{2}\), where both \(\tilde{T}_{1}\) and \(\tilde{T}_{2}\) have slope \(0\). Moreover, each dividing curve on \(\tilde{A}=A\cap(\tilde{\Sigma}\times S^{1})\) connects the two boundary components. Let \(N\) be a neighborhood of \(\tilde{T}_{1}\cup\tilde{T}_{2}\cup\tilde{A}\), and \(\partial N=\tilde{T}_{1}\cup\tilde{T}_{2}\cup\tilde{T}\). Then, by edge-rounding, \(\tilde{T}\) has slope \(0+0+1=1\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\tilde{\Sigma}\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(1\). Since \(t_{0}\geq 2>1\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
(ii) There exists no boundary parallel dividing curve on \(A\). Then \(t_{1}=t_{2}\) and all dividing curves on \(A\) connect the two boundary components of \(A\). Let \(N\) be a neighborhood of \(T_{1}\cup T_{2}\cup\tilde{A}\), and \(\partial N=T_{1}\cup T_{2}\cup\tilde{T}\). Then, by edge-rounding, \(\tilde{T}\) has slope \(\frac{1}{t_{1}}+\frac{1}{t_{1}}+\frac{1}{t_{1}}=\frac{3}{t_{1}}\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\Sigma\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(\frac{3}{t_{1}}\). Since \(t_{0}\geq 2>\frac{3}{t_{1}}\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
(5) Suppose \(t_{1}<0\) and \(t_{2}=1\). There are boundary parallel dividing curves on \(A\). By Lemma 2.1, we can obtain a submanifold \(\tilde{\Sigma}\times S^{1}\) of \(\Sigma\times S^{1}\) whose boundary is \(T_{0}\cup\tilde{T}_{1}\cup T_{2}\), where \(\tilde{T}_{1}\) has slope \(1\). Moreover, each dividing curve on \(\tilde{A}=A\cap(\tilde{\Sigma}\times S^{1})\) connects the two boundary components. Let \(N\) be a neighborhood of \(\tilde{T}_{1}\cup T_{2}\cup\tilde{A}\), and \(\partial N=\tilde{T}_{1}\cup T_{2}\cup\tilde{T}\). Then, by edge-rounding, \(\tilde{T}\) has slope \(1+(-1)+1=1\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\tilde{\Sigma}\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(1\). Since \(t_{0}\geq 2>1\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
(6) Suppose \(t_{1}<0\) and \(t_{2}>1\). We divide this case into two subcases.
(i) If there exist boundary parallel dividing curves on \(A\) whose boundary points belong to \(A\cap T_{2}\), we can use Lemma 2.1 to obtain a submanifold \(\tilde{\Sigma}\times S^{1}\) of \(\Sigma\times S^{1}\) whose boundary is \(T_{0}\cup\tilde{T}_{1}\cup\tilde{T}_{2}\), where \(\tilde{T}_{1}\) has slope \(1\) and \(\tilde{T}_{2}\) has slope \(0\). Furthermore, each dividing curve on \(\tilde{A}=A\cap(\tilde{\Sigma}\times S^{1})\) connects the two boundary components. Let \(N\) be a neighborhood of \(\tilde{T}_{1}\cup\tilde{T}_{2}\cup\tilde{A}\), and \(\partial N=\tilde{T}_{1}\cup\tilde{T}_{2}\cup\tilde{T}\). By performing edge-rounding, \(\tilde{T}\) will have slope \(-1+0+1=0\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\tilde{\Sigma}\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(0\). Since \(t_{0}\geq 1>0\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
(ii) If there are no boundary parallel dividing curves on \(A\) whose boundary points belong to \(A\cap T_{2}\), we can use Lemma 2.1 to obtain a submanifold \(\tilde{\Sigma}\times S^{1}\) of \(\Sigma\times S^{1}\) whose boundary is \(T_{0}\cup\tilde{T}_{1}\cup T_{2}\), where \(\tilde{T}_{1}\) has slope \(\frac{1}{t_{2}}\). Furthermore, each dividing curve on \(\tilde{A}=A\cap(\tilde{\Sigma}\times S^{1})\) connects the two boundary components. Let \(N\) be a neighborhood of \(\tilde{T}_{1}\cup T_{2}\cup\tilde{A}\), and \(\partial N=\tilde{T}_{1}\cup T_{2}\cup\tilde{T}\). By performing edge-rounding, \(\tilde{T}\) will have slope \(-\frac{1}{t_{2}}+\frac{1}{t_{2}}+\frac{1}{t_{2}}=\frac{1}{t_{2}}\) (as seen from \(T_{0}\)). Therefore, the thickened torus \(\tilde{\Sigma}\times S^{1}\setminus N\) has boundary slopes \(t_{0}\) and \(\frac{1}{t_{2}}\). Since \(t_{0}\geq 1>\frac{1}{t_{2}}\), there must exist a \(0\)-twisting vertical Legendrian circle in this thickened torus, and hence in \(\Sigma\times S^{1}\).
**Lemma 2.3**.: _Suppose \(\xi\) is a tight contact structure on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\), where \(t_{1},t_{2}\neq 0\), and that \(\xi\) has a \(0\)-twisting vertical Legendrian circle. Then it admits a factorization \(\Sigma\times S^{1}=L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup \Sigma^{\prime}\times S^{1}\), where the \(L^{\prime}_{i}\) are disjoint toric annuli with minimal twisting and minimal convex boundary \(\partial L^{\prime}_{i}=T_{i}-T^{\prime}_{i}\), and all the components of \(\partial\Sigma^{\prime}\times S^{1}=T^{\prime}_{0}\cup T^{\prime}_{1}\cup T^{\prime}_{2}\) have boundary slopes \(\infty\)._
Proof.: The proof is similar to that of [13, Lemma 5.1, Part 1].
Let \(\xi\) be a contact structure on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\). Assume it admits a factorization \(\Sigma\times S^{1}=L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup \Sigma^{\prime}\times S^{1}\), where \(L^{\prime}_{i}\) are disjoint toric annuli with minimal twisting and minimal convex boundary \(\partial L^{\prime}_{i}=T_{i}-T^{\prime}_{i}\), and all the components of \(\partial\Sigma^{\prime}\times S^{1}=T^{\prime}_{0}\cup T^{\prime}_{1}\cup T^{ \prime}_{2}\) have boundary slopes \(\infty\).
According to the proof of Lemma 2.2, we know that for \(i=1,2\), there exists a basic slice \(B^{\prime}_{i}\subset L^{\prime}_{i}\) with one boundary component \(T^{\prime}_{i}\) and another boundary slope \(\lceil-\frac{1}{t_{i}}\rceil\). Let \(C^{\prime}_{i}\) be the continued fraction block in \(L^{\prime}_{i}\) that contains \(B^{\prime}_{i}\). The basic slices in \(C^{\prime}_{i}\) can be shuffled. Namely, any basic slice in \(C^{\prime}_{i}\) can be shuffled to be \(B^{\prime}_{i}\).
**Lemma 2.4**.: _(1) Suppose \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil=3\). If the signs of \(L_{0}^{\prime}\), \(B_{1}^{\prime}\) and \(B_{2}^{\prime}\) are the same, then \(\xi\) remains unchanged if we change the three signs simultaneously. (2) Suppose \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil\leq 2\). If the signs of \(L_{0}^{\prime}\), \(B_{1}^{\prime}\), and \(B_{2}^{\prime}\) are the same, then \(\xi\) is overtwisted._
Proof.: The restriction of \(\xi\) on \(L_{0}^{\prime}\cup B_{1}^{\prime}\cup B_{2}^{\prime}\cup\Sigma^{\prime}\times S ^{1}\) has boundary slopes \(t_{0}\), \(\lceil-\frac{1}{t_{1}}\rceil\) and \(\lceil-\frac{1}{t_{2}}\rceil\). So the lemma follows by applying [13, Lemma 5.1] to \(L_{0}^{\prime}\cup B_{1}^{\prime}\cup B_{2}^{\prime}\cup\Sigma^{\prime}\times S ^{1}\).
**Lemma 2.5**.: _[_8_]_ _There is a unique appropriate tight contact structure on \(\Sigma\times S^{1}\) whose three boundary slopes are all \(\infty\) up to isotopy (not fixing the boundary point-wise, but preserving it set-wise)._
**Lemma 2.6**.: _Let \(\xi\) be a contact structure on \(\Sigma\times S^{1}\). Assume that each \(T_{i}\) is minimal convex with dividing curves of finite slope \(t_{0}\), \(-\frac{1}{t_{1}}\) and \(-\frac{1}{t_{2}}\). If \(\xi\) has \(0\)-twisting vertical Legendrian circles and \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil\leq 1\), then \(\xi\) is not appropriate tight._
Proof.: As there is a \(0\)-twisting vertical Legendrian circle, there exists a minimal convex torus \(T_{i}^{\prime}\), parallel to \(T_{i}\), with slope \(\lceil-\frac{1}{t_{i}}\rceil\), \(i=1,2\). Consider a convex annulus \(\tilde{A}\) with a boundary consisting of a Legendrian ruling on \(T_{1}^{\prime}\) and a Legendrian ruling on \(T_{2}^{\prime}\). Let \(N\) be a neighborhood of \(T_{1}^{\prime}\cup T_{2}^{\prime}\cup\tilde{A}\), and \(\partial N=T_{1}^{\prime}\cup T_{2}^{\prime}\cup\tilde{T}\). Then through edge-rounding, \(\tilde{T}\) has slope \(-\lceil-\frac{1}{t_{1}}\rceil-\lceil-\frac{1}{t_{2}}\rceil+1\) (as seen from \(T_{0}\)). We obtain a thickened torus with boundary slopes \(t_{0}\) and \(-\lceil-\frac{1}{t_{1}}\rceil-\lceil-\frac{1}{t_{2}}\rceil+1\), and a boundary parallel convex torus with slope \(\infty\). Thus, from \(t_{0}\leq-\lceil-\frac{1}{t_{1}}\rceil-\lceil-\frac{1}{t_{2}}\rceil+1\), it follows that the Giroux torsion of this thickened torus is at least \(1\). Hence the lemma holds.
**Lemma 2.7**.: _Let \(\xi\) be an appropriate tight contact structure on \(\Sigma\times S^{1}\). Assume that each \(T_{i}\) is minimal convex with dividing curves of finite slope \(t_{0}\), \(-\frac{1}{t_{1}}\) and \(-\frac{1}{t_{2}}\). Suppose \(\xi\) has no \(0\)-twisting vertical Legendrian circle. Then there exist collar neighborhoods \(L_{i}^{\prime\prime}\) of \(T_{i}\) for \(i=1,2\) such that \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L_{1}^{\prime\prime} \cup L_{2}^{\prime\prime}\) and the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(\lceil-\frac{1}{t_{1}}\rceil\) and \(\lceil-\frac{1}{t_{2}}\rceil\)._
Proof.: We modify the Legendrian rulings on \(T_{0}\) and \(T_{i}\) to have infinite slopes. Consider a convex vertical annulus \(A\) whose boundary consists of Legendrian rulings on \(T_{0}\) and \(T_{i}\). The dividing set of \(A\) intersects \(T_{0}\) in exactly \(2\) points. The dividing set of \(A\) intersects \(T_{i}\), \(i=1,2\), in exactly \(2|t_{i}|\) points. As \(\xi\) has no \(0\)-twisting vertical Legendrian circle, there exist dividing arcs on \(A\) that connect the two boundary components of \(A\). If there is a boundary parallel dividing curve on \(A\), then its endpoints must belong to \(A\cap T_{i}\) for some \(i=1,2\). We perform a bypass (attached from the back of \(T_{i}\)) to eliminate it. Applying Lemma 2.1, we obtain a thickened torus \(L_{i}^{\prime\prime}\) for \(i=1,2\) such that \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L_{1}^{\prime\prime} \cup L_{2}^{\prime\prime}\) and the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(\lceil-\frac{1}{t_{1}}\rceil\) and \(\lceil-\frac{1}{t_{2}}\rceil\).
**Lemma 2.8**.: _Suppose \(t_{1}<0\) and \(t_{2}<0\), then there are at most_
\[\left\{\begin{array}{ll}2t_{1}t_{2}-2t_{1}-2t_{2}+2,&\text{ if }t_{0}\geq 2,\\ t_{1}t_{2}-2t_{1}-2t_{2}+2,&\text{ if }t_{0}=1,\\ -2t_{1}-2t_{2}+2,&\text{ if }t_{0}=0,\\ -t_{0}t_{1}t_{2},&\text{ if }t_{0}\leq-1,\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes._
Proof.: By Lemma 2.2, if \(t_{0}\geq 0\), then every appropriate tight contact structure on \(\Sigma\times S^{1}\) has a \(0\)-twisting vertical Legendrian circle.
If an appropriate tight contact structure \(\xi\) on \(\Sigma\times S^{1}\) has a \(0\)-twisting vertical Legendrian circle, then Lemma 2.3 tells us that \(\Sigma\times S^{1}\) can be factored into \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\), where the boundary slopes of \(\Sigma^{\prime}\times S^{1}\) are all \(\infty\), the boundary slopes of \(L^{\prime}_{0}\) are \(\infty\) and \(t_{0}\), and the boundary slopes of \(L^{\prime}_{i}\) are \(\infty\) and \(-\frac{1}{t_{i}}\) for \(i=1,2\). Moreover, there are \(2\) minimally twisting tight contact structures on \(L^{\prime}_{0}\).
If \(t_{i}<0\), \(i=1,2\), we have
\[\begin{bmatrix}0&-1\\ 1&1\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}=\begin{bmatrix}-1\\ 1\end{bmatrix},\begin{bmatrix}0&-1\\ 1&1\end{bmatrix}\begin{bmatrix}-t_{i}\\ 1\end{bmatrix}=\begin{bmatrix}-1\\ -t_{i}+1\end{bmatrix}.\]
The thickened torus \(L^{\prime}_{i}\) is a continued fraction block with \(-t_{i}\) basic slices, and therefore admits \(-t_{i}+1\) minimally twisting tight contact structures.
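Here a vector \(\begin{bmatrix}q\\ p\end{bmatrix}\) records the slope \(\frac{p}{q}\), so the change of coordinates above sends the boundary slopes \(\infty\) and \(-\frac{1}{t_{i}}\) of \(L^{\prime}_{i}\) to \(-1\) and \(t_{i}-1\); a minimally twisting toric annulus with integer boundary slopes \(-1\) and \(t_{i}-1\leq-2\) is a single continued fraction block made of \(-t_{i}\) basic slices, which explains the count just given.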
By applying Lemma 2.5, we can conclude that there are at most \(2t_{1}t_{2}-2t_{1}-2t_{2}+2\) appropriate tight contact structures on \(\Sigma\times S^{1}\) if \(t_{0}\geq 2\). If \(t_{0}=1\) and there are basic slices in \(L^{\prime}_{i}\) which have the same signs as that of \(L^{\prime}_{0}\) for \(i=1,2\), then after shuffling, we can assume that \(L^{\prime}_{0}\), \(B^{\prime}_{1}\) and \(B^{\prime}_{2}\) have the same signs. According to Lemma 2.4, a tight contact structure that has positive basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\) is isotopic to a tight contact structure which is obtained by changing a positive basic slice in \(L^{\prime}_{i}\) for \(i=0,1,2\) to a negative basic slice. Therefore, there are at most \(t_{1}t_{2}-2t_{1}-2t_{2}+2\) appropriate tight contact structures on \(\Sigma\times S^{1}\) if \(t_{0}=1\). If \(t_{0}=0\), then by Lemma 2.4, a contact structure which has positive basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\) is overtwisted. Thus, there are at most \(-2t_{1}-2t_{2}+2\) appropriate tight contact structures on \(\Sigma\times S^{1}\) if \(t_{0}=0\).
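As a consistency check on the count for \(t_{0}\geq 2\), note that the bound factors as \(2t_{1}t_{2}-2t_{1}-2t_{2}+2=2(1-t_{1})(1-t_{2})\): there are \(2\) choices of sign for \(L^{\prime}_{0}\) and \(1-t_{i}\) choices for the continued fraction block \(L^{\prime}_{i}\), \(i=1,2\). For instance, the illustrative values \(t_{1}=t_{2}=-1\) give \(2\cdot 2\cdot 2=8\).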
Suppose \(t_{0}\leq-1\). By Lemma 2.6, there are no appropriate tight contact structures having a \(0\)-twisting vertical Legendrian circle. We consider the appropriate tight contact structures without a \(0\)-twisting vertical Legendrian circle. By Lemma 2.7, we can factorize \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L^{\prime\prime}_{1}\cup L^{\prime\prime}_{2}\), where the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(1\) and \(1\), and the boundary slopes of \(L^{\prime\prime}_{i}\) are \(1\) and \(-\frac{1}{t_{i}}\) for \(i=1,2\). Since \(t_{0}<0\), by [13, Lemma 5.1], there are exactly \(-t_{0}\) tight contact structures on \(\Sigma^{\prime\prime}\times S^{1}\) without any \(0\)-twisting vertical Legendrian circle. By [12, Theorem 2.2], there are \(-t_{i}\) minimally twisting tight contact structures on \(L^{\prime\prime}_{i}\) for \(i=1,2\). Therefore, there are at most \(-t_{0}t_{1}t_{2}\) tight contact structures on \(\Sigma\times S^{1}\) without any \(0\)-twisting vertical Legendrian circle and with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\).
To denote the \(2t_{1}t_{2}-2t_{1}-2t_{2}+2\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle, we use the decorations \((\pm)(\underbrace{\pm\cdots\pm}_{-t_{1}})(\underbrace{\pm\cdots\pm}_{-t_{2}})\). See Figure 4 for an example. The sign in the first bracket corresponds to the sign of the basic slice \(L^{\prime}_{0}\), while the signs in the second and the third brackets correspond to the signs of the basic slices in \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\), respectively. We order the basic slices in \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\) from the innermost boundary to the outermost boundary. As both \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\) are continued fraction blocks, the signs in the second and the third brackets can be shuffled. For example, the decorations \((+)(+--)(--)\) and \((+)(--+)(--)\) denote the same contact structures.
### \(t_{1}>0\) and \(t_{2}>0\)
**Lemma 2.9**.: _Suppose \(t_{1}=t_{2}=1\); then there are exactly_
\[\left\{\begin{array}{ll}8,&\mbox{if }t_{0}\geq 6,\\ 7,&\mbox{if }t_{0}=5,\\ 6,&\mbox{if }t_{0}=4,\\ 4-t_{0},&\mbox{if }t_{0}\leq 3,\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes._
Proof.: The boundary slopes of \(\Sigma\times S^{1}\) are \(t_{0}\), \(-1\) and \(-1\). If \(t_{0}\leq 3\), according to [13, Lemma 5.1], there are exactly \(4-t_{0}\) appropriate tight contact structures on \(\Sigma\times S^{1}\) without \(0\)-twisting vertical Legendrian circle. By Lemma 2.6, there are no appropriate tight contact structures on \(\Sigma\times S^{1}\) with \(0\)-twisting vertical Legendrian circle. If \(t_{0}\geq 4\), then any tight contact structure on \(\Sigma\times S^{1}\) has a \(0\)-twisting vertical Legendrian circle. By applying [13, Lemma 5.1] again, we can conclude that when \(t_{0}=4\), there are exactly \(6\) appropriate tight contact structures on \(\Sigma\times S^{1}\). When \(t_{0}=5\), there are exactly \(7\) appropriate tight contact structures on \(\Sigma\times S^{1}\). When \(t_{0}\geq 6\), there are exactly \(8\) appropriate tight contact structures on \(\Sigma\times S^{1}\).
We use the decorations \((\pm)(\pm)(\pm)\) to denote the \(8\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle.
**Lemma 2.10**.: _Suppose \(t_{1}>1\) and \(t_{2}=1\), then there are at most_
\[\left\{\begin{array}{ll}12,&\mbox{if $t_{0}\geq 5$ and $t_{1}=2$},\\ 10,&\mbox{if $t_{0}=4$ and $t_{1}=2$},\\ 8,&\mbox{if $t_{0}=3$ and $t_{1}=2$},\\ 16,&\mbox{if $t_{0}\geq 5$ and $t_{1}\geq 3$},\\ 14,&\mbox{if $t_{0}=4$ and $t_{1}\geq 3$},\\ 12,&\mbox{if $t_{0}=3$ and $t_{1}\geq 4$},\\ 11,&\mbox{if $t_{0}=t_{1}=3$},\\ 6-2t_{0},&\mbox{if $t_{0}\leq 2$},\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes._
Proof.: The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}\) and \(s_{2}=-1\).
If \(t_{0}\geq 3\), then, by Lemma 2.2, every appropriate tight contact structure on \(\Sigma\times S^{1}\) has a \(0\)-twisting vertical Legendrian circle.
If \(t_{1}>1\), we have
\[\begin{bmatrix}1&1\\ -2&-1\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}=\begin{bmatrix}1\\ -1\end{bmatrix},\begin{bmatrix}1&1\\ -2&-1\end{bmatrix}\begin{bmatrix}t_{1}\\ -1\end{bmatrix}=\begin{bmatrix}t_{1}-1\\ -2t_{1}+1\end{bmatrix},\]
\[\frac{-2t_{1}+1}{t_{1}-1}=[-3,\underbrace{-2,\cdots,-2}_{t_{1}-2}].\]
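For instance, with the illustrative value \(t_{1}=3\) this reads \(\frac{-5}{2}=[-3,-2]\), using the continued fraction convention \([a_{0},a_{1},\ldots,a_{k}]=a_{0}-\frac{1}{a_{1}-\frac{1}{\cdots-\frac{1}{a_{k}}}}\), since \(-3-\frac{1}{-2}=-\frac{5}{2}\).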
If \(t_{1}=2\), then \(L^{\prime}_{1}\) is a continued fraction block with two basic slices with slopes \(-\frac{1}{2}\), \(0\) and \(\infty\), and thus admits exactly \(3\) tight contact structures. If \(t_{1}\geq 3\), then \(L^{\prime}_{1}\) consists of two continued fraction blocks, each of which has one basic slice. The slopes are \(-\frac{1}{t_{1}}\), \(0\) and \(\infty\). Therefore, it admits exactly \(4\) tight contact structures.
If \(t_{0}\geq 5\) and \(t_{1}=2\), then there are at most \(2\times 3\times 2=12\) tight contact structures. The number of such contact structures depends on the signs of the basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\). If \(t_{0}=4\) and \(t_{1}=2\), then there are at most \(10\) tight contact structures by deleting \(2\) duplications. If \(t_{0}\leq 3\) and \(t_{1}=2\), then there are at most \(8\) tight contact structures by deleting \(4\) overtwisted cases.
If \(t_{0}\geq 5\) and \(t_{1}\geq 3\), then there are at most \(2\times 4\times 2=16\) tight contact structures. The number of such contact structures depends on the signs of the basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\). If \(t_{0}=4\) and \(t_{1}\geq 3\), then there are at most \(14\) tight contact structures by deleting \(2\) duplications. If \(t_{0}\leq 3\) and \(t_{1}\geq 3\), then there are at most \(12\) tight contact structures by deleting \(4\) overtwisted cases.
Suppose \(t_{0}\leq 2\). By Lemma 2.6, there are no appropriate tight contact structures with a \(0\)-twisting vertical Legendrian circle. We consider the appropriate tight contact structures without a \(0\)-twisting vertical Legendrian circle. By Lemma 2.7, we can factorize \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L^{\prime\prime}_{1}\), where the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(0\) and \(-1\), and the boundary slopes of \(L^{\prime\prime}_{1}\) are \(0\) and \(-\frac{1}{t_{1}}\). Since \(t_{0}<3\), according to [13, Lemma 5.1], there are exactly \(3-t_{0}\) tight contact structures on \(\Sigma^{\prime\prime}\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle. There are \(2\) minimally twisting tight contact structures on \(L^{\prime\prime}_{1}\). Therefore, there are at most \(2(3-t_{0})=6-2t_{0}\) appropriate tight contact structures on \(\Sigma\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle and with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\).
If \(t_{1}=2\), then we denote the \(12\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle using the decorations \((\pm)(\pm\pm)(\pm)\). For \(t_{1}\geq 3\), we use the decorations \((\pm)((\pm)(\pm))(\pm)\) to denote the \(16\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle. In the latter case, \(((\pm)(\pm))\) refers to the two signed basic slices in \(L^{\prime}_{1}\) that do not form a continued fraction block.
If \(t_{0}=t_{1}=3\) and \(t_{2}=1\), we claim the two decorations \((+)((-)(+))(+)\) and \((-)((+)(-))(-)\) denote the same contact structure on \(\Sigma\times S^{1}\). As before, there is a convex vertical annulus \(A\) such that \(\partial A\) consists of a Legendrian ruling on \(T_{0}\) and a Legendrian ruling on \(T_{2}\), and the dividing set on \(A\) runs from one boundary component to the other. If we cut \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\) along \(A\) we will obtain a thickened torus admitting a factorization into two basic slices with slopes \(-\frac{1}{3}\), \(0\) and \(0\), \(-1\), and opposite signs. Here the slope \(-1\) is obtained by \(-s_{0}-s_{2}+1=-3-(-1)+1\). The three slopes can be transformed into \(\frac{1}{3}\), \(\frac{1}{2}\) and \(1\) as follows,
\[\begin{bmatrix}2&3\\ 1&2\end{bmatrix}\begin{bmatrix}3\\ -1\end{bmatrix}=\begin{bmatrix}3\\ 1\end{bmatrix},\begin{bmatrix}2&3\\ 1&2\end{bmatrix}\begin{bmatrix}1\\ 0\end{bmatrix}=\begin{bmatrix}2\\ 1\end{bmatrix},\begin{bmatrix}2&3\\ 1&2\end{bmatrix}\begin{bmatrix}-1\\ 1\end{bmatrix}=\begin{bmatrix}1\\ 1\end{bmatrix}.\]
So these two basic slices form a continued fraction block and can be interchanged. Similar to the argument in [13, Page 135], this leads to an exchange between \((+)((-)(+))(+)\) and \((-)((+)(-))(-)\) while preserving the isotopy classes of contact structures.
**Lemma 2.11**.: _Suppose \(t_{1}>1\) and \(t_{2}>1\), then there are at most_
\[\left\{\begin{array}{ll}18,&\text{if $t_{0}\geq 4$ and $t_{1}=t_{2}=2$},\\ 14,&\text{if $t_{0}=3$ and $t_{1}=t_{2}=2$},\\ 10,&\text{if $t_{0}=2$ and $t_{1}=t_{2}=2$},\\ 24,&\text{if $t_{0}\geq 4$ and $t_{1}\geq 3,t_{2}=2$},\\ 20,&\text{if $t_{0}=3$ and $t_{1}\geq 3,t_{2}=2$},\\ 16,&\text{if $t_{0}=2$ and $t_{1}\geq 3,t_{2}=2$},\\ 32,&\text{if $t_{0}\geq 4$ and $t_{1}\geq 3,t_{2}\geq 3$},\\ 28,&\text{if $t_{0}=3$ and $t_{1}\geq 3,t_{2}\geq 3$},\\ 24,&\text{if $t_{0}=2$ and $t_{1}\geq 3,t_{2}\geq 3$},\\ 8-4t_{0},&\text{if $t_{0}\leq 1$},\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes._
Proof.: If \(t_{0}\geq 2\), then, by Lemma 2.2, every appropriate tight contact structure on \(\Sigma\times S^{1}\) has a \(0\)-twisting vertical Legendrian circle.
If \(t_{0}\geq 4\) and \(t_{1}=t_{2}=2\), then there are at most \(2\times 3\times 3=18\) tight contact structures. If \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}=2\), then there are at most \(2\times 4\times 3=24\) tight contact structures. If \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\), then there are at most \(2\times 4\times 4=32\) tight contact structures. The number of such contact structures depends on the signs of the basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\). For the other cases, the upper bound can be obtained by deleting the duplications or the overtwisted contact structures.
Suppose \(t_{0}\leq 1\). By Lemma 2.6, there are no appropriate tight contact structures with a \(0\)-twisting vertical Legendrian circle. We consider the appropriate tight contact structures without a \(0\)-twisting vertical Legendrian circle. By Lemma 2.7, we can factorize \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L_{1}^{\prime\prime} \cup L_{2}^{\prime\prime}\), where the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(0\) and \(0\), and the boundary slopes of \(L_{i}^{\prime\prime}\) are \(0\) and \(-\frac{1}{t_{i}}\). Since \(t_{0}\leq 1\), according to [13, Lemma 5.1], there are exactly \(2-t_{0}\) tight contact structures on \(\Sigma^{\prime\prime}\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle. There are \(2\) minimally twisting tight contact structures on \(L_{i}^{\prime\prime}\). Therefore, there are at most \(8-4t_{0}\) appropriate tight contact structures on \(\Sigma\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle and with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\).
If \(t_{1}=t_{2}=2\), then the \(18\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle are denoted using the decorations \((\pm)(\pm\pm)(\pm\pm)\). For \(t_{1}\geq 3\) and \(t_{2}=2\), we use the decorations \((\pm)((\pm)(\pm))(\pm\pm)\) to represent the \(24\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle. When \(t_{1}\geq 3\) and \(t_{2}\geq 3\), we use the decorations \((\pm)((\pm)(\pm))((\pm)(\pm))\) to signify the \(32\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle. See Figure 5 for an example.
### \(t_{1}<0\) and \(t_{2}>0\)
**Lemma 2.12**.: _Suppose \(t_{1}<0\) and \(t_{2}=1\), then there are at most_
\[\left\{\begin{array}{ll}4-4t_{1},&\mbox{if }t_{0}\geq 4,\\ 4-3t_{1},&\mbox{if }t_{0}=3,\\ 4-2t_{1},&\mbox{if }t_{0}=2,\\ t_{0}t_{1}-2t_{1},&\mbox{if }t_{0}\leq 1,\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes._
Proof.: The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}>0\) and \(s_{2}=-1\).
If \(t_{0}\geq 2\), then, by Lemma 2.2, every appropriate tight contact structure on \(\Sigma\times S^{1}\) contains a \(0\)-twisting vertical Legendrian circle.
If \(t_{0}\geq 4\), \(t_{1}<0\) and \(t_{2}=1\), then there are at most \(2\times(1-t_{1})\times 2=4(1-t_{1})\) tight contact structures. They depend on the signs of the basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\). For the other cases, the upper bound can be obtained by deleting the duplications or the overtwisted contact structures.
Suppose \(t_{0}\leq 1\). By Lemma 2.6, there are no appropriate tight contact structures with a \(0\)-twisting vertical Legendrian circle. We consider the appropriate tight contact structures without a \(0\)-twisting vertical Legendrian circle. By Lemma 2.7, we can factorize \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L^{\prime\prime}_{1}\), where the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(0\) and \(1\), and the boundary slopes of \(L^{\prime\prime}_{1}\) are \(0\) and \(-\frac{1}{t_{1}}\). Since \(t_{0}\leq 1\), according to [13, Lemma 5.1], there are exactly \(2-t_{0}\) tight contact structures on \(\Sigma^{\prime\prime}\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle. There are \(-t_{1}\) minimally twisting tight contact structures on \(L^{\prime\prime}_{1}\). Therefore, there are at most \(-2t_{1}+t_{0}t_{1}\) tight contact structures on \(\Sigma\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle and with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\).
We use the decorations \((\pm)(\underbrace{\pm\cdots\pm}_{-t_{1}})(\pm)\) to denote the \(4-4t_{1}\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle.
**Lemma 2.13**.: _Suppose \(t_{1}<0\) and \(t_{2}>1\), then there are at most_
\[\left\{\begin{array}{ll}6-6t_{1},&\text{if }t_{0}\geq 3,t_{2}=2,\\ 6-4t_{1},&\text{if }t_{0}=2,t_{2}=2,\\ 6-2t_{1},&\text{if }t_{0}=1,t_{2}=2,\\ 8-8t_{1},&\text{if }t_{0}\geq 3,t_{2}\geq 3,\\ 8-6t_{1},&\text{if }t_{0}=2,t_{2}\geq 3,\\ 8-4t_{1},&\text{if }t_{0}=1,t_{2}\geq 4,\\ 8-3t_{1},&\text{if }t_{0}=1,t_{2}=3,\\ 2t_{0}t_{1}-2t_{1},&\text{if }t_{0}\leq 0,t_{2}\geq 3,\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes._
Proof.: The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}>0\) and \(s_{2}=-\frac{1}{t_{2}}\in(-1,0)\).
If \(t_{0}\geq 1\), then, by Lemma 2.2, every appropriate tight contact structure on \(\Sigma\times S^{1}\) contains a \(0\)-twisting vertical Legendrian circle.
If \(t_{0}\geq 3\), \(t_{1}<0\) and \(t_{2}=2\), then there are at most \(2\times(1-t_{1})\times 3=6(1-t_{1})\) appropriate tight contact structures. If \(t_{0}\geq 3\), \(t_{1}<0\) and \(t_{2}\geq 3\), then there are at most \(2\times(1-t_{1})\times 4=8(1-t_{1})\) appropriate tight contact structures. The number of such contact structures depends on the signs of the basic slices in \(L^{\prime}_{i}\) for \(i=0,1,2\). For the other cases, the upper bound can be obtained by deleting the duplications or the overtwisted contact structures.
Suppose \(t_{0}\leq 0\). By Lemma 2.6, there are no appropriate tight contact structures with a \(0\)-twisting vertical Legendrian circle. We consider the appropriate tight contact structures without a \(0\)-twisting vertical Legendrian circle. By Lemma 2.7, we can factorize \(\Sigma\times S^{1}=\Sigma^{\prime\prime}\times S^{1}\cup L^{\prime\prime}_{1}\cup L ^{\prime\prime}_{2}\), where the boundary slopes of \(\Sigma^{\prime\prime}\times S^{1}\) are \(t_{0}\), \(1\) and \(0\), the boundary slopes of \(L^{\prime\prime}_{1}\) are \(1\) and \(-\frac{1}{t_{1}}\), and the boundary slopes of \(L^{\prime\prime}_{2}\) are \(0\) and \(-\frac{1}{t_{2}}\). Since \(t_{0}\leq 0\), according
to [13, Lemma 5.1], there are exactly \(1-t_{0}\) tight contact structures on \(\Sigma^{\prime\prime}\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle. There are \(-t_{1}\) minimally twisting tight contact structures on \(L^{\prime\prime}_{1}\). There are \(2\) minimally twisting tight contact structures on \(L^{\prime\prime}_{2}\). Therefore, there are at most \(-2t_{1}+2t_{0}t_{1}\) appropriate tight contact structures on \(\Sigma\times S^{1}\) without a \(0\)-twisting vertical Legendrian circle and with boundary slopes \(s_{0}=t_{0}\), \(s_{i}=-\frac{1}{t_{i}}\) for \(i=1,2\).
When \(t_{2}=2\), the \(6-6t_{1}\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle are denoted using the decorations \((\pm)(\underbrace{\pm\cdots\pm}_{-t_{1}})(\pm\pm)\). For \(t_{2}\geq 3\), we use the decorations \((\pm)(\underbrace{\pm\cdots\pm}_{-t_{1}})((\pm)(\pm))\) to represent the \(8-8t_{1}\) contact structures on \(\Sigma\times S^{1}\) with a \(0\)-twisting vertical Legendrian circle.
If \(t_{0}=1\), \(t_{1}<0\) and \(t_{2}=3\), we claim the two decorations
\[(+)(\underbrace{+\cdots+}_{l}\underbrace{-\cdots-}_{k})((-)(+))\text{ and }(-)( \underbrace{-\cdots-}_{k+1}\underbrace{+\cdots+}_{l-1})((+)(-)),\]
where \(l\geq 1,k\geq 0,k+l=-t_{1}\), denote the same contact structure on \(\Sigma\times S^{1}\). We consider \(L^{\prime}_{0}\cup B^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\) in \(\Sigma\times S^{1}\) with the first decoration, where \(B^{\prime}_{1}\) is the innermost basic slice in \(L^{\prime}_{1}\) with two boundary slopes \(\infty\) and \(1\). We can assume the sign of \(B^{\prime}_{1}\) is positive since \(L^{\prime}_{1}\) is a continued fraction block containing at least one positive basic slice. As before, there is a convex vertical annulus \(A\) such that \(\partial A\) consists of a Legendrian ruling on \(T_{0}\) and a Legendrian ruling on the boundary component of \(B^{\prime}_{1}\) with slope \(1\), and the dividing set on \(A\) runs from one boundary component to the other. If we cut \(L^{\prime}_{0}\cup B^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\) along \(A\), we will obtain a thickened torus admitting a factorization into two basic slices with slopes \(-\frac{1}{3}\), \(0\) and \(0\), \(-1\), and opposite signs. Here the slope \(-1\) is obtained by \(-1-1+1\). Using the same reasoning as in the proof of Lemma 2.10, we have an exchange from the first decoration to the second without altering the isotopy classes of contact structures.
### \(t_{1}=0\)
**Lemma 2.14**.: _Suppose \(t_{1}=0\); then there are at most_
\[\left\{\begin{array}{ll}8,&\text{if }t_{2}\geq 3,\\ 6,&\text{if }t_{2}=2,\\ 4,&\text{if }t_{2}=1,\\ 2-2t_{2},&\text{if }t_{2}\leq 0,\end{array}\right.\]
_appropriate tight contact structures on \(\Sigma\times S^{1}\) with the given boundary slopes. All of them have \(0\)-twisting vertical Legendrian circles._
Proof.: Since \(s_{1}=\infty\), the appropriate tight contact structures on \(\Sigma\times S^{1}\) always contain \(0\)-twisting vertical Legendrian circles.
The boundary slopes of \(\Sigma\times S^{1}\) are \(t_{0}\), \(\infty\) and \(-\frac{1}{t_{2}}\). We can factorize \(\Sigma\times S^{1}=L^{\prime}_{0}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S ^{1}\), where the boundary slopes of \(\Sigma^{\prime}\times S^{1}\) are all \(\infty\), the boundary slopes of \(L^{\prime}_{0}\) are \(\infty\) and \(t_{0}\), and the boundary slopes of \(L^{\prime}_{2}\) are \(\infty\) and \(-\frac{1}{t_{2}}\). There are exactly \(2\) minimally twisting tight contact structures on \(L^{\prime}_{0}\). If \(t_{2}\leq 0,=1,=2\) or \(\geq 3\), then there are \(1-t_{2}\), \(2\), \(3\) or \(4\) minimally twisting
tight contact structures on \(L^{\prime}_{2}\), respectively. Therefore, if \(t_{2}\leq 0\), \(=1\), \(=2\) or \(\geq 3\), then there are \(2-2t_{2}\), \(4\), \(6\) or \(8\) appropriate tight contact structures on \(\Sigma\times S^{1}\), respectively.
If \(t_{2}\geq 3\), the \(8\) contact structures on \(\Sigma\times S^{1}\) are denoted using the decorations \((\pm)((\pm)(\pm))\). For \(t_{2}=2\), we use the decorations \((\pm)(\pm\pm)\) to represent the \(6\) contact structures on \(\Sigma\times S^{1}\). When \(t_{2}=1\), we use the decorations \((\pm)(\pm)\) to denote the \(4\) contact structures on \(\Sigma\times S^{1}\). If \(t_{2}\leq 0\), we use the decorations \((\pm)(\underbrace{\pm\cdots\pm}_{-t_{2}})\) to denote the \(2-2t_{2}\) contact structures on \(\Sigma\times S^{1}\).
### Some tight contact structures
We use the notation \((T^{2}\times[0,1],s_{0},s_{1})\) to represent a basic slice with boundary slope \(s_{i}\) on \(T^{2}\times\{i\}\), \(i=0,1\). There is a geodesic in the Farey graph connecting \(s_{0}\) and \(s_{1}\). Moreover, any boundary parallel convex torus of this slice has a dividing slope within the range of \([s_{0},s_{1}]\) corresponding to the clockwise arc on the boundary of the hyperbolic disk.
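For example, since \(\infty\) and any integer \(t_{0}\) are joined by an edge of the Farey graph, \((T^{2}\times[0,1],\infty,t_{0})\) is a basic slice; slices of this form appear as \(L^{\prime}_{0}\) in the factorizations of Section 2.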
**Lemma 2.15**.: _There are \(6\) tight contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(t_{0}\), \(-\frac{1}{t_{1}}\) and \(-\frac{1}{t_{2}}\), where \(t_{1},t_{2}\neq 0\), and satisfying that_
* \(\Sigma\times S^{1}\) _can be decomposed as_ \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S ^{1}\)_, where_ \(\Sigma^{\prime}\times S^{1}\) _have boundary slopes_ \(\infty\)_,_
* \(L^{\prime}_{0}\) _is a basic slice,_
* \(L^{\prime}_{i}\)_,_ \(i=1,2\)_, is a thickened torus, all of whose basic slices have the same signs,_
* _the signs of_ \(L^{\prime}_{0}\)_,_ \(L^{\prime}_{1}\) _and_ \(L^{\prime}_{2}\) _are_ \(\pm\mp\mp\)_,_ \(\pm\mp\pm\) _or_ \(\pm\pm\mp\)_._
Proof.: Suppose they have \(0\)-twisting vertical Legendrian circles. By Lemma 2.3, each of them can be decomposed as \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\), where the boundary slopes of \(\Sigma^{\prime}\times S^{1}\) are all \(\infty\), \(L^{\prime}_{0}\) is a basic slice \((T^{2}\times[0,1];\infty,t_{0})\), and the innermost basic slice \(B^{\prime}_{i}\) of \(L^{\prime}_{i}\) is \((T^{2}\times[0,1];\infty,\lceil-\frac{1}{t_{i}}\rceil)\) for \(i=1,2\). Using Part 2 of [13, Lemma 5.1], we know that there are \(6\) universally tight contact structures on \(L^{\prime}_{0}\cup B^{\prime}_{1}\cup B^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\) which are determined by the signs of \(L^{\prime}_{0}\), \(B^{\prime}_{1}\) and \(B^{\prime}_{2}\). Note that the three signs are not the same. Each of them can be extended to a universally tight \(\tilde{\Sigma}\times S^{1}\) whose boundary slopes are all \(\infty\). The contact structure on \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\) can be embedded into \(\tilde{\Sigma}\times S^{1}\). Hence the given contact structure on \(\Sigma\times S^{1}\) is tight.
**Lemma 2.16**.: _There are \(4\) tight contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(t_{0}\), \(\infty\) and \(-\frac{1}{t_{2}}\), where \(t_{2}\neq 0\), and satisfying that_
* \(\Sigma\times S^{1}\) _can be decomposed as_ \(L^{\prime}_{0}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\)_, where_ \(\Sigma^{\prime}\times S^{1}\) _have boundary slopes_ \(\infty\)_,_
* \(L^{\prime}_{0}\) _is a basic slice,_
* \(L^{\prime}_{2}\) _is a thickened torus, all of whose basic slices have the same signs,_
* _the signs of_ \(L^{\prime}_{0}\) _and_ \(L^{\prime}_{2}\) _are_ \(\pm\pm\) _or_ \(\pm\mp\)_._
Proof.: Using [13, Lemma 5.2], the proof is similar to that of Lemma 2.15.
## 3. Methods of construction of strongly exceptional Legendrian \(A_{3}\) links
In practice, contact surgery diagrams are a common tool for representing strongly exceptional Legendrian links. Several works, such as [9], [10], [11], [15] and [8], employ this technique. In this paper, we utilize contact surgery diagrams to construct strongly exceptional Legendrian \(A_{3}\) links. It is worth noting that if an exceptional Legendrian \(A_{3}\) link can be represented by a contact surgery diagram, then it must be strongly exceptional. This is because conducting contact surgery along such a Legendrian \(A_{3}\) link results in a tight contact 3-manifold, whereas a Giroux torsion domain in \(\Sigma\times S^{1}\) gives rise to an overtwisted disk after the surgery. Given a contact surgery diagram for an exceptional Legendrian \(A_{3}\) link, the Thurston-Bennequin invariants and rotation numbers can be calculated using [14, Lemma 6.6]. Furthermore, the \(d_{3}\)-invariant of the ambient contact 3-sphere can be obtained according to [4].
Additionally, we introduce three other methods. The first method involves performing contact connected sums.
**Lemma 3.1**.: _Let \(K_{0}^{\prime}\cup K_{1}\) be a strongly exceptional Legendrian Hopf link in a contact \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(t_{1},r_{1})=(1,0)\) or \((t_{0}^{\prime},r_{0}^{\prime})=(0,\pm 1),t_{1}\geq 2,r_{1}=\pm(t_{1}-1)\). Let \(K_{0}^{\prime\prime}\cup K_{2}\) be a strongly exceptional Legendrian Hopf link in a contact \(S^{3}\). Then the Legendrian connected sum \((K_{0}^{\prime}\#K_{0}^{\prime\prime})\cup K_{1}\cup K_{2}\) is a strongly exceptional Legendrian \(A_{3}\) link in a contact \(S^{3}\)._
Proof.: Suppose \(t_{0}^{\prime}=0,t_{1}\geq 1\). Let \(t_{0}^{\prime\prime}\) be the Thurston-Bennequin invariant of \(K_{0}^{\prime\prime}\). If the pair \((t_{0}^{\prime\prime},t_{2})\) is not \((2,1)\) or \((1,2)\), then any strongly exceptional Legendrian Hopf link \(K_{0}^{\prime\prime}\cup K_{2}\) has a contact surgery diagram [10]. As a result, \((K_{0}^{\prime}\#K_{0}^{\prime\prime})\cup K_{1}\cup K_{2}\) has a contact surgery diagram as shown in the middle and right of Figure 6. We then perform contact \((-1)\)-surgery along \(K_{1}\) and cancel the contact \((+1)\)-surgery along the Legendrian unknots. By ignoring the Legendrian unknots with contact \((-1)\)-surgeries, we obtain a contact surgery diagram for the Legendrian link \(K_{0}^{\prime\prime}\cup K_{2}\). As per [10], some contact surgeries along \(K_{0}^{\prime\prime}\cup K_{2}\) will result in closed tight contact 3-manifolds. Since contact \((-1)\)-surgery on a closed contact 3-manifold preserves tightness [16], some contact surgery along \((K^{\prime}_{0}\#K^{\prime\prime}_{0})\cup K_{1}\cup K_{2}\) will yield a tight contact 3-manifold. Therefore, \((K^{\prime}_{0}\#K^{\prime\prime}_{0})\cup K_{1}\cup K_{2}\) is strongly exceptional.
Figure 6. In the middle and right picture, for \(t_{1}\) even, \(K_{0}^{\prime}\) and \(K_{1}\) bear the same orientation, for \(t_{1}\) odd, the opposite one.
In the case where \((t^{\prime\prime}_{0},t_{2})\) is either \((2,1)\) or \((1,2)\), [10] tells us that its exterior is a universally tight thickened torus and can therefore be contact embedded into a tight contact \(T^{3}\). The contact \((-1)\)-surgery along links in a tight contact \(T^{3}\) results in a tight 3-manifold. As such, the contact \((-1)\)-surgery along links in the exterior of \(K^{\prime\prime}_{0}\cup K_{2}\) will also yield a tight 3-manifold. Therefore, the contact \((-1)\)-surgery along \(K_{1}\) will result in a tight contact 3-manifold. This means that \((K^{\prime}_{0}\#K^{\prime\prime}_{0})\cup K_{1}\cup K_{2}\) is strongly exceptional.
Assume \(t^{\prime}_{0}=t_{1}=1\). If the pair \((t^{\prime\prime}_{0},t_{2})\) is not \((2,1)\) or \((1,2)\), then \((K^{\prime}_{0}\#K^{\prime\prime}_{0})\cup K_{1}\cup K_{2}\) will have a contact surgery diagram as shown in the left of Figure 6. We then perform contact \((-\frac{1}{2})\)-surgery along \(K_{1}\) and cancel the contact \((+1)\)-surgery along the two Legendrian unknots. By doing so, we obtain a contact surgery diagram for the strongly exceptional Legendrian link \(K^{\prime\prime}_{0}\cup K_{2}\). This means that the exterior of \((K^{\prime}_{0}\#K^{\prime\prime}_{0})\cup K_{1}\cup K_{2}\) is appropriate tight.
If the pair \((t^{\prime\prime}_{0},t_{2})\) is either \((2,1)\) or \((1,2)\), we can apply the same argument as in the previous case.
We recall that the \(d_{3}\)-invariant of the contact connected sum of two contact 3-spheres \((S^{3},\xi)\) and \((S^{3},\xi^{\prime})\) is given by \(d_{3}(\xi)+d_{3}(\xi^{\prime})+\frac{1}{2}\). If \(K^{\prime\prime}_{0}\) has Thurston-Bennequin invariant \(t^{\prime\prime}_{0}\) and rotation number \(r^{\prime\prime}_{0}\), then \(K^{\prime}_{0}\#K^{\prime\prime}_{0}\) has Thurston-Bennequin invariant \(t^{\prime}_{0}+t^{\prime\prime}_{0}+1\) and rotation number \(r^{\prime}_{0}+r^{\prime\prime}_{0}\).
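For instance, with values that will appear in Section 4, connected summing a Legendrian Hopf link in \((S^{3},\xi_{\frac{1}{2}})\) with \(t^{\prime}_{0}=1\), \(r^{\prime}_{0}=0\) and one in \((S^{3},\xi_{-\frac{1}{2}})\) with \(t^{\prime\prime}_{0}=3\), \(r^{\prime\prime}_{0}=\pm 4\) yields an ambient contact 3-sphere with \(d_{3}=\frac{1}{2}+(-\frac{1}{2})+\frac{1}{2}=\frac{1}{2}\), and the component \(K^{\prime}_{0}\#K^{\prime\prime}_{0}\) has Thurston-Bennequin invariant \(1+3+1=5\) and rotation number \(\pm 4\).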
The second method involves adding local Legendrian meridians. The following lemma is straightforward.
**Lemma 3.2**.: _Suppose \(K_{0}\cup K_{2}\) is a strongly exceptional Legendrian Hopf link. Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\). Then \(K_{0}\cup K_{1}\cup K_{2}\) is a strongly exceptional Legendrian \(A_{3}\) link with \(t_{1}<0\) and \(r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\}\)._
The third method involves extending an (appropriate) tight contact \(\Sigma\times S^{1}\) admitting \(0\)-twisting vertical Legendrian circle to an overtwisted contact \(S^{3}\).
Suppose an (appropriate) tight contact structure \(\xi\) on \(\Sigma\times S^{1}\) has a \(0\)-twisting vertical Legendrian circle \(\gamma\). We attach three contact solid tori \(D^{2}_{i}\times S^{1}\), \(i=0,1,2\), to \((\Sigma\times S^{1},\xi)\) such that \(\partial D^{2}_{0}\) is identified to \(h\), \(\partial D^{2}_{1}\) is identified to \(c_{1}\), and \(\partial D^{2}_{2}\) is identified to \(c_{2}\). Then the resulting manifold \(\Sigma\times S^{1}\cup D^{2}_{0}\times S^{1}\cup D^{2}_{1}\times S^{1}\cup D^{2 }_{2}\times S^{1}\) is diffeomorphic to \(S^{3}\).
If the contact structure on \(D^{2}_{i}\times S^{1}\) has a minimal convex boundary with slope given by a longitude (i.e., the dividing set of the convex boundary intersects the meridional circle in exactly two points), then it admits a unique tight contact structure. Additionally, the core of such contact solid torus is Legendrian.
Since the dividing set of \(T_{i}\) intersects the meridional disk of \(D^{2}_{i}\times S^{1}\) in exactly two points, the contact structure \(\xi\) on \(\Sigma\times S^{1}\) uniquely extends to a contact structure on \(S^{3}\). However, since \(\partial D^{2}_{0}\) is identified to \(h\), the Legendrian vertical circle \(\gamma\) bounds an overtwisted disk in \(S^{3}\). Therefore, the resulting contact structure on \(S^{3}\) is overtwisted.
**Lemma 3.3**.: _Let \(\xi\) be an (appropriate) tight contact structure on \(\Sigma\times S^{1}\) that admits a \(0\)-twisting vertical Legendrian circle. Extend \(\xi\) to a contact 3-sphere as above by adding three tight contact solid tori. Let \(K_{i}\), \(i=0,1,2\), be the cores of the three attached contact solid tori. Then \(K_{0}\cup K_{1}\cup K_{2}\) is a (strongly) exceptional Legendrian \(A_{3}\) link in an overtwisted contact 3-sphere._
Moreover, we have the following observation.
**Lemma 3.4**.: _Let \(\xi_{1}\) and \(\xi_{2}\) be two tight contact structures on \(\Sigma\times S^{1}\) with \(0\)-twisting vertical Legendrian circles. Suppose they both have minimal convex boundaries with slopes \(t_{0}\), \(-\frac{1}{t_{1}}\) and \(-\frac{1}{t_{2}}\). Suppose their factorizations \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S ^{1}\) (or \(L^{\prime}_{0}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\) when \(t_{1}=0\)) differ only in the signs of basic slices in \(L^{\prime}_{0}\cup L^{\prime}_{1}\cup L^{\prime}_{2}\) (or \(L^{\prime}_{0}\cup L^{\prime}_{2}\) when \(t_{1}=0\)). If \(\xi_{1}\) is appropriate tight, then so is \(\xi_{2}\)._
Proof.: This is because the computation of Giroux torsion of an embedded torus \(T\) in a contact 3-manifold only depends on the slopes of the convex tori parallel to \(T\).
**Lemma 3.5**.: _Suppose \(\mathcal{L}\) is an exceptional Legendrian \(A_{3}\) link whose exterior contains a \(0\)-twisting Legendrian vertical circle. Then the component \(K_{0}\) of \(\mathcal{L}\) can always be destabilized._
Proof.: There is a basic slice \(L^{\prime}_{0}\) in the exterior of \(\mathcal{L}\) which is \((T^{2}\times[0,1],\infty,t_{0})\). We can find a basic slice \((T^{2}\times[0,1],t_{0}+1,t_{0})\) in \(L^{\prime}_{0}\). So the component \(K_{0}\) can be destabilized.
## 4. Realizations of strongly exceptional Legendrian \(A_{3}\) links
In this section, we construct strongly exceptional Legendrian \(A_{3}\) links.
### \(t_{1}<0\) and \(t_{2}<0\)
The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}\in(0,1]\) and \(s_{2}=-\frac{1}{t_{2}}\in(0,1]\).
**Lemma 4.1**.: _For any \(t_{0}\in\mathbb{Z}\), there are \(6\) exceptional Legendrian \(A_{3}\) links whose exteriors have \(0\)-twisting vertical Legendrian circles, and have decorations \(\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(\underbrace{-\cdots-}_{-t_{2}})\),_
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(\underbrace{+\cdots+}_{-t_{2}})\text{ and }\pm(+)(\underbrace{+\cdots+}_{-t_{1}})(\underbrace{-\cdots-}_{-t_{2}}).\]
_Their rotation numbers are_
\[r_{0}=\pm(t_{0}-1),r_{1}=\pm(1-t_{1}),r_{2}=\pm(1-t_{2});r_{0}=\pm(t_{0}-1), r_{1}=\pm(1-t_{1}),r_{2}=\pm(t_{2}+1);\]
\[r_{0}=\pm(t_{0}-1),r_{1}=\pm(t_{1}+1),r_{2}=\pm(1-t_{2}).\]
_The corresponding \(d_{3}\)-invariants are independent of \(t_{0}\) if \(t_{1}\) and \(t_{2}\) are fixed._
Proof.: The first statement follows from Lemma 2.15 and Lemma 3.3. The rotation number of a Legendrian knot in a contact 3-sphere is the evaluation of the relative Euler class on a Seifert surface of the knot. We compute the rotation numbers in a similar way as that in [8, Section 2.5]. The Seifert surface of \(K_{0}\) can be obtained by capping the pair of pants \(\Sigma\) by two disks along the boundary components \(c_{1}\) and \(c_{2}\). The Seifert surface of \(K_{i}\), \(i=1,2\), is a union of a meridian disk of \(K_{0}\) and an annulus. For instance, if the signs of \(L^{\prime}_{0}\), \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\) are \(+--\), see Figure 4 for an example, then the rotation numbers can be computed using relative Euler
class as follows. We denote \(\frac{a}{b}\ominus\frac{c}{d}\) to be \(\frac{a-c}{b-d}\), and \(\frac{a}{b}\bullet\frac{c}{d}\) to be \(ad-bc\)[8, Section 2.5]. The denominators are assumed non-negative. The rotation number of \(K_{0}\) is
\[r_{0} =-(\frac{-1}{-t_{1}}\ominus\frac{-1}{-t_{1}-1})\bullet\frac{0}{1}-(\frac{-1}{-t_{1}-1}\ominus\frac{-1}{-t_{1}-2})\bullet\frac{0}{1}-\cdots-(\frac{-1}{1}\ominus\frac{-1}{0})\bullet\frac{0}{1}\] \[-(\frac{-1}{-t_{2}}\ominus\frac{-1}{-t_{2}-1})\bullet\frac{0}{1}-(\frac{-1}{-t_{2}-1}\ominus\frac{-1}{-t_{2}-2})\bullet\frac{0}{1}-\cdots-(\frac{-1}{1}\ominus\frac{-1}{0})\bullet\frac{0}{1}\] \[+(\frac{1}{0}\ominus\frac{t_{0}}{1})\bullet\frac{0}{1}=1-t_{0}.\]
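Here every summand coming from \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\) vanishes, since each difference such as \(\frac{-1}{-t_{1}}\ominus\frac{-1}{-t_{1}-1}\) equals \(\frac{0}{1}\) and \(\frac{0}{1}\bullet\frac{0}{1}=0\); only the last term \((\frac{1}{0}\ominus\frac{t_{0}}{1})\bullet\frac{0}{1}=\frac{1-t_{0}}{-1}\bullet\frac{0}{1}=1-t_{0}\) contributes.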
The rotation number of \(K_{1}\) is
\[r_{1}=(\frac{-t_{0}}{1}\ominus\frac{-1}{0})\bullet\frac{1}{0}-(\frac{1}{0} \ominus\frac{1}{1})\bullet\frac{1}{0}-(\frac{1}{1}\ominus\frac{1}{2})\bullet \frac{1}{0}-\cdots-(\frac{1}{-t_{1}-1}\ominus\frac{1}{-t_{1}})\bullet\frac{1} {0}=t_{1}-1.\]
The rotation number of \(K_{2}\) is
\[r_{2}=(\frac{-t_{0}}{1}\ominus\frac{-1}{0})\bullet\frac{1}{0}-(\frac{1}{0} \ominus\frac{1}{1})\bullet\frac{1}{0}-(\frac{1}{1}\ominus\frac{1}{2})\bullet \frac{1}{0}-\cdots-(\frac{1}{-t_{2}-1}\ominus\frac{1}{-t_{2}})\bullet\frac{1} {0}=t_{2}-1.\]
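As a quick numerical check with the illustrative value \(t_{1}=-2\), the formula for \(r_{1}\) has three summands, \((\frac{1-t_{0}}{1})\bullet\frac{1}{0}=-1\), \(-(\frac{0}{-1})\bullet\frac{1}{0}=-1\) and \(-(\frac{0}{-1})\bullet\frac{1}{0}=-1\), so \(r_{1}=-3=t_{1}-1\) as claimed.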
In the computation above, when calculating \(r_{0}\), it is necessary to reverse the signs of the dividing slopes in the thickened tori \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\). Similarly, when calculating \(r_{1}\) and \(r_{2}\), the signs of the dividing slopes in the thickened torus \(L^{\prime}_{0}\) should be reversed.
The last statement follows directly from Lemma 3.5.
In a similar way, we can use relative Euler classes and the given decorations to compute the rotation numbers of any other Legendrian \(A_{3}\) links whose exteriors contain a \(0\)-twisting vertical Legendrian circle.
Proof of Theorem 1.1.: Recall that the numbers of strongly exceptional Legendrian \(A_{3}\) links have upper bounds listed in Lemma 2.8. We will show that these upper bounds can be attained.
**Lemma 4.2**.: _The oriented link \(K_{0}\cup K_{1}\cup K_{2}\) in the surgery diagram in Figure 7 is a topological \(A_{3}\) link in \(S^{3}\)._
Proof.: The proof is similar to that of [10, Lemma 5.1, part (i)].
(1) Suppose \(t_{0}\geq 2\).
Figure 7. For \(n\) even, \(K_{0}\) and \(K_{i}\), \(i=1,2\), bear the same orientation, for \(n\) odd, the opposite one.
**Lemma 4.3**.: _If \(t_{0}\geq 2,t_{1}<0,t_{2}<0\), there exist \(2t_{1}t_{2}-2t_{1}-2t_{2}+2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are_
\[r_{0}=\pm(t_{0}-1),r_{i}\in\pm\{t_{i}+1,t_{i}+3,\cdots,-t_{i}+1\},i=1,2.\]
Proof.: There are \(2t_{1}t_{2}-2t_{1}-2t_{2}+2\) strongly exceptional Legendrian \(A_{3}\) links as illustrated in Figure 8. According to Lemma 4.2, \(K_{0}\cup K_{1}\cup K_{2}\) forms a topological \(A_{3}\) link. By performing the same calculations as in the proof of Theorem 1.2 (b1) in [10], we can determine that their rotation numbers are as listed. The corresponding \(d_{3}\)-invariant is \(\frac{1}{2}\). The strong exceptionality property arises from carrying out contact \((-1)\)-surgery along \(K_{0}\) which cancels the contact \((+1)\)-surgery.
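Note that the number of listed triples of rotation numbers matches this bound: for each of the \(2\) choices of overall sign, \(r_{1}\) takes \(1-t_{1}\) values and \(r_{2}\) takes \(1-t_{2}\) values, and \(2(1-t_{1})(1-t_{2})=2t_{1}t_{2}-2t_{1}-2t_{2}+2\).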
(2) Suppose \(t_{0}=1\).
**Lemma 4.4**.: _If \(t_{0}=1,t_{1}<0,t_{2}<0\), then there exist \(t_{1}t_{2}-2t_{1}-2t_{2}+2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are_
\[r_{0}=0,r_{i}\in\{t_{i}+1,t_{i}+3,\cdots,-t_{i}+1\},i=1,2;\]
\[r_{0}=0,r_{1}=t_{1}-1,r_{2}\in\{t_{2}-1,t_{2}+1,\cdots,-t_{2}-1\};\]
\[r_{0}=0,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=t_{2}-1.\]
Proof.: There are \(t_{1}t_{2}-2t_{1}-2t_{2}+2\) strongly exceptional Legendrian \(A_{3}\) links as shown in Figure 9. By [10, Lemma 5.1, part (iii), Figure 3], we can show that it is a topological \(A_{3}\) link. By performing the same calculations as in the proof of Theorem 1.2 (b2) in [10], we can determine that their rotation numbers are as listed. Moreover, the corresponding \(d_{3}\)-invariant is \(\frac{1}{2}\).
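Again the count agrees with Lemma 2.8: the three families listed in Lemma 4.4 contain \((1-t_{1})(1-t_{2})\), \(1-t_{2}\) and \(-t_{1}\) triples of rotation numbers, respectively, and \((1-t_{1})(1-t_{2})+(1-t_{2})+(-t_{1})=t_{1}t_{2}-2t_{1}-2t_{2}+2\).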
(3) Suppose \(t_{0}=0\).
**Lemma 4.5**.: _If \(t_{0}=0,t_{1}<0,t_{2}<0\), then there exist \(-2t_{1}-2t_{2}+2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are_
\[r_{0}=\pm 1,r_{1}=\pm(t_{1}-1),r_{2}\in\{t_{2}+1,t_{2}+3,\cdots,-t_{2}- 1\};\]
\[r_{0}=\pm 1,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}-1);\]
\[r_{0}=\pm 1,r_{1}=\pm(t_{1}-1),r_{2}=\pm(t_{2}-1).\]
Proof.: By [10, Theorem 1.2], there are two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0},r_{0})=(0,\pm 1)\), \(t_{1}<0\) and \(r_{1}=\pm(t_{1}-1)\). Let \(K_{2}\) be a local Legendrian meridian of \(K_{0}\). Then by Lemma 3.2 there are \(-2t_{2}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are
\[r_{0}=\pm 1,r_{1}=\pm(t_{1}-1),r_{2}\in\{t_{2}+1,t_{2}+3,\cdots,-t_{2}-1\}.\]
Similarly, there are \(-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are
\[r_{0}=\pm 1,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}-1).\]
Moreover, by Lemma 4.1 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 1,\pm(t_{1}-1),\pm(t_{2}-1))\).
(4) Suppose \(t_{0}<0\).
**Lemma 4.6**.: _If \(t_{i}<0\) for \(i=0,1,2\), then there exist \(-t_{0}t_{1}t_{2}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{st})\) whose rotation numbers are_
\[r_{i}\in\{t_{i}+1,t_{i}+3,\cdots,-t_{i}-1\},\text{ for }i=0,1,2.\]
Figure 10. A Legendrian \(A_{3}\) link in \((S^{3},\xi_{st})\).
Proof.: By stabilizations of the Legendrian \(A_{3}\) link shown in Figure 10, we obtain \(-t_{0}t_{1}t_{2}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{st})\). Their rotation numbers are as listed.
So there are exactly \(-t_{0}t_{1}t_{2}\) Legendrian \(A_{3}\) links in contact 3-spheres whose complements are appropriate tight if \(t_{i}<0\) for \(i=0,1,2\).
The proof of Theorem 1.1 is completed.
### \(t_{1}>0\) and \(t_{2}>0\)
The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}\in[-1,0)\) and \(s_{2}=-\frac{1}{t_{2}}\in[-1,0)\).
**Lemma 4.7**.: _For any \(t_{0}\in\mathbb{Z}\), there are \(6\) exceptional Legendrian \(A_{3}\) links whose exteriors have \(0\)-twisting vertical Legendrian circles, and the signs of basic slices in \(L^{\prime}_{0},L^{\prime}_{1},L^{\prime}_{2}\) are \(\pm(+--),\pm(++-)\) and \(\pm(+-+)\), respectively. Their rotation numbers are_
\[r_{0}=\pm(t_{0}+3),r_{1}=\pm(t_{1}+1),r_{2}=\pm(t_{2}+1);r_{0}=\pm(t_{0}-1),r _{1}=\pm(1-t_{1}),r_{2}=\pm(t_{2}+1);\]
\[r_{0}=\pm(t_{0}-1),r_{1}=\pm(t_{1}+1),r_{2}=\pm(1-t_{2}).\]
_The corresponding \(d_{3}\)-invariants are independent of \(t_{0}\) if \(t_{1}\) and \(t_{2}\) are fixed._
Proof.: The first statement can be inferred from Lemma 2.15 and Lemma 3.3. For example, when the signs of \(L^{\prime}_{0}\), \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\) are \(+--\), the rotation numbers can be computed using the relative Euler class as follows. See Figure 5 for the decoration. The rotation number of \(K_{0}\) is
\[r_{0}=-(\frac{1}{t_{1}}\ominus\frac{0}{1})\bullet\frac{0}{1}-(\frac{0}{1} \ominus\frac{-1}{0})\bullet\frac{0}{1}-(\frac{1}{t_{2}}\ominus\frac{0}{1}) \bullet\frac{0}{1}-(\frac{0}{1}\ominus\frac{-1}{0})\bullet\frac{0}{1}+(\frac{ 1}{0}\ominus\frac{t_{0}}{1})\bullet\frac{0}{1}=-t_{0}-3.\]
The rotation number of \(K_{1}\) is
\[r_{1}=(\frac{-t_{0}}{1}\ominus\frac{-1}{0})\bullet\frac{1}{0}-(\frac{1}{0} \ominus\frac{0}{1})\bullet\frac{1}{0}-(\frac{0}{1}\ominus\frac{-1}{t_{1}}) \bullet\frac{1}{0}=-t_{1}-1.\]
The rotation number of \(K_{2}\) is
\[r_{2}=(\frac{-t_{0}}{1}\ominus\frac{-1}{0})\bullet\frac{1}{0}-(\frac{1}{0} \ominus\frac{0}{1})\bullet\frac{1}{0}-(\frac{0}{1}\ominus\frac{-1}{t_{2}}) \bullet\frac{1}{0}=-t_{2}-1.\]
#### 4.2.1. \(t_{1}=t_{2}=1\)
Proof of Theorem 1.2.: The upper bounds on the numbers of strongly exceptional Legendrian \(A_{3}\) links are given by Lemma 2.9. We will show that these upper bounds can be attained.
(1) Suppose \(t_{0}\geq 6\).
**Lemma 4.8**.: _If \(t_{0}\geq 6,t_{1}=t_{2}=1\), then there exist \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}+3),\pm 2,\pm 2;-\frac{3}{2}),(\pm(t_{0}-1),\pm 2,0;\frac{1}{2}),( \pm(t_{0}-1),0,\pm 2;\frac{1}{2}),(\pm(t_{0}-5),0,0;\frac{5}{2}).\]
Proof.: There exist \(8\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 11. Using the trick of Lemma 4.2, the upper branch in each of the surgery diagrams can be topologically reduced to a single unknot, and the lower two branches in each of the surgery diagrams can be split. Furthermore, using the trick in the proof of [10, Lemma 5.1, part (ii), Figure 5], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers are
\[r_{0}=\pm(t_{0}+3),r_{1}=\pm 2,r_{2}=\pm 2;r_{0}=\pm(t_{0}-1),r_{1}=\pm 2,r_{2}=0;\]
\[r_{0}=\pm(t_{0}-1),r_{1}=0,r_{2}=\pm 2;r_{0}=\pm(t_{0}-5),r_{1}=r_{2}=0.\]
The corresponding \(d_{3}\)-invariants are \(-\frac{3}{2},\frac{1}{2},\frac{1}{2},\frac{5}{2}\). These \(d_{3}\)-invariants are calculated using the algorithm described in [4].
(2) Suppose \(t_{0}=5\).
**Lemma 4.9**.: _If \(t_{0}=5,t_{1}=t_{2}=1\), then there exist \(7\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 4,0,\pm 2;\frac{1}{2}),(0,0,0;\frac{5}{2}),(\pm 4,\pm 2,0;\frac{1}{2}),( \pm 8,\pm 2,\pm 2;-\frac{3}{2}).\]
Proof.: By [10, Theorem 1.2, (c1), (c2)], there is a Legendrian Hopf link \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(t_{1},r_{1})=(1,0)\), two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(3,\pm 4),(t_{2},r_{2})=(1,\pm 2)\), and a Legendrian Hopf link \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(3,0),(t_{2},r_{2})=(1,0)\). Connected summing \(K_{0}^{\prime}\) and \(K_{0}^{\prime\prime}\), by Lemma 3.1, we obtain \(3\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=5,t_{1}=t_{2}=1\). Their rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 4,0,\pm 2;\frac{1}{2})\) and \((0,0,0;\frac{5}{2})\).
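The \(d_{3}\)-invariants quoted here are consistent with \(d_{3}\) adding under this connected sum with a correction of \(+\frac{1}{2}\) (a standard normalisation, not restated in this section):
\[\tfrac{1}{2}+\bigl(-\tfrac{1}{2}\bigr)+\tfrac{1}{2}=\tfrac{1}{2},\qquad\tfrac{1}{2}+\tfrac{3}{2}+\tfrac{1}{2}=\tfrac{5}{2}.\]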
By exchanging the roles of \(K_{1}\) and \(K_{2}\), we obtain \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 4,\pm 2,0)\).
Figure 11. \(t_{0}\geq 6\), \(t_{1}=t_{2}=1\). For \(t_{0}\) even, \(K_{0}\) and \(K_{i}\), \(i=1,2\), bear the same orientation, for \(t_{0}\) odd, the opposite one.
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 8,\pm 2,\pm 2).\) Their exteriors have decorations \(\pm(+)(-)(-)\).
(3) Suppose \(t_{0}=4\).
**Lemma 4.10**.: _If \(t_{0}=4,t_{1}=t_{2}=1\), then there exist \(6\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 3,0,\pm 2;\frac{1}{2}),(\pm 3,\pm 2,0;\frac{1}{2}),(\pm 7,\pm 2,\pm 2;- \frac{3}{2}).\]
Proof.: Suppose \(t_{0}=4\). By [10, Theorem 1.2], there is a Legendrian Hopf link \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(t_{1},r_{1})=(1,0)\), and two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(2,\pm 3),(t_{2},r_{2})=(1,\pm 2).\) Connected summing \(K^{\prime}_{0}\) and \(K^{\prime\prime}_{0}\), by Lemma 3.1, we obtain \(2\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=4,t_{1}=t_{2}=1\) in \((S^{3},\xi_{\frac{1}{2}})\). Their rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 3,0,\pm 2).\)
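Reading off the numbers used in these connected sum arguments (rather than restating Lemma 3.1), the contact framings appear to combine as \(t_{0}=t^{\prime}_{0}+t^{\prime\prime}_{0}+1\) and the rotation numbers as \(r_{0}=r^{\prime}_{0}+r^{\prime\prime}_{0}\). In the present case this gives
\[t_{0}=1+2+1=4,\qquad r_{0}=0\pm 3=\pm 3,\]
consistent with the values \((\pm 3,0,\pm 2)\) above.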
By exchanging the roles of \(K_{1}\) and \(K_{2}\) we obtain \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 3,\pm 2,0).\)
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 7,\pm 2,\pm 2).\) Their exteriors have decorations \(\pm(+)(-)(-)\).
(4) Suppose \(t_{0}\leq 3\).
**Lemma 4.11**.: _If \(t_{0}\leq 3,t_{1}=t_{2}=1\), then there exist \(4-t_{0}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers are_
\[r_{0}\in\{t_{0}-3,t_{0}-1,\cdots,3-t_{0}\},r_{1}=r_{2}=0.\]
Proof.: Suppose \(t_{0}\leq 3\). By [10, Theorem 1.2, (c1), (b2)], there is a Legendrian Hopf link \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(t_{1},r_{1})=(1,0),\) and a Legendrian Hopf link \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \(t^{\prime\prime}_{0}\leq 1,r^{\prime\prime}_{0}\in\{t^{\prime\prime}_{0}-1,t^{ \prime\prime}_{0}+1,\cdots,-t^{\prime\prime}_{0}+1\},(t_{2},r_{2})=(1,0).\) By Lemma 3.1, we can construct \(4-t_{0}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with \(t_{0}\leq 3,t_{1}=t_{2}=1\). Their rotation numbers are as listed.
These \(4-t_{0}\) strongly exceptional Legendrian \(A_{3}\) links are obtained by stabilizations along \(K_{0}\) of the Legendrian \(A_{3}\) link with \(t_{0}=3,t_{1}=t_{2}=1\).
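For example, for \(t_{0}=0\) this gives \(4-t_{0}=4\) links, obtained by stabilizing \(K_{0}\) three times, with
\[r_{0}\in\{-3,-1,1,3\},\qquad r_{1}=r_{2}=0.\]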
The proof of Theorem 1.2 is completed.
#### 4.2.2. \(t_{1}\geq 2\) and \(t_{2}=1\)
The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}\) and \(s_{2}=-1\).
Proof of Theorem 1.3.: The upper bound of strongly exceptional Legendrian \(A_{3}\) links is given by Lemma 2.10. We will show that these upper bounds can be attained except in the case \((t_{0},t_{1},t_{2})=(3,3,1)\).
(1) Suppose \(t_{0}\geq 5\) and \(t_{1}=2\).
Figure 12. \(t_{0}\geq 5\), \(t_{1}=2\), \(t_{2}=1\). For \(t_{0}\) odd, \(K_{0}\) and \(K_{2}\) bear the same orientation, while \(K_{0}\) and \(K_{1}\) bear the opposite orientation. For \(t_{0}\) even, \(K_{0}\) and \(K_{1}\) bear the same orientation, while \(K_{0}\) and \(K_{2}\) bear the opposite orientation.
**Lemma 4.12**.: _If \(t_{0}\geq 5\), \(t_{1}=2\) and \(t_{2}=1\), then there exist \(12\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}-5),\mp 1,0;\frac{5}{2}),(\pm(t_{0}-3),\pm 1,0;\frac{5}{2}),(\pm(t_{0 }-1),\pm 3,0;\frac{1}{2}),\]
\[(\pm(t_{0}-1),\mp 1,\pm 2;\frac{1}{2}),(\pm(t_{0}+1),\pm 1,\pm 2;\frac{1}{2}),( \pm(t_{0}+3),\pm 3,\pm 2;-\frac{3}{2}).\]
Proof.: There exist \(12\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 12. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (c3)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
(2) Suppose \(t_{0}=4\) and \(t_{1}=2\).
**Lemma 4.13**.: _If \(t_{0}=4\), \(t_{1}=2\) and \(t_{2}=1\), then there exist \(10\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 5,\pm 1,\pm 2;\frac{1}{2}),(\pm 3,\mp 1,\pm 2;\frac{1}{2}),(\pm 1,\pm 1,0;\frac{5}{2}),(\pm 3,\pm 3,0;\frac{1}{2}),(\pm 7,\pm 3,\pm 2;-\frac{3}{2}).\]
Proof.: By [10, Theorem 1.2, (c2), (d)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\pm 1),(t_{1},r_{1})=(2,\pm 1)\), two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(3,\pm 4),(t_{2},r_{2})=(1,\pm 2)\), and a Legendrian Hopf link \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(3,0),(t_{2},r_{2})=(1,0)\). By Lemma 3.1, we can obtain \(6\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 5,\pm 1,\pm 2;\frac{1}{2}),(\pm 3,\mp 1,\pm 2;\frac{1}{2})\) and \((\pm 1,\pm 1,0;\frac{5}{2})\).
By [10, Theorem 1.2, (c1), (c2)], there is a Legendrian Hopf link \(K^{\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(t_{2},r_{2})=(1,0)\), and four Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{1}\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(t_{1},r_{1})=(2,\pm 3)\) in \((S^{3},\xi_{-\frac{1}{2}})\) or \((2,\pm 1)\) in \((S^{3},\xi_{\frac{3}{2}})\). By Lemma 3.1, we can obtain \(2\) more strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm 3,0;\frac{1}{2})\).
By Lemma 4.7 and Lemma 3.4, there exist \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 7,\pm 3,\pm 2)\). Their exteriors have decorations \(\pm(+)(--)(-)\).
So there exist \(10\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=4,t_{1}=2,t_{2}=1\). As a corollary, the \(10\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=4,s_{1}=-\frac{1}{2},s_{2}=-1\) listed in Lemma 2.10 are all appropriate tight.
(3) Suppose \(t_{0}=3\) and \(t_{1}=2\).
**Lemma 4.14**.: _If \(t_{0}=3\), \(t_{1}=2\) and \(t_{2}=1\), then there exist \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 2,\pm 3,0;\frac{1}{2}),(\pm 2,\mp 1,\pm 2;\frac{1}{2}),(\pm 4,\pm 1,\pm 2; \frac{1}{2}),(\pm 6,\pm 3,\pm 2;-\frac{3}{2}).\]
Proof.: By [10, Theorem 1.2, (c2), (c1)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(1,\pm 2),(t_{1},r_{1})=(2,\pm 3)\), and one Legendrian Hopf link \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(t_{2},r_{2})=(1,0)\). By Lemma 3.1, we can obtain \(2\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 2,\pm 3,0;\frac{1}{2})\).
By [10, Theorem 1.2, (d), (c2)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(0,\pm 1),(t_{1},r_{1})=(2,\pm 1)\), and two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(2,\pm 3),\,(t_{2},r_{2})=(1,\pm 2)\). By Lemma 3.1, we can obtain \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 2,\mp 1,\pm 2;\frac{1}{2})\) and \((\pm 4,\pm 1,\pm 2;\frac{1}{2})\).
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 6,\pm 3,\pm 2).\) Their exteriors have decorations \(\pm(+)(--)(-)\).
So there are \(8\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}=2,t_{2}=1\). As a corollary, the \(8\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=3,s_{1}=-\frac{1}{2},s_{2}=-1\) listed in Lemma 2.10 are all appropriate tight.
(4) Suppose \(t_{0}\geq 5\) and \(t_{1}\geq 3\).
**Lemma 4.15**.: _If \(t_{0}\geq 5\), \(t_{1}\geq 3\) and \(t_{2}=1\), then there exist \(16\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}+1),\pm(t_{1}-1),\pm 2;\frac{1}{2}),(\pm(t_{0}+3),\pm(t_{1}+1),\pm 2 ;-\frac{3}{2}),\]
\[(\pm(t_{0}-1),\pm(1-t_{1}),\pm 2;\frac{1}{2}),(\pm(t_{0}+1),\pm(3-t_{1}),\pm 2 ;\frac{1}{2}),(\pm(t_{0}-3),\pm(t_{1}-1),0;\frac{5}{2}),\]
\[(\pm(t_{0}-1),\pm(t_{1}+1),0;\frac{1}{2}),(\pm(t_{0}-5),\pm(1-t_{1}),0;\frac{5 }{2}),(\pm(t_{0}-3),\pm(3-t_{1}),0;\frac{5}{2}).\]
Proof.: There exist \(16\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 13. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (c3), (c4)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
(5) Suppose \(t_{0}=4\) and \(t_{1}\geq 3\).
**Lemma 4.16**.: _If \(t_{0}=4\), \(t_{1}\geq 3\) and \(t_{2}=1\), then there exist \(14\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 3,\pm(t_{1}+1),0;\frac{1}{2}),(\mp 1,\pm(1-t_{1}),0;\frac{5}{2}),(\pm 1, \pm(3-t_{1}),0;\frac{5}{2}),\]
\[(\pm 5,\pm(t_{1}-1),\pm 2;\frac{1}{2}),(\pm 3,\pm(1-t_{1}),\pm 2;\frac{1}{2}),( \pm 7,\pm(t_{1}+1),\pm 2;-\frac{3}{2}),(\pm 5,\pm(3-t_{1}),\pm 2;\frac{1}{2}).\]
Figure 13. \(t_{0}\geq 5\), \(t_{1}\geq 3\), \(t_{2}=1\). For \(t_{0}+t_{1}\) even, \(K_{0}\) and \(K_{1}\) bear the same orientation, and for \(t_{0}+t_{1}\) odd, the opposite one. For \(t_{0}\) odd, \(K_{0}\) and \(K_{2}\) bear the same orientation, and for \(t_{0}\) even, the opposite one. If a component is a Legendrian push-off of some \(K_{i}\), then its contact surgery coefficient is \(+1\); otherwise its contact surgery coefficient is \(-1\).
Proof.: By [10, Theorem 1.2, (c3), (c1)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\pm 3),t_{1}\geq 3,r_{1}=\pm(t_{1}+1)\), two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(2,\pm 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-1)\), two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(2,\mp 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-3)\), and one Legendrian Hopf link \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(t_{2},r_{2})=(1,0)\). By Lemma 3.1, we can obtain \(6\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm(t_{1}+1),0;\frac{1}{2}),(\mp 1,\pm(1-t_{1}),0;\frac{5}{2})\) and \((\pm 1,\pm(3-t_{1}),0;\frac{5}{2}).\)
By [10, Theorem 1.2, (d), (c2)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\pm 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-1)\), two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(3,\pm 4),(t_{2},r_{2})=(1,\pm 2)\). By Lemma 3.1, we can obtain \(4\) more strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 5,\pm(t_{1}-1),\pm 2;\frac{1}{2})\) and \((\pm 3,\pm(1-t_{1}),\pm 2;\frac{1}{2})\).
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers are \((\pm 7,\pm(t_{1}+1),\pm 2).\) The decorations of their exteriors are \(\pm(+)((-)(-))(-)\).
There are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 5,\pm(3-t_{1}),\pm 2).\) The decorations of their exteriors are \(\pm(+)((-)(+))(-)\). These exteriors can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(4,-\frac{1}{2},-1\) and decorations \(\pm(+)(-+)(-)\). This can be achieved by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{1}},-\frac{1}{t_{1}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{1}\), as per the Gluing Theorem [13, Theorem 1.3]. So these exteriors are appropriate tight.
(6) Suppose \(t_{0}=3\) and \(t_{1}\geq 3\).
**Lemma 4.17**.: _If \(t_{0}=3\), \(t_{1}\geq 3\) and \(t_{2}=1\), then there exist \(12\) (\(11\) if \(t_{1}=3\)) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 2,\pm(t_{1}+1),0;\frac{1}{2}),(0,\pm(3-t_{1}),0;\frac{5}{2}),(\pm 4,\pm(t_ {1}-1),\pm 2;\frac{1}{2}),(\pm 2,\pm(1-t_{1}),\pm 2;\frac{1}{2}),\]
\[(\pm 6,\pm(t_{1}+1),\pm 2;-\frac{3}{2}),(\pm 4,\pm(3-t_{1}),\pm 2;\frac{1}{2}).\]
Proof.: By [10, Theorem 1.2, (c3), (c2), (c1)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(1,\pm 2),t_{1}\geq 3,r_{1}=\pm(t_{1}+1)\), two (one if \(t_{1}=3\)) Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(1,0),t_{1}\geq 3,r_{1}=\pm(t_{1}-3)\), and one Legendrian Hopf link \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(t_{2},r_{2})=(1,0)\). By Lemma 3.1, we can obtain \(4\) (\(3\) if \(t_{1}=3\)) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 2,\pm(t_{1}+1),0;\frac{1}{2})\) and \((0,\pm(3-t_{1}),0;\frac{5}{2}).\)
By [10, Theorem 1.2, (d), (c2)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\pm 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-1)\), and two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(2,\pm 3),(t_{2},r_{2})=(1,\pm 2)\). By Lemma 3.1, we can obtain \(4\) more strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}\geq 3,t_{2}=1\), whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 4,\pm(t_{1}-1),\pm 2;\frac{1}{2})\) and \((\pm 2,\pm(1-t_{1}),\pm 2;\frac{1}{2})\).
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 6,\pm(t_{1}+1),\pm 2).\) The decorations of their exteriors are \(\pm(+)((-)(-))(-)\).
There are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 4,\pm(3-t_{1}),\pm 2).\) The decorations of their exteriors are \(\pm(+)((-)(+))(-)\). These exteriors are appropriate tight since they can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(3,-\frac{1}{2},-1\) and decorations \(\pm(+)(-+)(-)\) by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{1}},-\frac{1}{t_{1}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{1}\).
So there are exactly \(12\) (resp. \(11\)) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3\), \(t_{1}\geq 4\) (resp. \(t_{1}=3\)), \(t_{2}=1\). If \(t_{0}=t_{1}=3\) and \(t_{2}=1\), then the decorations \((+)((-)(+))(+)\) and \((-)((+)(-))(-)\) correspond to the same Legendrian \(A_{3}\) link with rotation numbers \(r_{0}=r_{1}=r_{2}=0\).
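The count in Lemma 4.17 can be tallied from the four constructions above:
\[\underbrace{4}_{\text{first construction}}+\underbrace{4}_{\text{second construction}}+\underbrace{2}_{\xi_{-\frac{3}{2}}}+\underbrace{2}_{\xi_{\frac{1}{2}}}=12,\]
with the first contribution dropping to \(3\) when \(t_{1}=3\), since \((0,\pm(3-t_{1}),0)\) degenerates to \((0,0,0)\); this gives the total \(11\) in that case.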
(7) Suppose \(t_{0}\leq 2\).
**Lemma 4.18**.: _If \(t_{0}\leq 2\), \(t_{1}>1\) and \(t_{2}=1\), then there exist \(6-2t_{0}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers are_
\[r_{0}\in\pm\{t_{0}-1,t_{0}+1,\cdots,-t_{0}+1,-t_{0}+3\},r_{1}=\pm(t_{1}-1),r_ {2}=0.\]
Proof.: By [10, Theorem 1.2, (b1), (c1)], there is a Legendrian Hopf link \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \(t_{0}^{\prime}\leq 1,r_{0}^{\prime}\in\pm\{t_{0}^{\prime}+1,t_{0}^{\prime}+3, \cdots,-t_{0}^{\prime}-1,-t_{0}^{\prime}+1\}\), \(t_{1}\geq 2\), \(r_{1}=\pm(t_{1}-1)\), and a Legendrian Hopf link \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(t_{2},r_{2})=(1,0)\). By Lemma 3.1, we can construct \(6-2t_{0}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with \(t_{0}\leq 2,t_{1}>1,t_{2}=1\). Their rotation numbers are as listed.
These \(6-2t_{0}\) strongly exceptional Legendrian \(A_{3}\) links are stabilizations of the Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}>1,t_{2}=1\).
The proof of Theorem 1.3 is completed.
#### 4.2.3. \(t_{1}\geq 2\) and \(t_{2}\geq 2\).
Proof of Theorem 1.4.: The upper bound of strongly exceptional Legendrian \(A_{3}\) links is given by Lemma 2.11. We will show that these upper bounds can be attained.
(1) Suppose \(t_{0}\geq 4\) and \(t_{1}=t_{2}=2\).
**Lemma 4.19**.: _If \(t_{0}\geq 4\) and \(t_{1}=t_{2}=2\), then there exist \(18\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}-1),\pm 3,\mp 1;\frac{1}{2}),(\pm(t_{0}+1),\pm 3,\pm 1;\frac{1}{2}),(\pm (t_{0}+3),\pm 3,\pm 3;-\frac{3}{2}),\]
\[(\pm(t_{0}-3),\pm 1,\mp 1;\frac{5}{2}),(\pm(t_{0}-1),\pm 1,\pm 1;\frac{5}{2}),(\pm(t_{0 }+1),\pm 1,\pm 3;\frac{1}{2}),\]
\[(\pm(t_{0}-5),\mp 1,\mp 1;\frac{5}{2}),(\pm(t_{0}-3),\mp 1,\pm 1;\frac{5}{2}),(\pm (t_{0}-1),\mp 1,\pm 3;\frac{1}{2}).\]
Proof.: If \(t_{0}\geq 4\) and \(t_{1}=t_{2}=2\), then there exist \(18\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 14. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (c3)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
(2) Suppose \(t_{0}=3\) and \(t_{1}=t_{2}=2\).
Figure 14. \(t_{0}\geq 4\), \(t_{1}=t_{2}=2\). For \(t_{0}\) odd, \(K_{0}\) and \(K_{i}\) are given the same orientation, for \(t_{0}\) even, the opposite one, where \(i=1,2\). If a component is a Legendrian push-off of some \(K_{i}\), then its contact surgery coefficient is \(+1\), otherwise its contact surgery coefficient is \(-1\).
**Lemma 4.20**.: _If \(t_{0}=3\) and \(t_{1}=t_{2}=2\), then there exist \(14\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 4,\pm 3,\pm 1;\frac{1}{2}),(\pm 4,\pm 1,\pm 3;\frac{1}{2}),(\pm 2,\pm 3, \mp 1;\frac{1}{2}),(\pm 2,\mp 1,\pm 3;\frac{1}{2}),\]
\[(\mp 2,\mp 1,\mp 1;\frac{5}{2}),(0,\mp 1,\pm 1;\frac{5}{2}),(\pm 6,\pm 3,\pm 3 ;-\frac{3}{2}).\]
Proof.: By [10, Theorem 1.2, (c2), (d)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(t_{1},r_{1})=(2,\pm 3)\), two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(t_{1},r_{1})=(2,\pm 1)\), and two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(0,\pm 1),(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}=t_{2}=2\). So by exchanging the roles of \(K_{1}\) and \(K_{2}\) there are \(12\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 4,\pm 3,\pm 1;\frac{1}{2}),\)\((\pm 4,\pm 1,\pm 3;\frac{1}{2}),\)\((\pm 2,\pm 3,\mp 1;\frac{1}{2}),\)\((\pm 2,\mp 1,\pm 3;\frac{1}{2}),\)\((\mp 2,\mp 1,\mp 1;\frac{5}{2})\) and \((0,\mp 1,\pm 1;\frac{5}{2}).\)
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 6,\pm 3,\pm 3).\) The decorations of their exteriors are \(\pm(+)(--)(--)\).
So there are exactly \(14\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}=2,t_{2}=2\). As a corollary, the \(14\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=3,s_{1}=-\frac{1}{2},s_{2}=-\frac{1}{2}\) listed in Lemma 2.11 are all appropriate tight.
(3) Suppose \(t_{0}=t_{1}=t_{2}=2\).
**Lemma 4.21**.: _If \(t_{0}=t_{1}=t_{2}=2\), then there exist \(10\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 3,\pm 3,\pm 1;\frac{1}{2}),(\pm 3,\pm 1,\pm 3;\frac{1}{2}),(\pm 1,\pm 3, \mp 1;\frac{1}{2}),(\pm 1,\mp 1,\pm 3;\frac{1}{2}),(\pm 5,\pm 3,\pm 3;-\frac{3}{2}).\]
Proof.: By [10, Theorem 1.2, (c2), (d)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(1,\pm 2),(t_{1},r_{1})=(2,\pm 3)\), and two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(0,\pm 1),(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=t_{1}=t_{2}=2\). So, by exchanging the roles of \(K_{1}\) and \(K_{2}\), there are \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm 3,\pm 1;\frac{1}{2})\), \((\pm 3,\pm 1,\pm 3;\frac{1}{2})\), \((\pm 1,\pm 3,\mp 1;\frac{1}{2})\) and \((\pm 1,\mp 1,\pm 3;\frac{1}{2}).\)
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 5,\pm 3,\pm 3).\) The decorations of their exteriors are \(\pm(+)(--)(--)\).
So there are exactly \(10\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}=2,t_{2}=2\). As a corollary, the \(10\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=2,s_{1}=-\frac{1}{2},s_{2}=-\frac{1}{2}\) listed in Lemma 2.11 are all appropriate tight.
(4) Suppose \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}=2\).
Figure 15. \(t_{0}\geq 4\), \(t_{1}\geq 3\), \(t_{2}=2\). If \(t_{0}+t_{1}\) is odd, then \(K_{0}\) and \(K_{1}\) bear the same orientation, if \(t_{0}+t_{1}\) is even, then the opposite one. If \(t_{0}\) is odd, then \(K_{0}\) and \(K_{2}\) bear the same orientation, if \(t_{0}\) is even, then the opposite one. If a component is a Legendrian push-off of some \(K_{i}\), then its contact surgery coefficient is \(+1\), otherwise its contact surgery coefficient is \(-1\).
**Lemma 4.22**.: _If \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}=2\), then there exist \(24\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\) invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}-3),\pm(3-t_{1}),\mp 1;\frac{5}{2}),(\pm(t_{0}-1),\pm(3-t_{1} ),\pm 1;\frac{5}{2}),\] \[(\pm(t_{0}+1),\pm(3-t_{1}),\pm 3;\frac{1}{2}),(\pm(t_{0}-5),\pm(1-t _{1}),\mp 1;\frac{5}{2}),\] \[(\pm(t_{0}-3),\pm(1-t_{1}),\pm 1;\frac{5}{2}),(\pm(t_{0}-1),\pm(1-t _{1}),\pm 3;\frac{1}{2}),\] \[(\pm(t_{0}+1),\pm(t_{1}-1),\pm 3;\frac{1}{2}),(\pm(t_{0}-1),\pm(t _{1}-1),\pm 1;\frac{5}{2}),\] \[(\pm(t_{0}-3),\pm(t_{1}-1),\mp 1;\frac{5}{2}),(\pm(t_{0}+3),\pm(t _{1}+1),\pm 3;-\frac{3}{2}),\] \[(\pm(t_{0}+1),\pm(t_{1}+1),\pm 1;\frac{1}{2}),(\pm(t_{0}-1),\pm (t_{1}+1),\mp 1;\frac{1}{2}).\]
Proof.: If \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}=2\), then there are exactly \(24\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 15. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (c3), (c4)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
(5) Suppose \(t_{0}=3,t_{1}\geq 3\) and \(t_{2}=2\).
**Lemma 4.23**.: _If \(t_{0}=3,t_{1}\geq 3,t_{2}=2\), then there exist \(20\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 4,\pm(t_{1}-1),\pm 3;\frac{1}{2}),(\pm 2,\pm(1-t_{1}),\pm 3; \frac{1}{2}),(\mp 2,\pm(1-t_{1}),\mp 1;\frac{5}{2}),(0,\pm(1-t_{1}),\pm 1; \frac{5}{2}),\] \[(\pm 4,\pm(t_{1}+1),\pm 1;\frac{1}{2}),(\pm 2,\pm(t_{1}+1),\mp 1; \frac{1}{2}),(0,\pm(3-t_{1}),\mp 1;\frac{5}{2}),(\pm 2,\pm(3-t_{1}),\pm 1; \frac{5}{2}),\] \[(\pm 6,\pm(t_{1}+1),\pm 3;-\frac{3}{2}),(\pm 4,\pm(3-t_{1}),\pm 3; \frac{1}{2}).\]
Proof.: By [10, Theorem 1.2, (d), (c2)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(0,\pm 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-1)\), two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(t_{2},r_{2})=(2,\pm 3)\), and two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can obtain \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 4,\pm(t_{1}-1),\pm 3;\frac{1}{2}),(\pm 2,\pm(1-t_{1}),\pm 3;\frac{1}{2}),(\mp 2,\pm(1-t_{1}),\mp 1;\frac{5}{2})\) and \((0,\pm(1-t_{1}),\pm 1;\frac{5}{2})\).
By [10, Theorem 1.2, (c3), (d)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\pm 3),t_{1}\geq 3,r_{1}=\pm(t_{1}+1)\), two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\mp 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-3)\), and two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(0,\pm 1),(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}\geq 3,t_{2}=2\). Then there are \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and \(d_{3}\)-invariants are \((\pm 4,\pm(t_{1}+1),\pm 1;\frac{1}{2}),(\pm 2,\pm(t_{1}+1),\mp 1;\frac{1}{2}),(0,\pm(3-t_{1} ),\mp 1;\frac{5}{2})\) and \((\pm 2,\pm(3-t_{1}),\pm 1;\frac{5}{2})\)
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 6,\pm(t_{1}+1),\pm 3).\) The decorations of their exteriors are \(\pm(+)((-)(-))(--).\)
There are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 4,\pm(3-t_{1}),\pm 3).\) The decorations of their exteriors are \(\pm(+)((-)(+))(--).\)
These exteriors are appropriate tight since they can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(3,-\frac{1}{2},-\frac{1}{2}\) and decorations \(\pm(+)(-+)(--)\) by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{1}},-\frac{1}{t_{1}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{1}.\)
So there are exactly \(20\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}\geq 3,t_{2}=2\). As a corollary, the \(20\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=3,s_{1}=-\frac{1}{t_{1}},s_{2}=-\frac{1}{2}\) listed in Lemma 2.11 are all appropriate tight.
(6) Suppose \(t_{0}=2\), \(t_{1}\geq 3\) and \(t_{2}=2\).
**Lemma 4.24**.: _If \(t_{0}=2\), \(t_{1}\geq 3\) and \(t_{2}=2\), then there exist \(16\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 3,\pm(t_{1}+1),\pm 1;\frac{1}{2}),(\pm 1,\pm(t_{1}+1),\mp 1;\frac{1}{2 }),(\mp 1,\pm(3-t_{1}),\mp 1;\frac{5}{2}),(\pm 1,\pm(3-t_{1}),\pm 1;\frac{5}{2}),\]
\[(\pm 1,\pm(1-t_{1}),\pm 3;\frac{1}{2}),(\pm 3,\pm(t_{1}-1),\pm 3;\frac{1}{2 }),(\pm 5,\pm(t_{1}+1),\pm 3;-\frac{3}{2}),(\pm 3,\pm(3-t_{1}),\pm 3;\frac{1}{2 }).\]
Proof.: By [10, Theorem 1.2, (c2), (c3), (d)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(1,\pm 2),t_{1}\geq 3,r_{1}=\pm(t_{1}+1)\), two (one if \(t_{1}=3\)) Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(1,0),t_{1}\geq 3,r_{1}=\pm(t_{1}-3)\), and two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(0,\pm 1),(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}\geq 3,t_{2}=2\). Then there are \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm(t_{1}+1),\pm 1;\frac{1}{2}),\)\((\pm 1,\pm(t_{1}+1),\mp 1;\frac{1}{2}),\)\((\mp 1,\pm(3-t_{1}),\mp 1;\frac{5}{2})\) and \((\pm 1,\pm(3-t_{1}),\pm 1;\frac{5}{2}).\)
By [10, Theorem 1.2, (d), (c2)], there are two Legendrian Hopf links \(K^{\prime}_{0}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\pm 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-1)\), and two Legendrian Hopf links \(K^{\prime\prime}_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t^{\prime\prime}_{0},r^{\prime\prime}_{0})=(1,\pm 2),(t_{2},r_{2})=(2,\pm 3)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}\geq 3,t_{2}=2\). Then there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 1,\pm(1-t_{1}),\pm 3;\frac{1}{2})\) and \((\pm 3,\pm(t_{1}-1),\pm 3;\frac{1}{2}).\)
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 5,\pm(t_{1}+1),\pm 3).\) The decorations of their exteriors are \(\pm(+)((-)(-))(--).\)
There are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 3,\pm(3-t_{1}),\pm 3).\) The decorations of their exteriors are \(\pm(+)((-)(+))(--).\) These exteriors are appropriate tight since they can be embedded into an appropriate tight
contact \(\Sigma\times S^{1}\) with boundary slopes \(2,-\frac{1}{2},-\frac{1}{2}\) and decorations \(\pm(+)(-+)(--)\) by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{1}},-\frac{1}{t_{1}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{1}\).
So there are exactly \(16\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}\geq 3,t_{2}=2\). As a corollary, the \(16\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=2,s_{1}=-\frac{1}{t_{1}},s_{2}=-\frac{1}{2}\) listed in Lemma 2.11 are all appropriate tight.
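The total in Lemma 4.24 is tallied from the four constructions above:
\[\underbrace{8}_{\text{first connected sums}}+\underbrace{4}_{\text{second connected sums}}+\underbrace{2}_{\xi_{-\frac{3}{2}}}+\underbrace{2}_{\xi_{\frac{1}{2}}}=16.\]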
(7) Suppose \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\).
**Lemma 4.25**.: _If \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\), then there exist \(32\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}+1),\pm(t_{1}-1),\pm(t_{2}+1);\frac{1}{2}),(\pm(t_{0}-1 ),\pm(1-t_{1}),\pm(t_{2}+1);\frac{1}{2}),\] \[(\pm(t_{0}+3),\pm(t_{1}+1),\pm(t_{2}+1);-\frac{3}{2}),(\pm(t_{0}+ 1),\pm(3-t_{1}),\pm(t_{2}+1);\frac{1}{2}),\] \[(\pm(t_{0}-1),\pm(t_{1}-1),\pm(3-t_{2});\frac{5}{2}),(\pm(t_{0}-3 ),\pm(1-t_{1}),\pm(3-t_{2});\frac{5}{2}),\] \[(\pm(t_{0}+1),\pm(t_{1}+1),\pm(3-t_{2});\frac{1}{2}),(\pm(t_{0}-1 ),\pm(3-t_{1}),\pm(3-t_{2});\frac{5}{2}),\] \[(\pm(t_{0}-1),\pm(t_{1}-1),\pm(t_{2}-1);\frac{5}{2}),(\pm(t_{0}-3 ),\pm(1-t_{1}),\pm(t_{2}-1);\frac{5}{2}),\] \[(\pm(t_{0}+1),\pm(t_{1}+1),\pm(t_{2}-1);\frac{1}{2}),(\pm(t_{0}-1 ),\pm(3-t_{1}),\pm(t_{2}-1);\frac{5}{2}),\] \[(\pm(t_{0}-3),\pm(t_{1}-1),\pm(1-t_{2});\frac{5}{2}),(\pm(t_{0}-5 ),\pm(1-t_{1}),\pm(1-t_{2});\frac{5}{2}),\] \[(\pm(t_{0}-1),\pm(t_{1}+1),\pm(1-t_{2});\frac{1}{2}),(\pm(t_{0}-3 ),\pm(3-t_{1}),\pm(1-t_{2});\frac{5}{2}).\]
Proof.: If \(t_{0}\geq 4\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\), then there are exactly \(32\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 16. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (c4)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
(8) Suppose \(t_{0}=3\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\).
**Lemma 4.26**.: _If \(t_{0}=3\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\), then there exist \(28\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 4,\pm(t_{1}+1),\pm(t_{2}-1);\frac{1}{2}),(\pm 4,\pm(t_{1}-1), \pm(t_{2}+1);\frac{1}{2}),\] \[(\pm 2,\pm(t_{1}+1),\pm(1-t_{2});\frac{1}{2}),(\pm 2,\pm(1-t_{1}), \pm(t_{2}+1);\frac{1}{2}),\] \[(\mp 2,\pm(1-t_{1}),\pm(1-t_{2});\frac{5}{2}),(0,\pm(t_{1}-1), \pm(1-t_{2});\frac{5}{2}),\] \[(0,\pm(3-t_{1}),\pm(1-t_{2});\frac{5}{2}),(0,\pm(1-t_{1}),\pm(3-t _{2});\frac{5}{2}),\]
\[(\pm 2,\pm(3-t_{1}),\pm(t_{2}-1);\frac{5}{2}),(\pm 2,\pm(t_{1}-1),\pm(3-t_{2}); \frac{5}{2}),\]
\[(\pm 6,\pm(t_{1}+1),\pm(t_{2}+1);-\frac{3}{2}),(\pm 2,\pm(3-t_{1}),\pm(3-t_{2}); \frac{5}{2}),\]
\[(\pm 4,\pm(t_{1}+1),\pm(3-t_{2});\frac{1}{2}),(\pm 4,\pm(3-t_{1}),\pm(t_{2}+1);\frac{1}{2}).\]
Figure 16. \(t_{0}\geq 4\), \(t_{1}\geq 3\), \(t_{2}\geq 3\). If \(t_{0}+t_{i}\) is odd, then \(K_{0}\) and \(K_{i}\) bear the same orientation, \(i=1,2\), if \(t_{0}+t_{i}\) is even, then the opposite one. If a component is a Legendrian push-off of some \(K_{i}\), then its contact surgery coefficient is \(+1\), otherwise its contact surgery coefficient is \(-1\).
Proof.: By [10, Theorem 1.2, (c3), (d)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\pm 3),t_{1}\geq 3,r_{1}=\pm(t_{1}+1)\), two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\pm 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-1)\), two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\mp 1),t_{1}\geq 3,r_{1}=\pm(t_{1}-3)\), and two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(0,\pm 1),t_{2}\geq 3,r_{2}=\pm(t_{2}-1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=3,t_{1}\geq 3,t_{2}\geq 3\). Then, after exchanging the roles of \(K_{1}\) and \(K_{2}\), there are \(20\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are
\[(\pm 4,\pm(t_{1}+1),\pm(t_{2}-1);\frac{1}{2}),(\pm 4,\pm(t_{1}-1), \pm(t_{2}+1);\frac{1}{2}),\] \[(\pm 2,\pm(t_{1}+1),\pm(1-t_{2});\frac{1}{2}),(\pm 2,\pm(1-t_{1} ),\pm(t_{2}+1);\frac{1}{2}),\] \[(\mp 2,\pm(1-t_{1}),\pm(1-t_{2});\frac{5}{2}),(0,\pm(t_{1}-1), \pm(1-t_{2});\frac{5}{2}),\] \[(0,\pm(3-t_{1}),\pm(1-t_{2});\frac{5}{2}),(0,\pm(1-t_{1}),\pm(3 -t_{2});\frac{5}{2}),\] \[(\pm 2,\pm(3-t_{1}),\pm(t_{2}-1);\frac{5}{2}),(\pm 2,\pm(t_{1}-1 ),\pm(3-t_{2});\frac{5}{2}).\]
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 6,\pm(t_{1}+1),\pm(t_{2}+1))\). The decorations of their exteriors are \(\pm(+)((-)(-))((-)(-))\).
There are \(6\) more strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 2,\pm(3-t_{1}),\pm(3-t_{2});\frac{5}{2})\), \((\pm 4,\pm(t_{1}+1),\pm(3-t_{2});\frac{1}{2})\) and \((\pm 4,\pm(3-t_{1}),\pm(t_{2}+1);\frac{1}{2})\). The decorations of their exteriors are
\[\pm(+)((-)(+))((-)(+)),\pm(+)((-)(-))((-)(+))\text{ and }\pm(+)((-)(+))((-) (-)),\]
respectively. These exteriors are tight since they can be embedded into a tight contact \(\Sigma\times S^{1}\) with boundary slopes \(3,-\frac{1}{t_{1}},-\frac{1}{2}\) and decorations
\[\pm(+)((-)(+))(-+),\pm(+)((-)(-))(-+)\text{ and }\pm(+)((-)(+))(--)\]
by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{2}},-\frac{1}{t_{2}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{2}\), respectively.
(9) Suppose \(t_{0}=2\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\).
**Lemma 4.27**.: _If \(t_{0}=2\), \(t_{1}\geq 3\) and \(t_{2}\geq 3\), then there exist \(24\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm 3,\pm(t_{1}+1),\pm(t_{2}-1);\frac{1}{2}),(\pm 3,\pm(t_{1}-1),\pm(t_{2}+1) ;\frac{1}{2}),\] \[(\pm 1,\pm(t_{1}+1),\pm(1-t_{2});\frac{1}{2}),(\pm 1,\pm(1-t_{1}), \pm(t_{2}+1);\frac{1}{2}),\]
\[(\mp 1,\pm(3-t_{1}),\pm(1-t_{2});\frac{5}{2}),(\mp 1,\pm(1-t_{1}),\pm(3-t_{2}) ;\frac{5}{2}),\] \[(\pm 1,\pm(3-t_{1}),\pm(t_{2}-1);\frac{5}{2}),(\pm 1,\pm(t_{1}-1), \pm(3-t_{2});\frac{5}{2}),\] \[(\pm 5,\pm(t_{1}+1),\pm(t_{2}+1);-\frac{3}{2}),(\pm 1,\pm(3-t_{1}), \pm(3-t_{2});\frac{5}{2}),\] \[(\pm 3,\pm(t_{1}+1),\pm(3-t_{2});\frac{1}{2}),(\pm 3,\pm(3-t_{1}),\pm(t_{2}+1);\frac{1}{2}).\]
Proof.: By [10, Theorem 1.2, (c2), (c3), (d)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(1,\pm 2),t_{1}\geq 3,r_{1}=\pm(t_{1}+1)\), two (one if \(t_{1}=3\)) Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(1,0),t_{1}\geq 3,r_{1}=\pm(t_{1}-3)\), and two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(0,\pm 1),t_{2}\geq 3,r_{2}=\pm(t_{2}-1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}\geq 3,t_{2}\geq 3\). So, after exchanging the roles of \(K_{1}\) and \(K_{2}\), there are \(16\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are
\[(\pm 3,\pm(t_{1}+1),\pm(t_{2}-1);\frac{1}{2}),(\pm 3,\pm(t_{1}-1),\pm(t_{2}+1 );\frac{1}{2}),\] \[(\pm 1,\pm(t_{1}+1),\pm(1-t_{2});\frac{1}{2}),(\pm 1,\pm(1-t_{1} ),\pm(t_{2}+1);\frac{1}{2}),\] \[(\mp 1,\pm(3-t_{1}),\pm(1-t_{2});\frac{5}{2}),(\mp 1,\pm(1-t_{1} ),\pm(3-t_{2});\frac{5}{2}),\] \[(\pm 1,\pm(3-t_{1}),\pm(t_{2}-1);\frac{5}{2}),(\pm 1,\pm(t_{1}-1 ),\pm(3-t_{2});\frac{5}{2}).\]
By Lemma 4.7 and Lemma 3.4, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 5,\pm(t_{1}+1),\pm(t_{2}+1)).\) The decorations of their exteriors are \(\pm(+)((-)(-))(--)\).
There are \(6\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 1,\pm(3-t_{1}),\pm(3-t_{2});\frac{5}{2}),\)\((\pm 3,\pm(t_{1}+1),\pm(3-t_{2});\frac{1}{2})\) and \((\pm 3,\pm(3-t_{1}),\pm(t_{2}+1);\frac{1}{2}).\) The decorations of their exteriors are
\[\pm(+)((-)(+))((-)(+)),\pm(+)((-)(-))((-)(+))\ \ \mbox{and}\ \pm(+)((-)(+))((-) (-)),\]
respectively. These exteriors are appropriate tight since they can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(2,-\frac{1}{t_{1}},-\frac{1}{2}\) and decorations
\[\pm(+)((-)(+))(-+),\pm(+)((-)(-))(-+)\ \ \mbox{and}\ \pm(+)((-)(+))(--)\]
by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{2}},-\frac{1}{t_{2}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{2}\), respectively.
(10) Suppose \(t_{0}\leq 1\).
**Lemma 4.28**.: _If \(t_{0}\leq 1\), \(t_{1}\geq 2\) and \(t_{2}\geq 2\), then there exist \(8-4t_{0}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers are_
\[r_{0}\in\pm \{t_{0}+1,t_{0}+3,\cdots,-t_{0}+1,-t_{0}+3\},r_{1}=\pm(t_{1}-1),r_{2 }=\pm(t_{2}-1);\] \[r_{0}\in\pm \{t_{0}-1,t_{0}+1,\cdots,-t_{0}-1,-t_{0}+1\},r_{1}=\pm(1-t_{1}),r_ {2}=\pm(t_{2}-1).\]
Proof.: By [10, Theorem 1.2, (d)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(0,\pm 1),t_{1}\geq 2,r_{1}=\pm(t_{1}-1)\). By [10, Theorem 1.2. (b1)], there are \(2(1-t_{0}^{\prime\prime})\) Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \(t_{0}^{\prime\prime}\leq 0\), \(r_{0}^{\prime\prime}\in\pm\{t_{0}^{\prime\prime}+1,t_{0}^{\prime\prime}+3, \cdots,-t_{0}^{\prime\prime}-1,-t_{0}^{\prime\prime}+1\}\), \(t_{2}\geq 2\), \(r_{2}=\pm(t_{2}-1)\). Using Lemma 3.1, we construct \(8-4t_{0}\) Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with \(t_{0}\leq 1,t_{1}\geq 2,t_{2}\geq 2\). Their rotation numbers are as listed.
These \(8-4t_{0}\) strongly exceptional Legendrian \(A_{3}\) links are stabilizations of the Legendrian \(A_{3}\) links with \(t_{0}=1,t_{1}\geq 2,t_{2}\geq 2\).
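The count can be checked directly: the first family contributes a factor \(2\), and since the connected sum adds the contact framings with a \(+1\) correction (the bookkeeping used throughout this section), the second family requires \(t_{0}^{\prime\prime}=t_{0}-1\leq 0\) and contributes \(2(1-t_{0}^{\prime\prime})\) links, so that
\[2\cdot 2\bigl(1-(t_{0}-1)\bigr)=4(2-t_{0})=8-4t_{0}.\]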
The proof of Theorem 1.4 is completed.
### \(t_{1}<0\) and \(t_{2}>0\)
**Lemma 4.29**.: _For any \(t_{0}\in\mathbb{Z}\), there are \(6\) exceptional Legendrian \(A_{3}\) links whose exteriors have \(0\)-twisting vertical Legendrian circles, and the signs of basic slices in \(L_{0}^{\prime},L_{1}^{\prime},L_{2}^{\prime}\) are \(\pm(+--),\pm(++-)\) and \(\pm(+-+)\), respectively. Their rotation numbers are_
\[r_{0}=\pm(t_{0}+1),r_{1}=\pm(1-t_{1}),r_{2}=\pm(t_{2}+1);r_{0}=\pm(t_{0}+1),r _{1}=\pm(t_{1}+1),r_{2}=\pm(t_{2}+1);\]
\[r_{0}=\pm(t_{0}-3),r_{1}=\pm(1-t_{1}),r_{2}=\pm(1-t_{2}).\]
_The corresponding \(d_{3}\)-invariants are independent of \(t_{0}\) if \(t_{1}\) and \(t_{2}\) are fixed._
Proof.: The first statement follows from Lemma 2.15 and Lemma 3.3. The calculation of rotation numbers is analogous to that in the proof of Lemma 4.7.
#### 4.3.1. \(t_{1}<0\) and \(t_{2}=1\)
The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}\) and \(s_{2}=-1\).
Proof of Theorem 1.5.: The upper bound of strongly exceptional Legendrian \(A_{3}\) links is given by Lemma 2.12. We will show that these upper bounds can be attained.
(1) Suppose \(t_{0}\geq 4\).
**Lemma 4.30**.: _If \(t_{0}\geq 4\), \(t_{1}<0\) and \(t_{2}=1\), then there exist \(2-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) with rotation numbers_
\[r_{0}=\pm(t_{0}+1),r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=\pm 2;\]
_and \(2-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with rotation numbers_
\[r_{0}=\pm(t_{0}-3),r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=0.\]
Proof.: There exist \(4-4t_{1}\) strongly exceptional Legendrian representatives shown in Figure 17. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (b1), (c3)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are
\[r_{0}=\pm(t_{0}+1),r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=\pm 2 ;d_{3}=-\frac{1}{2},\]
\[r_{0}=\pm(t_{0}-3),r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=0;d_{3} =\frac{3}{2}.\]
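Each of the two families in Figure 17 consists of \(1-t_{1}\) choices of \(r_{1}\) together with an overall sign, so Figure 17 accounts for
\[2(1-t_{1})+2(1-t_{1})=4-4t_{1}\]
representatives, matching the two counts of \(2-2t_{1}\) in the statement of Lemma 4.30.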
(2) Suppose \(t_{0}=3\).
**Lemma 4.31**.: _If \(t_{0}=3\), \(t_{1}<0\) and \(t_{2}=1\), then there exist \(2-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) with rotation numbers_
\[r_{0}=\pm 4,r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=\pm 2;\]
_and \(2-t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with rotation numbers_
\[r_{0}=0,r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=0.\]
Proof.: By [10, Theorem 1.2], there are two Legendrian Hopf links \(K_{0}\cup K_{2}\) with \((t_{0},r_{0})=(3,\pm 4)\) and \((t_{2},r_{2})=(1,\pm 2)\) in \((S^{3},\xi_{-\frac{1}{2}})\), and a Legendrian Hopf link with \((t_{0},r_{0})=(3,0)\) and \((t_{2},r_{2})=(1,0)\) in \((S^{3},\xi_{\frac{3}{2}})\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\); then there are \(-3t_{1}\) strongly exceptional Legendrian \(A_{3}\) links. Their rotation numbers and corresponding \(d_{3}\)-invariants are
\[r_{0}=\pm 4,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm 2;d_{3}=- \frac{1}{2},\]
Figure 17. \(t_{0}\geq 4\), \(t_{1}\leq 0\), \(t_{2}=1\). \(k_{1}+l_{1}=-t_{1}\). If \(t_{0}\) is even, then \(K_{0}\) and \(K_{i}\), \(i=1,2\), bear the same orientations, if \(t_{0}\) is odd, then the opposite orientation.
\[r_{0}=0,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=0;d_{3}=\frac{3}{2}.\]
By Lemma 4.29 and Lemma 3.4, there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 4,\pm(1-t_{1}),\pm 2;-\frac{1}{2})\) and \((0,\pm(t_{1}-1),0;\frac{3}{2})\).
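Combining the two constructions, the totals are
\[(-2t_{1})+2=2-2t_{1}\ \text{links in }(S^{3},\xi_{-\frac{1}{2}}),\qquad(-t_{1})+2=2-t_{1}\ \text{links in }(S^{3},\xi_{\frac{3}{2}}),\]
as claimed in Lemma 4.31.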
(3) Suppose \(t_{0}=2\).
**Lemma 4.32**.: _If \(t_{0}=2\), \(t_{1}<0\) and \(t_{2}=1\), then there exist \(2-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) with rotation numbers_
\[r_{0}=\pm 3,r_{1}\in\mp\{t_{1}-1,t_{1}+1,\cdots,-t_{1}-1\},r_{2}=\pm 2;\]
_and \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with rotation numbers_
\[r_{0}=\pm 1,r_{1}=\pm(t_{1}-1),r_{2}=0.\]
Proof.: If \(t_{0}=2\), then, by [10, Theorem 1.2. (c2)], there exist two Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0},r_{0})=(2,\pm 3)\) and \((t_{2},r_{2})=(1,\pm 2)\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\), then by Lemma 3.2 we can realize \(-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) whose rotation numbers are
\[r_{0}=\pm 3,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm 2.\]
By Lemma 4.29 and Lemma 3.4, there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm(1-t_{1}),\pm 2;-\frac{1}{2})\) and \((\pm 1,\pm(t_{1}-1),0;\frac{3}{2})\).
(4) Suppose \(t_{0}\leq 1\).
**Lemma 4.33**.: _If \(t_{0}\leq 1\), \(t_{1}<0\) and \(t_{2}=1\), then there exist \(t_{0}t_{1}-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) with rotation numbers_
\[r_{0}\in\{t_{0}-1,t_{0}+1,\cdots,1-t_{0}\},r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t _{1}-1\},r_{2}=0.\]
Proof.: By [10, Theorem 1.2. (b2), (e)], there are \(2-t_{0}\) strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{1/2})\) with
\[r_{0}\in\{t_{0}-1,t_{0}+1,\cdots,1-t_{0}\},t_{2}=1,r_{2}=0.\]
Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\). Then by Lemma 3.2 there are \((2-t_{0})(-t_{1})=t_{0}t_{1}-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{1/2})\) with rotation numbers are as listed.
These \(t_{0}t_{1}-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links are stabilizations of the Legendrian \(A_{3}\) links with \(t_{0}=1,t_{1}=-1,t_{2}=1\).
The proof of Theorem 1.5 is completed.
#### 4.3.2. \(t_{1}<0\) and \(t_{2}\geq 2\)
The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=-\frac{1}{t_{1}}\) and \(s_{2}=-\frac{1}{t_{2}}\).
Proof of Theorem 1.6.: The upper bound of strongly exceptional Legendrian \(A_{3}\) links is given by Lemma 2.13. We will show that the upper bounds can be attained except in the case \(t_{0}=1\), \(t_{1}<0\) and \(t_{2}=3\).
(1) Suppose \(t_{0}\geq 3\) and \(t_{2}=2\).
**Lemma 4.34**.: _If \(t_{0}\geq 3\), \(t_{1}<0\) and \(t_{2}=2\), then there exist \(6-6t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are_
\[r_{0}=\pm(t_{0}+1),r_{1}\in\pm\{t_{1}+1,\cdots,-t_{1}-1,-t_{1}+1\},r_{2}=\pm 3 ;d_{3}=-\frac{1}{2},\]
\[r_{0}=\pm(t_{0}-1),r_{1}\in\pm\{t_{1}+1,\cdots,-t_{1}-1,-t_{1}+1\},r_{2}=\pm 1 ;d_{3}=\frac{3}{2},\]
\[r_{0}=\pm(t_{0}-3),r_{1}\in\pm\{t_{1}+1,\cdots,-t_{1}-1,-t_{1}+1\},r_{2}=\mp 1 ;d_{3}=\frac{3}{2}.\]
Proof.: There are \(6-6t_{1}\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 18. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (b1), (c3)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
(2) Suppose \(t_{0}=t_{2}=2\).
Figure 18. \(t_{0}\geq 3\), \(t_{1}\leq 0\), \(t_{2}=2\). \(k_{1}+l_{1}=-t_{1}\). If \(t_{0}\) is even, then \(K_{0}\) and \(K_{2}\) bear the same orientation, if \(t_{0}\) is odd, then the opposite one. If \(t_{0}\) is odd, then \(K_{0}\) and \(K_{1}\) bear the same orientation, if \(t_{0}\) is even, then the opposite one.
**Lemma 4.35**.: _If \(t_{0}=t_{2}=2\) and \(t_{1}<0\), then there exist \(6-4t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are_
\[r_{0}=\pm 3,r_{1}\in\pm\{t_{1}+1,\cdots,-t_{1}-1,-t_{1}+1\},r_{2}=\pm 3;d_{3}=- \frac{1}{2},\]
\[r_{0}=\pm 1,r_{1}=\pm(1-t_{1}),r_{2}=\pm 1;d_{3}=\frac{3}{2},\]
\[r_{0}=\mp 1,r_{1}\in\pm\{t_{1}+1,\cdots,-t_{1}-1,-t_{1}+1\},r_{2}=\mp 1;d_{3}= \frac{3}{2}.\]
Proof.: If \(t_{0}=t_{2}=2\), then by [10, Theorem 1.2, (c2)], there are two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0},r_{0})=(2,\pm 3)\) and \((t_{2},r_{2})=(2,\pm 3)\), and two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0},r_{0})=(2,\pm 1)\) and \((t_{2},r_{2})=(2,\pm 1)\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\), then by Lemma 3.2 there are \(-4t_{1}\) strongly exceptional Legendrian \(A_{3}\) links. Their rotation numbers and corresponding \(d_{3}\)-invariants are
\[r_{0}=\pm 3,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm 3;d_{3}=- \frac{1}{2},\]
\[r_{0}=\pm 1,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm 1;d_{3}= \frac{3}{2}.\]
By Lemma 4.29 and Lemma 3.4, there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm(1-t_{1}),\pm 3;-\frac{1}{2})\) and \((\mp 1,\pm(1-t_{1}),\mp 1;\frac{3}{2})\). The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(--)\text{ and }\pm(+)(\underbrace{- \cdots-}_{-t_{1}})(++),\]
respectively.
By [10, Theorem 1.2, (b2), (d)], there are two Legendrian Hopf links in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(1,0),t_{1}<0,r_{1}=\mp(t_{1}-1)\), and there are two Legendrian Hopf links in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\pm 1),(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can construct \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with \(t_{0}=t_{2}=2,t_{1}<0\). Their rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm 1,\pm(1-t_{1}),\pm 1)\). The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(+-).\]
So there are \(6-4t_{1}\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=2,t_{1}<0,t_{2}=2\). As a corollary, the \(6-4t_{1}\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=2,s_{1}=-\frac{1}{t_{1}},s_{2}=-\frac{1}{2}\) listed in Lemma 2.13 are all appropriate tight.
(3) Suppose \(t_{0}=1\) and \(t_{2}=2\).
**Lemma 4.36**.: _If \(t_{0}=1\), \(t_{1}<0\) and \(t_{2}=2\), there exist \(6-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are_
\[r_{0}=\pm 2,r_{1}\in\pm\{t_{1}+1,\cdots,-t_{1}-1,-t_{1}+1\},r_{2}=\pm 3;d_{3}=- \frac{1}{2},\]
\[r_{0}=\mp 2,r_{1}=\pm(1-t_{1}),r_{2}=\mp 1;d_{3}=\frac{3}{2},\]
\[r_{0}=0,r_{1}=\pm(1-t_{1}),r_{2}=\pm 1;d_{3}=\frac{3}{2}.\]
Proof.: If \(t_{0}=1\) and \(t_{2}=2\), then, by [10, Theorem 1.2], there are two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) with \((t_{0},r_{0})=(1,\pm 2)\) and \((t_{2},r_{2})=(2,\pm 3)\) in \((S^{3},\xi_{-\frac{1}{2}})\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\), then by Lemma 3.2 we can realize \(-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) whose rotation numbers are
\[r_{0}=\pm 2,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm 3.\]
By Lemma 4.29 and Lemma 3.4, there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 2,\pm(1-t_{1}),\pm 3;-\frac{1}{2})\) and \((\mp 2,\pm(1-t_{1}),\mp 1;\frac{3}{2})\). The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(--)\text{ and }\pm(+)(\underbrace{- \cdots-}_{-t_{1}})(++),\]
respectively.
By [10, Theorem 1.2, (d)], there are two Legendrian Hopf links in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\mp 1),t_{1}<0,r_{1}=\mp(t_{1}-1)\), there are two Legendrian Hopf links in \((S^{3},\xi_{\frac{1}{2}})\) with \((t^{\prime}_{0},r^{\prime}_{0})=(0,\pm 1),(t_{2},r_{2})=(2,\pm 1)\). By Lemma 3.1, we can construct \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) with \(t_{0}=1,t_{1}<0,t_{2}=2\). Their rotation numbers \((r_{0},r_{1},r_{2})\) are \((0,\pm(1-t_{1}),\pm 1)\). The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(+-).\]
So there are \(6-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=1,t_{1}<0,t_{2}=2\). As a corollary, the \(6-2t_{1}\) contact structures on \(\Sigma\times S^{1}\) with boundary slopes \(s_{0}=1,s_{1}=-\frac{1}{t_{1}},s_{2}=-\frac{1}{2}\) listed in Lemma 2.13 are all appropriate tight.
(4) Suppose \(t_{0}\geq 3\) and \(t_{2}\geq 3\).
**Lemma 4.37**.: _If \(t_{0}\geq 3\), \(t_{1}<0\) and \(t_{2}\geq 3\), then there are \(8-8t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are_
\[r_{0}=\pm(t_{0}+1),r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(t _{2}+1);d_{3}=-\frac{1}{2},\]
\[r_{0}=\pm(t_{0}-1),r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(t _{2}-1);d_{3}=\frac{3}{2},\]
\[r_{0}=\pm(t_{0}-3),r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(1-t_{2}) ;d_{3}=\frac{3}{2},\]
\[r_{0}=\pm(t_{0}-1),r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(3-t_{ 2});d_{3}=\frac{3}{2}.\]
Proof.: If \(t_{0}\geq 3\) and \(t_{2}\geq 3\), then there are \(8-8t_{1}\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 19. Using the trick of Lemma 4.2 and the proof of [10, Theorem 1.2, (b1), (c4)], we can show that \(K_{0}\cup K_{1}\cup K_{2}\) is a topological \(A_{3}\) link. Their rotation numbers are as listed.
(5) Suppose \(t_{0}=2\) and \(t_{2}\geq 3\).
**Lemma 4.38**.: _If \(t_{0}=2\), \(t_{1}<0\) and \(t_{2}\geq 3\), then there exist \(8-6t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are_
\[r_{0}=\pm 3,r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(t_{2}+1);d_{3 }=-\frac{1}{2},\]
\[r_{0}=\pm 1,r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(t_{2}-1);d_{3 }=\frac{3}{2},\]
\[r_{0}=\mp 1,r_{1}=\pm(1-t_{1}),r_{2}=\pm(1-t_{2});d_{3}=\frac{3}{2},\]
\[r_{0}=\pm 1,r_{1}\in\pm\{t_{1}+1,t_{1}+3,\cdots,-t_{1}+1\},r_{2}=\pm(3-t_{2});d_{3}= \frac{3}{2}.\]
Proof.: If \(t_{0}=2\) and \(t_{2}\geq 3\), then, by [10, Theorem 1.2, (c3)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\pm 3),t_{2}\geq 3,r_{2}=\pm(t_{2}+1)\), two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\pm 1),t_{2}\geq 3,r_{2}=\pm(t_{2}-1)\), and two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0}^{\prime},r_{0}^{\prime})=(2,\mp 1),t_{2}\geq 3,r_{2}=\pm(t_{2}-3)\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\); then by Lemma 3.2, we can realize \(-6t_{1}\) strongly exceptional Legendrian representatives. Of these, \(-2t_{1}\) belong to \((S^{3},\xi_{-\frac{1}{2}})\) with rotation numbers
\[r_{0}=\pm 3,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}+1).\]
The remaining \(-4t_{1}\) belong to \((S^{3},\xi_{\frac{3}{2}})\) with rotation numbers
\[r_{0}=\pm 1,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}-1);\]
\[r_{0}=\mp 1,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}-3).\]
By Lemma 4.29 and Lemma 3.4, there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 3,\pm(1-t_{1}),\pm(t_{2}+1);-\frac{1}{2})\) and \((\mp 1,\pm(1-t_{1}),\pm(1-t_{2});\frac{3}{2})\). The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})((-)(-))\text{ and }\pm(+)(\underbrace{- \cdots-}_{-t_{1}})((+)(+)),\]
respectively.
There are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 1,\pm(1-t_{1}),\pm(t_{2}-1);\frac{3}{2})\) and \((\pm 1,\pm(1-t_{1}),\pm(3-t_{2});\frac{3}{2}).\) The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})((-)(+))\text{ and }\pm(+)(\underbrace{- \cdots-}_{-t_{1}})((+)(-)),\]
respectively. These exteriors are appropriate tight since they can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(2,-\frac{1}{t_{1}},-\frac{1}{2}\) and decorations \(\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(-+)\) by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{2}},-\frac{1}{t_{2}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{2}\), respectively.
(6) Suppose \(t_{0}=1\) and \(t_{2}\geq 3\).
**Lemma 4.39**.: _If \(t_{0}=1\), \(t_{1}<0\) and \(t_{2}\geq 4\) (resp. \(t_{2}=3\)), then there exist \(8-4t_{1}\) (resp. \(8-3t_{1}\)) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are_
\[r_{0}=\pm 2,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\}\cup\{\pm(1-t_{1})\},r_{2} =\pm(t_{2}+1);d_{3}=-\frac{1}{2},\]
\[r_{0}=0,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\}\cup\{\pm(t_{1}-1)\},r_{2}=\pm(t _{2}-3);d_{3}=\frac{3}{2},\]
\[r_{0}=\mp 2,r_{1}=\pm(1-t_{1}),r_{2}=\pm(1-t_{2});d_{3}=\frac{3}{2},\]
\[r_{0}=0,r_{1}=\pm(1-t_{1}),r_{2}=\pm(t_{2}-1);d_{3}=\frac{3}{2}.\]
Proof.: If \(t_{0}=1\) and \(t_{2}=3\), then, by [10, Theorem 1.2, (c2)], there are two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0},r_{0})=(1,\pm 2)\) and \((t_{2},r_{2})=(3,\pm 4)\), and one strongly exceptional Legendrian Hopf link \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0},r_{0})=(1,0)\) and \((t_{2},r_{2})=(3,0)\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\), then by Lemma 3.2 we can realize \(-3t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are
\[r_{0}=\pm 2,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm 4;d_{3}=- \frac{1}{2},\]
\[r_{0}=0,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=0;d_{3}=\frac{3}{2}.\]
If \(t_{0}=1\) and \(t_{2}\geq 4\), then, by [10, Theorem 1.2, (c3)], there are two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{-\frac{1}{2}})\) with \((t_{0},r_{0})=(1,\pm 2)\), \(t_{2}\geq 4\), and \(r_{2}=\pm(t_{2}+1)\), and two strongly exceptional Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{\frac{3}{2}})\) with \((t_{0},r_{0})=(1,0)\), \(t_{2}\geq 4\) and \(r_{2}=\pm(t_{2}-3)\). Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\), then by Lemma 3.2 we can realize \(-4t_{1}\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants are
\[r_{0}=\pm 2,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}+1);d_{3}= -\frac{1}{2},\]
\[r_{0}=0,r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}-3);d_{3}= \frac{3}{2}.\]
For any \(t_{2}\geq 3\), by Lemma 4.29 and Lemma 3.4, there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((\pm 2,\pm(1-t_{1}),\pm(t_{2}+1);-\frac{1}{2})\) and \((\mp 2,\pm(1-t_{1}),\pm(1-t_{2});\frac{3}{2}).\) The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})((-)(-))\text{ and }\pm(+)(\underbrace{- \cdots-}_{-t_{1}})((+)(+)),\]
respectively.
For any \(t_{2}\geq 3\), there are \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are \((0,\pm(1-t_{1}),\pm(t_{2}-1);\frac{3}{2})\) and \((0,\pm(t_{1}-1),\pm(t_{2}-3);\frac{3}{2}).\) The decorations of their exteriors are
\[\pm(+)(\underbrace{-\cdots-}_{-t_{1}})((+)(-))\text{ and }\pm(+)(\underbrace{- \cdots-}_{-t_{1}})((-)(+)),\]
respectively. These exteriors are appropriate tight since they can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(1,-\frac{1}{t_{1}},-\frac{1}{2}\) and decorations \(\pm(+)(\underbrace{-\cdots-}_{-t_{1}})(+-)\)
by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{2}},-\frac{1}{t_{2}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{2}\), respectively.
So, there are exactly \(8-4t_{1}\) (resp. exactly \(8-3t_{1}\)) strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}=1,t_{1}<0,t_{2}\geq 4\) (resp. \(t_{2}=3\)). If \(t_{0}=1\), \(t_{1}<0\) and \(t_{2}=3\), then the decorations
\[(+)(\underbrace{+\cdots+}_{l}\underbrace{-\cdots-}_{k})((-)(+))\text{ and }(-)( \underbrace{-\cdots-}_{k+1}\underbrace{+\cdots+}_{l-1})((+)(-))\]
correspond to the same Legendrian \(A_{3}\) links with rotation numbers \(r_{0}=r_{2}=0,r_{1}=l-k-1\), where \(k\geq 0,l\geq 1,k+l=-t_{1}\).
(7) Suppose \(t_{0}\leq 0\).
**Lemma 4.40**.: _If \(t_{0}\leq 0\), \(t_{1}<0\) and \(t_{2}>1\), then there exist \(2t_{0}t_{1}-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are_
\[r_{0}\in\pm\{t_{0}+1,t_{0}+3,\cdots,-t_{0}-1,-t_{0}+1\},\] \[r_{1}\in\{t_{1}+1,t_{1}+3,\cdots,-t_{1}-1\},r_{2}=\pm(t_{2}-1).\]
Proof.: By [10, Theorem 1.2. (b1)], there are \(2(1-t_{0})\) Legendrian Hopf links \(K_{0}\cup K_{2}\) in \((S^{3},\xi_{1/2})\) whose rotation numbers are
\[r_{0}\in\pm\{t_{0}+1,t_{0}+3,\cdots,-t_{0}-1,-t_{0}+1\},t_{2}\geq 2,r_{2}=\pm(t_{ 2}-1).\]
Let \(K_{1}\) be a local Legendrian meridian of \(K_{0}\). Then by Lemma 3.2 there are \(2(1-t_{0})(-t_{1})=2t_{0}t_{1}-2t_{1}\) isotopy classes. Their rotation numbers are as listed.
These \(2t_{0}t_{1}-2t_{1}\) strongly exceptional Legendrian \(A_{3}\) links are stabilizations of the Legendrian \(A_{3}\) links with \(t_{0}=0,t_{1}=-1,t_{2}>1\).
The proof of Theorem 1.6 is completed.
### \(t_{1}=0\)
The boundary slopes of \(\Sigma\times S^{1}\) are \(s_{0}=t_{0}\), \(s_{1}=\infty\) and \(s_{2}=-\frac{1}{t_{2}}\). The appropriate tight contact structures on \(\Sigma\times S^{1}\) can be decomposed as \(L^{\prime}_{0}\cup L^{\prime}_{2}\cup\Sigma^{\prime}\times S^{1}\).
**Lemma 4.41**.: _For any \(t_{0}\in\mathbb{Z}\), there are \(4\) exceptional Legendrian \(A_{3}\) links whose signs of basic slices in \(L^{\prime}_{0},L^{\prime}_{2}\) are \(\pm(+-)\) and \(\pm(++)\), respectively. Their rotation numbers are_
\[r_{0}=\pm(t_{0}+1),r_{1}=\pm 1,r_{2}=\pm(t_{2}+1);r_{0}=\pm(t_{0}-3),r_{1}= \pm 1,r_{2}=\pm(1-t_{2}).\]
_The corresponding \(d_{3}\)-invariants are independent of \(t_{0}\) if \(t_{2}\) is fixed._
Proof.: The first statement follows from Lemma 2.16 and Lemma 3.3. Suppose the signs of the basic slices in \(L^{\prime}_{0}\) and \(L^{\prime}_{2}\) are \(+\) and \(-\), respectively. Then
\[r_{0}=-(\frac{1}{t_{2}}\ominus\frac{0}{1})\bullet\frac{0}{1}-( \frac{0}{1}\ominus\frac{-1}{0})\bullet\frac{0}{1}+(\frac{1}{0}\ominus\frac{t_{ 0}}{1})\bullet\frac{0}{1}=-(t_{0}+1),\] \[r_{1}=(\frac{-t_{0}}{1}\ominus\frac{-1}{0})\bullet\frac{1}{0}=-1,\] \[r_{2}=(\frac{-t_{0}}{1}\ominus\frac{-1}{0})\bullet\frac{1}{0}-( \frac{1}{0}\ominus\frac{0}{1})\bullet\frac{0}{1}-(\frac{0}{1}\ominus\frac{-1}{ t_{2}})\bullet\frac{1}{0}=-(t_{2}+1).\]
The computation of other cases are similar.
**Lemma 4.42**.: _Suppose \(t_{0}\leq 2\), \(t_{1}=0\) and \(t_{2}\geq 2\). Then there are \(4\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers are_
\[r_{0}=\pm(t_{0}-1),r_{1}=\pm 1,r_{2}=\pm(t_{2}-1);r_{0}=\pm(t_{0}-3),r_{1}=\pm 1,r_{2}=\pm(1-t_{2}).\]
Proof.: By [10, Theorem 1.2, (d)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \(t_{0}^{\prime}\leq 1,r_{0}^{\prime}=\pm(t_{0}^{\prime}-1),(t_{1},r_{1})=(0, \pm 1)\), and two Legendrian Hopf links \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(0,\pm 1),t_{2}\geq 2,r_{2}=\pm(t_{2}-1)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}\leq 2,t_{1}=0,t_{2}\geq 2\). So there are \(4\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers are as listed.
Proof of Theorem 1.7.: The upper bound of strongly exceptional Legendrian \(A_{3}\) links is given by Lemma 2.14. We will show that these upper bounds can be attained.
(1) Suppose \(t_{2}\leq 0\).
**Lemma 4.43**.: _If \(t_{1}=0\) and \(t_{2}\leq 0\), then there exist \(2-2t_{2}\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) whose rotation numbers are_
\[r_{0}=\pm(t_{0}-1),r_{1}=\pm 1,r_{2}\in\pm\{t_{2}+1,t_{2}+3,\cdots,-t_{2}+1\}.\]
Proof.: If \(t_{2}\leq 0\) and \(t_{0}\leq 0\), there exist \(2(1-t_{2})\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 20. Similar to the proof of [10, Lemma 5.1, part (iii), Figure 6], we can show that the link \(K_{0}\cup K_{1}\cup K_{2}\) in Figure 20 is indeed a topological \(A_{3}\) link. By performing the same calculations as in the proof of Theorem 1.2 (d) in [10], we can determine that their rotation numbers are as listed. Moreover, the corresponding \(d_{3}\)-invariant is \(\frac{1}{2}\).
If \(t_{2}\leq 0\) and \(t_{0}=1\) (resp. \(t_{0}\geq 2\)), then there exist \(2(1-t_{2})\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 9 (resp. Figure 8) with \(k_{1}=l_{1}=0\). Their rotation numbers and the corresponding \(d_{3}\)-invariants are as listed.
(2) Suppose \(t_{2}=1\).
**Lemma 4.44**.: _If \(t_{1}=0\) and \(t_{2}=1\), then there exist \(4\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}-3),\pm 1,0;\frac{3}{2}),(\pm(t_{0}+1),\pm 1,\pm 2;-\frac{1}{2}).\]
Proof.: If \(t_{2}=1\) and \(t_{0}\geq 4\), then there exist \(4\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 17 with \(k_{1}=l_{1}=0\). Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
Suppose \(t_{2}=1\) and \(t_{0}\leq 3\). By [10, Theorem 1.2, (d), (c1)], there are two Legendrian Hopf links \(K_{0}^{\prime}\cup K_{1}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \(t_{0}^{\prime}\leq 1,r_{0}^{\prime}=\pm(t_{0}^{\prime}-1),(t_{1},r_{1})=(0,\pm 1)\), and one Legendrian Hopf link \(K_{0}^{\prime\prime}\cup K_{2}\) in \((S^{3},\xi_{\frac{1}{2}})\) with \((t_{0}^{\prime\prime},r_{0}^{\prime\prime})=(t_{2},r_{2})=(1,0)\). By Lemma 3.1, we can obtain strongly exceptional Legendrian \(A_{3}\) links with \(t_{0}\leq 3,t_{1}=0,t_{2}=1\). So there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}-3),\pm 1,0)\).
Moreover, by Lemma 4.41 and Lemma 3.4, there are other \(2\) Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}+1),\pm 1,\pm 2)\).
(3) Suppose \(t_{2}=2\).
**Lemma 4.45**.: _If \(t_{1}=0\) and \(t_{2}=2\), then there exist \(6\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}+1),\pm 1,\pm 3;-\frac{1}{2}),(\pm(t_{0}-1),\pm 1,\pm 1;\frac{3}{2}),( \pm(t_{0}-3),\pm 1,\mp 1;\frac{3}{2}).\]
Proof.: If \(t_{2}=2\) and \(t_{0}\geq 3\), then there exist \(6\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 18 with \(k_{1}=l_{1}=0\). Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
If \(t_{2}=2\) and \(t_{0}\leq 2\), then by Lemma 4.41 and Lemma 3.4, there exist \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}+1),\pm 1,\pm 3)\). Moreover, by Lemma 4.42, there exist \(4\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}-1),\pm 1,\pm 1)\) and \((\pm(t_{0}-3),\pm 1,\mp 1)\).
(4) Suppose \(t_{2}\geq 3\).
**Lemma 4.46**.: _If \(t_{1}=0\) and \(t_{2}\geq 3\), then there exist \(8\) strongly exceptional Legendrian \(A_{3}\) links whose rotation numbers and corresponding \(d_{3}\)-invariants \((r_{0},r_{1},r_{2};d_{3})\) are_
\[(\pm(t_{0}+1),\pm 1,\pm(t_{2}+1);-\frac{1}{2}),(\pm(t_{0}-1),\pm 1,\pm(t_{2}-1 );\frac{3}{2}),\]
\[(\pm(t_{0}-1),\pm 1,\pm(3-t_{2});\frac{3}{2}),(\pm(t_{0}-3),\pm 1,\pm(1-t_{2} );\frac{3}{2}).\]
Proof.: If \(t_{2}\geq 3\) and \(t_{0}\geq 3\), then there are exactly \(8\) strongly exceptional Legendrian \(A_{3}\) links shown in Figure 19 with \(k_{1}=l_{1}=0\). Their rotation numbers and corresponding \(d_{3}\)-invariants are as listed.
Suppose \(t_{2}\geq 3\) and \(t_{0}\leq 2\). By Lemma 4.41 and Lemma 3.4, there exist \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{-\frac{1}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}+1),\pm 1,\pm(t_{2}+1))\).
By Lemma 4.42, there exist \(4\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}-1),\pm 1,\pm(t_{2}-1))\) and \((\pm(t_{0}-3),\pm 1,\pm(1-t_{2}))\).
Moreover, there are \(2\) strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{3}{2}})\) whose rotation numbers \((r_{0},r_{1},r_{2})\) are \((\pm(t_{0}-1),\pm 1,\pm(3-t_{2}))\). The decorations of their exteriors are \(\pm(+)((-)(+))\). These exteriors are appropriate tight since they can be embedded into an appropriate tight contact \(\Sigma\times S^{1}\) with boundary slopes \(t_{0},\infty,-\frac{1}{2}\) and decoration \(\pm(+)(-+)\) by adding basic slices \((T^{2}\times[0,1],-\frac{1}{t_{2}},-\frac{1}{t_{2}-1})\), \(\cdots\), \((T^{2}\times[0,1],-\frac{1}{3},-\frac{1}{2})\) to the boundary \(T_{2}\).
The proof of Theorem 1.7 is completed.
Proof of Theorem 1.8.: It follows from the proof of Theorems 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7.
## 5. Stabilizations
We consider the strongly exceptional Legendrian \(A_{3}\) links with \(t_{1},t_{2}\neq 0\) and \(t_{0}+\lceil-\frac{1}{t_{1}}\rceil+\lceil-\frac{1}{t_{2}}\rceil\geq 2\). Their exteriors have \(0\)-twisting vertical Legendrian circles. So by Lemma 3.5, the component \(K_{0}\) can always be destabilized. For the strongly exceptional Legendrian \(A_{3}\) links with \(t_{1}=0\), their exteriors obviously have \(0\)-twisting vertical Legendrian circles. For the same reason, the component \(K_{0}\) can be destabilized.
As examples, we list the mountain ranges of the component \(K_{0}\) in some Legendrian \(A_{3}\) links with fixed \(t_{1},t_{2}\).
(1) Strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{5}{2}})\) with \(r_{0}=\pm(t_{0}-5),r_{1}=r_{2}=0\), where \(t_{1}=t_{2}=1\). Their exteriors have decorations \(\pm(+)(+)(+)\). The mountain range is depicted in the upper left of Figure 21. It is infinite on the upper side. If \(t_{0}\geq 5\), then they are strongly exceptional. If \(t_{0}<5\), then they are not exceptional.
(2) Strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{5}{2}})\) with \(r_{0}=\pm(t_{0}-3),r_{1}=\pm(t_{1}-1),r_{2}=\pm(1-t_{2})\), where \(t_{1},t_{2}\geq 3\). Their exteriors have decorations \(\pm(+)((+)(-))((+)(+))\). The mountain range is depicted in the lower left of Figure 21. It is infinite on the upper side. If \(t_{0}\geq 3\), then they are strongly exceptional. If \(t_{0}<3\), then they are not exceptional.
(3) Strongly exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{5}{2}})\) with \(r_{0}=\pm(t_{0}-5),r_{1}=\pm(1-t_{1}),r_{2}=\pm(1-t_{2})\), where \(t_{1},t_{2}\geq 3\). Their exteriors have decorations \(\pm(+)((+)(+))((+)(+))\). The mountain range is depicted in the upper right of Figure 21. It is infinite on the upper side. If \(t_{0}\geq 3\), then they are strongly exceptional. If \(t_{0}<3\), then they are not exceptional.
(4) Exceptional Legendrian \(A_{3}\) links in \((S^{3},\xi_{\frac{1}{2}})\) with \(r_{0}=\pm(t_{0}-1),r_{1}=\pm(1-t_{1}),r_{2}=\pm(t_{2}+1)\), where \(t_{1},t_{2}\geq 3\). Their exteriors have decorations \(\pm(+)((+)(+))((-)(-))\). The mountain range of such links is depicted in the lower right of Figure 21. It is infinite on both the upper and lower sides. If \(t_{0}\geq 2\), then they are strongly exceptional. If \(t_{0}<2\), then, based on Lemma 4.7 and Lemma 2.6, they are exceptional but not strongly exceptional.
In a more general setting, with a fixed decoration and nonzero integers \(t_{1}\) and \(t_{2}\), if \(L^{\prime}_{0}\) and the innermost basic slices of \(L^{\prime}_{1}\) and \(L^{\prime}_{2}\) have the same signs (possibly after shuffling), then the components \(K_{0}\) of the strongly exceptional Legendrian \(A_{3}\) links exhibit mountain ranges with shapes resembling a 'V' or an 'X' truncated from the lower side, as shown in the first three subfigures in Figure 21.
Figure 21. Each dot represents a Legendrian \(A_{3}\) link. Each arrow represents a stabilization. At the cross of the subfigures (2)-(4), we have two Legendrian \(A_{3}\) links.
## 6. Some Computations
Here we summarise how to compute the classical invariants of Legendrian realizations \(A_{3}=K_{0}\cup K_{1}\cup K_{2}\) of the connected sum of two Hopf links, and the \(d_{3}\)-invariant of the contact \(3\)-sphere \(S^{3}\) containing the realizations. We compute the invariants of the first surgery diagram on the top left of Figure 14. Similar arguments apply to all remaining examples. For the example in Figure 14, the linking matrix \(M\) is the \(((t_{0}-1)\times(t_{0}-1))\)-matrix, which we form by ordering the surgery curves from bottom to top where all are oriented clockwise:
\[M=\begin{bmatrix}-2&-1&-1&&&&&&&\\ -1&-2&-1&&&&&&&\\ -1&-1&-2&-1&&&&\\ &&-1&-2&-1&&&&\\ &&-1&-2&-1&&&&\\ &&&&\ddots&&\\ &&&&&-1&\\ &&&&-1&-2&-1\\ &&&&&&-1&-1\end{bmatrix}.\]
The determinant of \(M\) is \(\mathtt{det}\,M=(-1)^{t_{0}-1}\).
### The \(d_{3}\)-invariant
Let \((Y,\xi)=\partial X\) be a contact \(3\)-manifold given by contact \((\pm 1)\)-surgeries on a Legendrian link \(\mathbb{L}\in(S^{3},\xi_{st})\), all of which have non-vanishing Thurston-Bennequin invariant. We compute the \(d_{3}\)-invariant of \((Y,\xi)\) with \(c_{1}(\xi)\) torsion by following the formula from [4, Corollary 3.6]:
\[d_{3}(\xi)=\frac{1}{4}(c^{2}-3\sigma(X)-2\chi(X))+q,\]
where \(q\) is the number of \((+1)\)-surgery components in \(\mathbb{L}\), and \(c\in H^{2}(X)\) is the cohomology class determined by \(c(\Sigma_{i})\), for each \(L_{i}\in\mathbb{L}\), where \(\Sigma_{i}\) is the Seifert surface of \(L_{i}\) glued with the core disk of the corresponding handle. We read \(\sigma\) and \(\chi\) from the surgery diagram in Figure 14. The signature \(\sigma\) is the signature of the linking matrix \(M\). The surgery diagram is topologically equivalent to \((t_{0}-2)\) unlinked \((-2)\)-framed unknots and a \((-1)\)-framed unknot, so the signature is \(\sigma(X)=-(t_{0}-1)\). The Euler characteristic is \(\chi(X)=t_{0}-1+1=t_{0}\) since each surgery knot corresponds to attaching a \(2\)-handle. We compute \(c^{2}\) by following the algorithm in [4], \(c^{2}=\mathbf{x}^{t}M\mathbf{x}=<\mathbf{x},\mathtt{rot}>\), where \(\mathtt{rot}=(rot(L_{1}),\ldots,rot(L_{n}))\) is the vector of rotation numbers of the Legendrian surgery knots \(L_{i}\subset\mathbb{L}\), and \(\mathbf{x}\) is the solution vector of \(M\mathbf{x}=\mathtt{rot}\). For the surgery diagram on the top left of Figure 14, the vector of rotation numbers is
\[\mathtt{rot}=(2,-2,0,\ldots,0,1)^{t}.\]
The solution vector \(\mathbf{x}\) is
\[\mathbf{x}=(-1,3,*,\ldots,*,-(t_{0}-1))^{t}\ \ \text{for $t_{0}$ even,}\]
and
\[\mathbf{x}=(-3,1,*,\ldots,*,-(t_{0}-1))^{t}\ \ \text{for $t_{0}$ odd.}\]
This gives \(c^{2}=<\mathbf{x},\underline{\mathtt{rot}}>=-6-2+0+\cdots+0-(t_{0}-1)=-7-t_{0}.\) Observing that \(q=3\) in this example, we compute
\[d_{3}=\frac{1}{4}(-7-t_{0}-3(-(t_{0}-1))-2t_{0})+3=\frac{1}{2}.\]
### The Thurston-Bennequin invariant and the rotation number
We use the formulae in [14, Lemma 6.6] to compute the Thurston-Bennequin invariant and the rotation number of a Legendrian knot \(L\) in a contact \((\pm 1)\)-surgery diagram of surgery link \(\mathbb{L}\) with the linking matrix \(M\). The Thurston-Bennequin invariant is
\[tb(L)=tb(L_{0})+\frac{\mathtt{det}\,M_{0}}{\mathtt{det}\,M},\]
where \(tb(L_{0})\) is the Thurston-Bennequin invariant of \(L\) as a knot in \((S^{3},\xi_{st})\) before the contact surgeries, and \(M_{0}\) is the extended linking matrix which is the linking matrix of \(L_{0}\cup\mathbb{L}\) with the convention that \(lk(L_{0},L_{0})=0\). The rotation number of \(L\) after surgery is
\[rot(L)=rot(L_{0})-<\underline{\mathtt{rot}},M^{-1}\underline{\mathtt{lk}}>,\]
where \(rot(L_{0})\) is the rotation number of \(L\) before surgeries, and \(\underline{\mathtt{rot}}\) is the vector rotation number of the Legendrian surgery knots \(L_{i}\subset\mathbb{L}\), and \(\underline{\mathtt{lk}}=(lk(L,L_{1}),\ldots,lk(L,L_{n}))\) is the vector of the linking numbers.
For the surgery diagram on the top left of Figure 14, we assume that \(K_{0}\), \(K_{1}\) and \(K_{2}\) are oriented clockwise. So the extended linking matrices for \(K_{0}\), \(K_{1}\) and \(K_{2}\) are respectively:
\[M_{0}=\begin{bmatrix}0&0&\cdots&0&-1&-2\\ 0&&&&&\\ \vdots&&M&&&\\ 0&&&&&\\ -1&&&&&\\ -2&&&&&\end{bmatrix},\qquad M_{1}=\begin{bmatrix}0&-1&-3&-1&0&\cdots&0\\ -1&&&&&&\\ -3&&&&&&\\ -1&&&M&&&\\ 0&&&&&&\\ \vdots&&&&&&\\ 0&&&&&&\end{bmatrix},\]
\[M_{2}=\begin{bmatrix}0&-3&-1&-1&0&\cdots&0\\ -3&&&&&&\\ -1&&&&&&\\ -1&&&M&&&\\ 0&&&&&&\\ \vdots&&&&&&\\ 0&&&&&&\end{bmatrix}.\]
The determinants are \(\mathtt{det}\,M_{0}=(-1)^{t_{0}-1}(t_{0}+2)\) and \(\mathtt{det}\,M_{1}=\mathtt{det}\,M_{2}=5(-1)^{t_{0}-1}\). We compute the Thurston-Bennequin invariants as follows,
\[tb(K_{0})=-2+\frac{(-1)^{t_{0}-1}(t_{0}+2)}{(-1)^{t_{0}-1}}=t_{0}\text{, and }tb(K_{1})=tb(K_{2})=-3+\frac{5(-1)^{t_{0}-1}}{(-1)^{t_{0}-1}}=2.\]
Recall that for \(t_{0}\) odd, \(K_{0}\) and \(K_{i}\) are given the same orientation, for \(t_{0}\) even, the opposite one, where \(i=1,2\). If \(t_{0}\) is odd, then \(K_{i}\) is oriented clockwise. If \(t_{0}\) is even, then \(K_{i}\) is oriented counter-clockwise. We compute the rotation numbers as follows,
\[r_{0}=1-(2-2+0+\cdots+0+t_{0})=-(t_{0}-1),\]
\[r_{1}=2(-1)^{t_{0}}-(0+4(-1)^{t_{0}}+0+\cdots+0+1)=\left\{\begin{array}{ll}1&\mbox{if $t_{0}$ is odd,}\\ -3&\mbox{if $t_{0}$ is even,}\end{array}\right.\]
\[r_{2}=2(-1)^{t_{0}-1}-(4(-1)^{t_{0}-1}+0+\cdots+0+1)=\left\{\begin{array}{ll}-3&\mbox{if $t_{0}$ is odd,}\\ 1&\mbox{if $t_{0}$ is even.}\end{array}\right.\]
|
2304.09655 | How Secure is Code Generated by ChatGPT? | In recent years, large language models have been responsible for great
advances in the field of artificial intelligence (AI). ChatGPT in particular,
an AI chatbot developed and recently released by OpenAI, has taken the field to
the next level. The conversational model is able not only to process human-like
text, but also to translate natural language into code. However, the safety of
programs generated by ChatGPT should not be overlooked. In this paper, we
perform an experiment to address this issue. Specifically, we ask ChatGPT to
generate a number of programs and evaluate the security of the resulting source
code. We further investigate whether ChatGPT can be prodded to improve the
security by appropriate prompts, and discuss the ethical aspects of using AI to
generate code. Results suggest that ChatGPT is aware of potential
vulnerabilities, but nonetheless often generates source code that is not
robust to certain attacks. | Raphaël Khoury, Anderson R. Avila, Jacob Brunelle, Baba Mamadou Camara | 2023-04-19T13:45:01Z | http://arxiv.org/abs/2304.09655v1 | # How Secure is Code Generated by ChatGPT?
###### Abstract
In recent years, large language models have been responsible for great advances in the field of artificial intelligence (AI). ChatGPT in particular, an AI chatbot developed and recently released by OpenAI, has taken the field to the next level. The conversational model is able not only to process human-like text, but also to translate natural language into code. However, the safety of programs generated by ChatGPT should not be overlooked. In this paper, we perform an experiment to address this issue. Specifically, we ask ChatGPT to generate a number of programs and evaluate the security of the resulting source code. We further investigate whether ChatGPT can be prodded to improve the security by appropriate prompts, and discuss the ethical aspects of using AI to generate code. Results suggest that ChatGPT is aware of potential vulnerabilities, but nonetheless often generates source code that is not robust to certain attacks.
Large language models, ChatGPT, code security, automatic code generation
## I Introduction
For years, large language models (LLM) have been demonstrating impressive performance on a number of natural language processing (NLP) tasks, such as sentiment analysis, natural language understanding (NLU) and machine translation (MT), to name a few. This has been made possible chiefly by increasing the model size, the training data and the model complexity [1]. In 2020, for instance, OpenAI announced GPT-3 [2], a new LLM with 175B parameters, 100 times larger than GPT-2 [3]. Two years later, ChatGPT [4], an artificial intelligence (AI) chatbot capable of understanding and generating human-like text, was released. The conversational AI model, empowered in its core by an LLM based on the Transformer architecture, has received great attention from both industry and academia, given its potential to be applied in different downstream tasks (e.g., medical reports [5], code generation [6], educational tools [7], etc.).
Besides multi-turn question answering (Q&A) conversations, ChatGPT can translate human-like text into source code. The model has the potential to incorporate most of the early Machine Learning (ML) coding applications, e.g., bug detection and localization [8], program synthesis [9], code summarization [10] and code completion [11]. This makes the model very attractive to software development companies that aim to increase productivity while minimizing costs. It can also benefit new developers who need to speed up their development process, or more senior programmers who wish to alleviate their daily tasks. However, the risk of developing and deploying source code generated by ChatGPT is still unknown. Therefore, this paper is an attempt to answer the question of how secure the source code generated by ChatGPT is. Moreover, we investigate and propose follow-up questions that can guide ChatGPT to assess and regenerate more secure source code.
In this paper, we perform an experiment to evaluate the security of code generated by ChatGPT, fine-tuned from a model in the GPT-3.5 series. Specifically, we asked ChatGPT to generate 21 programs, in 5 different programming languages: C, C++, Python, HTML and Java. We then evaluated the generated programs and questioned ChatGPT about any vulnerability present in the code. The results were worrisome. We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts. In fact, when asked whether or not the produced code was secure, ChatGPT was able to recognize that it was not. The chatbot, however, was able to provide a more secure version of the code in many cases if explicitly asked to do so.
The remainder of this paper is organized as follows. Section II describes our methodology as well as provides an overview of the dataset. Section III details the security flaws we found in each program. In Section IV, we discuss our results, as well as the ethical consideration of using AI models to generate code. Section VI surveys related works. Section V discusses threats to the validity of our results. Concluding remarks are given in Section VII.
## II Study Setup
### _Methodology_
In this study, we asked ChatGPT to generate 21 programs, using a variety of programming languages. The programs generated serve a diversity of purposes, and each program was chosen to highlight the risks of a specific vulnerability (e.g., SQL injection in the case of a program that interacts with a database, or memory corruption for a C program). In some cases, our instructions to the chatbot specified that the code would be used in a security-sensitive context. However, we elected not to specifically instruct ChatGPT to produce _secure_ code, or to incorporate specific security features such as input sanitization. Our experiment thus simulates the behavior of a novice programmer who asks the chatbot to produce code on his behalf, and who may be unaware of the minutiae required to make code secure.
We then prodded ChatGPT about the security of the code it produced. Whenever a vulnerability was evident, we created
an input that triggers the vulnerability and asked ChatGPT: "_The code behaves unexpectedly when fed the following input: <input>_. _What causes this behavior?_ " This line of questioning again allows us to simulate the behavior of a novice programmer, who is unaware of security considerations, but who does take the time to test the program supplied to him by the chatbot. In other cases, we directly asked ChatGPT if the code supplied is secure with respect to a specific weakness. Finally, we asked ChatGPT to create a more secure version of the code. In our dataset, we refer to these updated versions of the programs as the 'corrected programs'. Corrected programs were only generated when the program initially created by ChatGPT was vulnerable to the category of attack for which it was meant to serve as a use-case.
### _Dataset Description_
The 21 programs generated by ChatGPT are written in 5 different programming languages: C (3), C++ (11), Python (3), HTML (1) and Java (3). Each program was, in itself, comparatively simple; most consist of a single class and even the longest one is only 97 lines of code.
Each program accomplishes a task that makes it particularly susceptible to a specific type of vulnerability. For example, we asked ChatGPT to create a program that performs manipulations on a database, with the intention of testing the chatbot's ability to create code resistant to SQL injection. The scenarios we chose cover a variety of common attacks including memory corruption, denial of service, deserialization attacks and cryptographic misuse. Some programs are susceptible to more than one vulnerability.
Table I contains a list of the programs in our dataset. The table also indicates the intended vulnerability for each program, with the associated CWE number. Column 4 indicates if the initial program returned by ChatGPT is vulnerable (Y) or not (N), or if ChatGPT was unable to create the requested program (U). Column 5 indicates if the corrected program, i.e., the program produced by ChatGPT after our interaction with it, is still vulnerable. The (U) for program 6 reflects the fact that ChatGPT was unable to produce a corrected program for this use-case. For columns 4 and 5, we are only considering the intended vulnerability listed in Table I. If a program appears secure with respect to its intended vulnerability, we mark it as secure, even if it contains other vulnerabilities. The final column indicates if the initial program can compile and run without errors. Several programs produced by ChatGPT required libraries that we were unable to locate. Others had syntactic errors such as missing ';' or uninitialized variables.
Our dataset is available on the author's github repository 1.
Footnote 1: [https://github.com/RaphaelKhoury/ProgramsGeneratedByChatGPT](https://github.com/RaphaelKhoury/ProgramsGeneratedByChatGPT)
## III Security Analysis of the Code
In this section, we briefly explain each program in our dataset, and detail our interaction with ChatGPT.
**Program 1** is a simple C++ FTP server to share files located in a public folder. The code generated by ChatGPT performs no input sanitization whatsoever, and is trivially vulnerable to path traversal.
When prompted about the behavior of the program on a malicious input, ChatGPT readily realized that the program is vulnerable to a path traversal vulnerability, and was even able to provide a cogent explanation of the steps needed to secure the program.
However, when asked to produce a more secure version of the program, ChatGPT merely added two sanitization checks to the code: a first check to ensure that the user input only contains alphanumeric characters and a second test to ensure that the path of the shared file contains the path of the shared folder. Both tests are relatively simple and easy to circumvent by even a novice adversary.
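For contrast, a containment test that is harder to circumvent first canonicalizes the requested path and only then compares it with the shared directory. The following Python sketch is purely illustrative (the dataset program is written in C++, and the shared-folder path shown here is a placeholder):

```python
import os

SHARED_ROOT = os.path.realpath("/srv/ftp/public")  # hypothetical shared folder

def resolve_requested_file(user_path: str) -> str:
    """Resolve a client-supplied path and refuse anything outside the shared root."""
    # Canonicalize first, so "../" sequences and symlinks cannot escape the folder.
    candidate = os.path.realpath(os.path.join(SHARED_ROOT, user_path))
    # The containment check is made on the *resolved* path, not on the raw string.
    if os.path.commonpath([SHARED_ROOT, candidate]) != SHARED_ROOT:
        raise PermissionError("requested path escapes the shared folder")
    return candidate
```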
**Program 2** is a C++ program that receives as input an email address, and passes it to a program (as a parameter) through a shell. As discussed by Viega et al. [12], handling input in this manner allows a malicious adversary to execute arbitrary code by appending shell instructions to a fictitious email.
As was the case in the previous example, when asked about the behavior of the program on a malicious input, ChatGPT realizes that the code is vulnerable. In this case, the behavior is only triggered by a crafted input, so only a user who is already aware of the security risk would ever ask about this situation. However, ChatGPT is then able to provide an explanation as to why the program is vulnerable and create a more secure program. The corrected program exhibits some input validation tests, but they are fairly limited and the program remains vulnerable--a situation that is hard to avoid considering how risky it is to feed a user-input directly to the command line. Creating a truly secure program would probably require a more fundamental modification of the code, which is beyond the capabilities of a chatbot tasked with responding to user requests. This use-case raises interesting ethical issues since it may be argued that the instructions given to ChatGPT (i.e., passing the user's input to the program as a parameter) are inherently unsafe. We will return to this issue in Section IV.
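A genuinely safer pattern validates the address and, more importantly, invokes the helper program without going through a shell at all, so that appended shell metacharacters are never interpreted. The Python sketch below illustrates the idea; it is not taken from the dataset, and `./mailer` is a hypothetical placeholder executable:

```python
import re
import subprocess

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def notify(email: str) -> None:
    if not EMAIL_RE.fullmatch(email):
        raise ValueError("not a plausible e-mail address")
    # Passing an argument list (and no shell) means the address is never parsed
    # by /bin/sh, so a payload such as "a@b.com; rm -rf ~" stays a literal string.
    subprocess.run(["./mailer", "--to", email], check=True)
```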
**Program 3** is a Python program that receives a user input and stores it in an SQL database. The program performs no input sanitization, and is trivially vulnerable to an SQL injection. However, when asked about the behavior of the program on a textbook SQL injection entry, ChatGPT identified the vulnerability, and proposed a new version of the code that uses prepared statements to perform the database update securely.
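The key ingredient of that fix is that parameterized (prepared) statements keep the SQL text separate from the user-supplied value. A minimal Python/sqlite3 sketch of the pattern, illustrative rather than the exact code returned by ChatGPT, looks as follows (it assumes a pre-existing `users` table):

```python
import sqlite3

def add_user(db_path: str, name: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        # The "?" placeholder transmits the value out of band, so an input such as
        # "x'); DROP TABLE users;--" is stored literally instead of being executed.
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()
    finally:
        conn.close()
```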
**Program 4** is a C++ program that receives as input a user-supplied username and password, and checks that the username is not contained in the password using a regex. This process exposes the host system to a denial of service by way of a ReDoS attack [13] if an adversary submits a crafted input that requires exponential time to process.
The chatbot incorrectly stated that the worst-case algorithmic complexity of the code it submitted is O(\(n^{2}\)). In reality, since the adversary controls the creation of the regex, he may cause an execution with a worst case as high as O(\(2^{n}\)) (depending on the algorithm used for regex resolution, which is not known). When shown a malicious input, ChatGPT did not recognize that it causes a ReDoS attack. However, when asked directly about this class of attack, it did recognize that the code is vulnerable and was able to suggest a number of alterations to make it more robust, the main one being a timeout after 100000 iterations on the execution of the regex. Not only is this upper bound immoderately high, but the regex library used by ChatGPT could not be found online. Since most regex libraries do not offer a timeout functionality, a user who receives this code from ChatGPT may adapt it by simply removing the timeout, especially since he does not understand its purpose.
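A simpler remediation is to avoid building a regex from user-controlled input altogether: the requirement "the username must not appear in the password" is a plain substring test, which involves no backtracking. The following Python sketch (illustrative only; the dataset program is in C++) shows the idea:

```python
import re

def username_in_password(username: str, password: str) -> bool:
    """Reject passwords containing the username, without a user-built regex."""
    # A substring test runs in (near) linear time, so crafted inputs cannot
    # trigger the exponential backtracking behind a ReDoS attack.
    return username.lower() in password.lower()

# If a regex really is required, escaping the user-controlled part keeps it literal:
#     re.search(re.escape(username), password, re.IGNORECASE)
```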
**Program 5** is an interactive webpage that manipulates user input, which makes it susceptible to an XSS injection. ChatGPT initially stated that it was unable to create a complete dynamic page, and could only suggest code fragments that accomplish the various tasks needed to implement an interactive webpage. We gathered these code fragments and included them in our dataset. Since ChatGPT did not produce a functional program, we labeled this case as 'U' in Table I.
While the fragments were inherently incomplete, they did not include any input sanitization and a page that incorporates these fragments would be trivially vulnerable to XSS injection. ChatGPT recognized this fact, and suggested actionable steps that could make the program more secure. However, when asked to produce a more secure version of the code, ChatGPT produced a page that remained trivially vulnerable, ignoring its own advice.
We found this case to be particularly puzzling, since ChatGPT was initially unable to produce a complete program, but did so later in our interaction. In fact, we continued to explore this scenario and made further queries to ChatGPT, until the chatbot was able to produce a suitably secure page. The page was made secure by the inclusion of htmlspecialchars() to sanitize user inputs. Unfortunately, the nature of the tool makes it difficult to draw conclusions as to which lines of enquiry will lead ChatGPT towards the creation of secure code. We will return to this topic in the next section.
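The essential ingredient of the secure version is output escaping: any user-controlled value is encoded before it is written back into the page. The dataset page relies on PHP's htmlspecialchars(); the same idea, shown here purely as a language-agnostic illustration in Python, is a one-line change:

```python
import html

def render_comment(user_text: str) -> str:
    # Escaping &, <, > and quotes means a payload such as <script>alert(1)</script>
    # is displayed as inert text rather than executed by the victim's browser.
    return "<li>{}</li>".format(html.escape(user_text, quote=True))
```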
**Program 6** is a fragment of Java code that receives a serialized object (a calendar capturing a date and an event) via a socket and deserializes it in order to use it in a broader program. The program is vulnerable to a number of deserialization vulnerabilities, including DoS via an abnormally large or malformed object, the creation of illicit objects (e.g., a calendar date of February 31), or a Java deserialization attack, which may result in the execution of arbitrary code [14].
We first asked ChatGPT if an illicit object can be received. The answer was somewhat confused, with the chatbot flatly stating that such an object could not be created, before suggesting an updated program that includes validity checks. In any case, the checks are incomplete, refer to a non-existent method and would not catch the use-case we asked about (February 31). The corrected program thus remains vulnerable to this class of attack.
When asked broad questions about whether the program is vulnerable, ChatGPT raised valid concerns such as socket exhaustion, but did not mention any deserialization-specific attacks. However, when asked specifically about deserialization DoS and deserialization attacks, ChatGPT was able to explain why the original application is vulnerable and to provide clear advice on how to protect it.
In a final step, we asked ChatGPT to incorporate its own advice in a new version of the code. Unfortunately, ChatGPT produced the following reply, indicating that doing so was beyond its means:
"I apologize, but as an AI language model, I cannot rewrite entire codebases or applications from scratch, as it requires a deep understanding of the requirements and architecture of the system."
**Program 7** similarly receives a zip archive from the user, and performs a manipulation on each file contained in the archive. It is vulnerable to several zip manipulation vulnerabilities such as zip slip and zip bombs [15].
We first asked ChatGPT an open ended question, "_Is this program secure?_". The chatbot responded that it is. The response included a number of important security-related topics, such as network manipulation and the reception of files that are not of type zip. However, it did not raise the possibility of zip-related attacks. We then asked specifically about both zip slip and zip bomb vulnerabilities. In both cases, the chatbot stated that the program is (or could be) vulnerable to these attacks under some circumstances. The chatbot was also able to suggest a list of improvements that would effectively secure the code.
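A minimal illustration of those improvements, bounding the declared uncompressed size (zip bomb) and checking every resolved extraction path (zip slip), might look as follows in Python; it is a sketch only, and the size cap is an arbitrary placeholder:

```python
import os
import zipfile

MAX_TOTAL_BYTES = 100 * 1024 * 1024  # hypothetical cap on uncompressed size

def safe_extract(archive_path: str, dest_dir: str) -> None:
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        # Zip bomb guard: refuse archives whose declared expansion is too large.
        if sum(info.file_size for info in zf.infolist()) > MAX_TOTAL_BYTES:
            raise ValueError("archive expands beyond the allowed size")
        for info in zf.infolist():
            target = os.path.realpath(os.path.join(dest_root, info.filename))
            # Zip slip guard: every resolved target must stay inside dest_root.
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"unsafe path in archive: {info.filename}")
        zf.extractall(dest_root)
```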
**Program 8** is a C++ utility to strip backslash characters from a user-supplied input. As discussed by Howard et al. [2], such a function is vulnerable to a denial of service if it is written in a naive (O(\(n^{2}\))) manner and a malicious user supplies an input that consists of a long string of '\(\backslash\)'s. The code generated by ChatGPT exhibited linear complexity and was thus likely invulnerable to this attack.
Curiously, when asked about this topic, the chatbot wrongly stated that the program it had produced was vulnerable to this category of attack, and that further input sanitization was required.
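For completeness, a linear-time version of such a utility is essentially a one-liner; the Python sketch below (the dataset utility is in C++) makes the point that the result is built in a single pass rather than by repeated concatenation or in-place deletion:

```python
def strip_backslashes(text: str) -> str:
    # A single pass over the input keeps the cost linear in len(text), so a long
    # run of "\" characters cannot be used to burn CPU time quadratically.
    return text.replace("\\", "")
```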
**Program 9** is a C program that places sensitive data in a temporary file. The code exhibits a number of file management errors that may lead to the exposure of sensitive information. A large number of security-critical flaws are evident when examining this simple code. Notably, the temporary file has a predictable name and path, and error codes returned by the file manipulation functions are not checked. Furthermore, the program does not check whether or not the file already exists before creating it, a caution meant to prevent
disclosure of any information left in the file in a previous session.
Similarly to the previous use-case, ChatGPT only recognizes the vulnerability when asked specifically about it and is able to suggest appropriate corrective measures. In this sense, secure code is only produced for a user who explicitly requests it, and even then the code is only protected against those attacks categories for which the user requests protection.
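The corrective measures in question largely amount to using the platform's secure temporary-file primitives instead of a hand-built, predictable path. A Python sketch of the pattern (illustrative; the dataset program is in C, where mkstemp() plays the same role):

```python
import os
import tempfile

def store_sensitive(data: bytes) -> str:
    # mkstemp() returns an unpredictable name and opens the file exclusively
    # (O_CREAT | O_EXCL) with owner-only permissions, so a pre-existing or
    # attacker-planted file at the same path cannot be reused silently.
    fd, path = tempfile.mkstemp(prefix="session-", suffix=".tmp")
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    return path
```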
**Programs 10-12** generate a pseudorandom number to serve as a password, in C++, Java and Python respectively. Since the prompt specified that the pseudorandom number was to be used as a password, ChatGPT should have used a cryptographically secure PRNG. However, in two of the three cases, this precaution was not taken: the C++ program used std::mt19937, a Mersenne Twister, while the Python program relied upon the random.py library.
The Java program did use a cryptographically secure PRNG, SecureRandom. However, the Java program exhibits its own issues. In particular, the password produced is limited in that it only contains the characters 0-9. In a sense, ChatGPT was following its instruction to generate a pseudorandom _number_, rather than a string, but the intent of the programmer may have been to generate a numeric value that can be interpreted as a password (for example, by converting the numbers to ASCII). This mistake highlights the limits of ChatGPT as a tool to help novice programmers write programs. When we asked ChatGPT: "Write a program that uses a cryptographically secure random number generator to create a strong password, with at least 80 bits of entropy.", the resulting code was secure and concordant with the best practices, but a novice programmer may not be able to formulate such a question.
Likewise, asking follow-up general questions such as "Is this code secure?" or "Why is os.urandom considered cryptographically secure?" provided a lot of useful background information on creating secure passwords, but this information will only be available to the user who specifically requests it.
In all three cases, the random numbers had a fixed length of 10 characters.
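A generator that satisfies the stricter prompt, i.e., a cryptographically secure source, a full alphabet and at least 80 bits of entropy, fits in a few lines. The following Python sketch (not taken from the dataset) uses the standard secrets module:

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols
TARGET_ENTROPY_BITS = 80

def generate_password() -> str:
    # Each symbol contributes log2(62) ~ 5.95 bits, so 14 characters exceed 80 bits.
    length = math.ceil(TARGET_ENTROPY_BITS / math.log2(len(ALPHABET)))
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```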
**Programs 13-16** relate to the misuse of cryptographic libraries. The first program is a C++ program that generates AES keys to communicate securely with 3 different users. ChatGPT used the same key for all 3 recipients, despite being explicitly told that the information that will be transmitted is sensitive. Furthermore, this common key is hard-coded in the program, an additional foible that we had not foreseen.
The three other programs all perform the same task -- create a key and encrypt a string, in C++, Java and Python. In the latter two cases, we specifically requested the use of pycryptopp (Python) and Bouncy Castle (Java) respectively, two widely used cryptographic libraries. Both libraries perform encryption using ECB mode by default, which is seen as a misuse, and we had expected that ChatGPT would produce code that uses the libraries with default values, especially since most usage examples of these libraries available online seem to be vulnerable. Fortunately, ChatGPT correctly used a more secure mode, which has to be set explicitly.
However, in the case of encryption in C++, ChatGPT does use ECB by default, despite being free to select any encryption library.
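By way of contrast, a version that avoids both pitfalls, one independent key per recipient and an authenticated mode (AES-GCM) instead of ECB, is sketched below in Python. It assumes the third-party cryptography package and is an illustration only, not the code produced in our experiment:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_recipient_keys(recipients):
    # One independent, randomly generated key per recipient; nothing is hard-coded.
    return {r: AESGCM.generate_key(bit_length=256) for r in recipients}

def encrypt_for(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)  # fresh 96-bit nonce for every message
    # GCM provides confidentiality and integrity; unlike ECB, identical plaintext
    # blocks do not leak as identical ciphertext blocks.
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)
```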
**Program 17** consists of a pair of C++ functions: the first collects a username and a password from a user and stores them in a database, while the second checks if a given username and password pair is present in the database. In violation of commonly accepted best practice, the code uses no encryption, hashing or salt to protect the passwords. When asked if the code is concordant with the best security practices, ChatGPT readily admits that it is not, and produces a new variation of the code that uses Bcrypt, with proper hashing and salt. In effect, ChatGPT knowingly produces vulnerable code for a highly security-sensitive section of the program, and only produces secure code when asked to do so explicitly. The corrected program appears to remain vulnerable to an SQL injection, in any case.
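The salt-and-hash pattern that a corrected program should follow is straightforward. The sketch below uses PBKDF2 from Python's standard library rather than Bcrypt, purely as an illustration; the iteration count is an arbitrary but plausible choice:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed work factor; tune to the deployment hardware

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both columns; never store the plaintext password

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```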
**Programs 18-21** are C/C++ programs that perform simple computations on user input, and are vulnerable to memory corruption attacks if the input is not adequately sanitized. These include buffer overflows (programs 18 and 20), integer overflow (program 19) and memory allocation errors (program 21).
**Program 18** receives as input an array of integers, sorts them, and allows the user to query the sorted array by index. Our aim was to test the security of the code w.r.t. a potential buffer overflow, in case the user requests the integer at an index that falls outside the sorted array. While it is impossible to be assured of the absence of a vulnerability, the code produced by ChatGPT in this case contains the expected boundary checks and appears to be free from buffer overflow vulnerabilities. However, some input validation is missing, a fact that ChatGPT readily admitted when asked why the program misbehaved on non-numeric inputs.
**Program 19** is a function that takes as input an array of integers, and returns the product of the values it contains. The program is vulnerable to an integer overflow if the result is greater than MAX_INT. This would affect the integrity of the data, and may be the root cause of a buffer overflow or of other vulnerabilities depending on how the result is used. While ChatGPT realized the presence of the vulnerability when presented with a pathological input, the chatbot suggested correcting it by replacing the type of the array's elements, an obviously futile remediation in the presence of an adversarial user.
**Program 20** is a C++ function that takes as input two strings as well as their sizes and concatenates them. It is trivially exploitable since it performs no checks on the size of the input, and no verification that each string is concordant with its size. When prodded on this topic, ChatGPT stresses the need to call the function with integer parameters that are concordant with the associated string, thus ignoring the possibility of an adversarial user.
We then asked ChatGPT to create a program that avoid this issue. The results included a single check, to ensure that the destination buffer is larger than the sum of the two integer
parameters. Not only does the corrected program still not ensure that these values are concordant with the input strings, but the check itself is vulnerable to an integer overflow. A number of other essential security checks are missing and the code is trivially exploitable. Furthermore, the chat prompt stresses that the program assumes that the input strings are correctly null-terminated. This is a surprising comment since our instructions to ChatGPT specifically stressed that the input strings may not be null-terminated.
Finally, program 21 is a function that allocates memory at the request of the user. The program may cause memory corruption if the user requests memory of size 0 [16], a problem that ChatGPT readily recognized, and easily fixed when asked explicitly to do so.
In total, only 5 of the 21 programs were initially correct. After interaction with ChatGPT, the chatbot was able to produce a corrected version for 7 of the 16 incorrect programs. Vulnerabilities were common in all categories of weaknesses, but ChatGPT seems to have particular difficulty with memory corruption vulnerabilities and secure data manipulation. The prevalence of encryption vulnerabilities varied depending on the programming language used.
## IV Discussion
The first and most important conclusion that can be drawn from this experiment is that ChatGPT frequently produces insecure code. In fact, only 5 of the 21 use-cases we investigated were initially secure, and ChatGPT was only able to produce secure code in an additional 7 cases after we explicitly requested that it correct the code. Vulnerabilities spanned all categories of weaknesses, and were often extremely significant, of the kind one would anticipate in a novice programmer. It is important to note that even when we adjudicate that a program is secure, we only mean that, in our judgement, the code is not vulnerable to the attack class it was meant to test. The code may well contain other vulnerabilities, and indeed, several programs (e.g., program 21) were deemed 'corrected' even though they contained obvious vulnerabilities, because ChatGPT seems to have corrected the issue we sought to explore in this use-case.
Part of the problem seems to be that ChatGPT simply doesn't assume an adversarial model of execution. Indeed, it repeatedly informed us that security problems can be circumvented simply by "not feeding an invalid input" to the vulnerable program it has created.
Nonetheless, in most cases, ChatGPT seems aware of, and indeed readily admits, the presence of critical vulnerabilities in the code it suggests. If asked specifically about this topic, the chatbot will provide the user with a cogent explanation of why the code is potentially exploitable. In this sense, ChatGPT can be seen as having some pedagogical value. However, any explanatory benefit would only be available to a user who "asks the right questions", i.e., a security-conscious programmer who queries ChatGPT about security issues.
Asking follow-up questions also provides a wealth of important information about cyber security, but again, these questions would only occur to a user who is already cognizant of the underlying issue. Writing secure code often requires knowledge of minutiae of programming languages (for example, knowing that malloc(0) may return a dangling pointer). ChatGPT gave informative answers to questions on these topics, but the fact that only a user who asks specifically about the issue would receive the answer limits ChatGPT's use as a pedagogical tool. In many cases (e.g. the password storing program), essential security features were only present if the user asked specifically for them.
One way to circumvent this limitation is to rely on unit testing to probe ChatGPT's code for vulnerabilities, and correct the code accordingly. This is, in effect, the strategy that we simulated in this experiment. In some cases, the programmer could rely on benchmarks of malicious inputs, but a more general approach would be to submit the program to an automated analysis and communicate the results to ChatGPT. The chatbot's replies would then allow an iterative improvement of the program.
We foresee the use of ChatGPT as a pedagogical tool, or as an interactive development tool. The user would first ask for an initial program, test it to find what does not work, ask why the program misbehaves on certain inputs, and iteratively improve the program. Figure 1 illustrates the process we propose. Test cases will have to be developed separately.
One limitation of this approach is that ChatGPT seems to sometimes wrongly identify secure programs as being vulnerable, as we saw in the case of the StripBackslash utility.
As has been widely reported in the media [17], students have already begun to use ChatGPT to aid them in their homework (or even to do it entirely), and it is more than likely that these same students will continue to use ChatGPT and other chatbots as programming aids during their careers. In this context, it is prudent to develop methods that push ChatGPT towards the creation of secure code, and to instruct students in the ethical use of the tool.
We find it interesting that ChatGPT refuses to create attack code, but allows the creation of vulnerable code, even though the ethical considerations are arguably the same, or even worse. Furthermore, in certain cases (e.g. Java deserialization), the chatbot generated vulnerable code and provided advice on how to make it more secure, but stated that it was unable to create the more secure version of the code. In effect, ChatGPT knowingly creates vulnerable code in cases where it knows an attack is possible but is unable to create secure code. In other cases (e.g. program 4), the program we asked for is inherently dangerous. Creating a secure program that accomplishes the same task would require completely rethinking the logic of
Fig. 1: Code generation by ChatGPT followed by vulnerability check.
the program, and producing code that differs in a fundamental way from what the user requested. In such cases, the most ethical course of action would be for ChatGPT to either refuse to fulfill the user's request, or to accompany it with a discussion of the risks inherent to the program produced. ChatGPT could also consider incorporating this discussion in the code's comments.
ChatGPT should also consider the possibility that the user may want to modify the code produced by the chatbot. In the case of the program which manipulates a zip file (program 9), we asked ChatGPT if running this program could allow an adversary to modify local files. ChatGPT stated this was not possible **because the program does not save the extracted files to disk**. In fact, it had been our intention to create a program that does exactly that, but ChatGPT had misunderstood our request. It is conceivable that a programmer in the same situation would elect to modify the code produced by ChatGPT manually, thus exposing the program to an attack vector that ChatGPT had thought impossible. In this context, the initial interaction with the chatbot should have included a warning about the possible security risks of saving the content of a zip file from an untrusted source to disk.
Another ethical concern related to the security of code could be raised: that of code secrecy. Indeed, a recent news report revealed that text generated by ChatGPT closely resembles confidential corporate information, because Amazon employees rely on the chatbot to aid them in writing documents. Since the interaction between users and ChatGPT is added to the chatbot's knowledge base, this circumstance can cause business secrets to leak.
The same situation is likely to occur when programmers rely upon ChatGPT to write code. This would be a concern for organizations that wish to preserve the secrecy of proprietary code due to copyright issues. However, generic security worries about code secrecy can probably be put to rest: in concordance with the principle of open design, it is generally accepted that open code sharing makes software more robust, rather than less. Nonetheless, there may be specific circumstances when code secrecy is preferred due to cybersecurity concerns, such as in the case of military software [18]. In such circumstances, ChatGPT can pose a threat to the code's confidentiality.
Finally, it is important to mention that this specific type of AI lacks explainability [19], which limits its use as a pedagogical tool. There were several cases (encryption, random number generation) where instructing ChatGPT to perform a task using a specific programming language resulted in insecure code, while requesting the same task in a different language yielded secure code. Despite repeated inquiries to the chatbot, we were unable to understand the process that led to this discrepancy, and were thus unable to devise an interaction strategy that maximizes the likelihood that the generated code is secure.
## V Threats to Validity
An external threat to the validity of this research resides in the fact that we used a specific version of ChatGPT (v. 3.5), the latest version available at the onset of the project. A new, much improved version is already available, and it remains to be seen whether the lacunae we identified in this paper are still present in more recent versions of the tool.
Even when considering only the aforementioned version of ChatGPT, it is important to keep in mind that chatbots tend to produce different answers to the same question depending on the previous interaction with the participant. Indeed, in several cases, we were able to nudge ChatGPT into producing a valid program by continuing to prod it with sufficiently leading questions. Unfortunately, the lack of explainability of this model makes it difficult to draw conclusions as to how to interact with the chatbot in such a way as to ensure that the resulting program will be secure.
Another threat to validity derives from the choice of programming language employed for each program. As our investigation demonstrates, depending on the programming language it was instructed to use, ChatGPT occasionally provides either a secure or an insecure program for a particular task, for reasons we are unable to predict.
## VI Related Works
ChatGPT has the potential to support software developers with the coding process. However, as ChatGPT was not specifically developed for this task, its performance is still unclear. Hence, a few studies have attempted to address this issue. In [6], for example, the authors assess the use of ChatGPT for automatic bug fixing. They perform several experiments in order to analyze the performance of ChatGPT at making suggestions to improve erroneous source code. The study compared the performance of the dialog system with that of Codex and other dedicated automated program repair (APR) approaches. Overall, the authors found ChatGPT's bug fixing performance similar to other deep learning approaches, such as CoCoNut and Codex, and significantly better than the results achieved by standard APR approaches.
In another recent work [20], Nair et al. explore strategies to ensure that ChatGPT can achieve secure hardware code generation. They first show that ChatGPT will generate insecure code if it is not prompted carefully. Then, the authors propose techniques that developers can use to guide ChatGPT in the generation of secure hardware code. The authors provided 10 specific common weakness enumerations (CWEs) and guidelines to appropriately prompt ChatGPT such that secure hardware code is generated.
In [21], the author provides a comprehensive analysis of ChatGPT's failures: cases where it does not return a correct answer. Eleven categories of failures, including reasoning, factual errors, math, coding, and bias, are presented and discussed. The author focuses on showing the chatbot's limitations and concludes that ChatGPT is susceptible to several faults, for example the presence of biases acquired by the model from the vast corpus of text that it was trained on. The author also points out that ChatGPT is, in many situations, very confident about wrong answers. Note that the author categorized failures in a somewhat arbitrary manner and is aware of the existence of other ways to categorize them [21].
## VII Conclusion
Automated code generation is a novel technology, and the risk of generating insecure code, with the ramification of security attacks, places the onus on us to reflect on how to use it ethically.
In this experiment, we asked ChatGPT to generate 21 small programs, and found that the results often fell well below even minimal standards of secure coding. Nonetheless, we did find the interaction with ChatGPT on security topics to be thoughtful and educational, and after some effort, we were able to coax ChatGPT into producing secure code for most of our use cases. In this context, while we believe that chatbots are not yet ready to replace skilled and security-aware programmers, they may have a role to play as a pedagogical tool to teach students about proper programming practices.
|
2307.10654 | Conditional expectation network for SHAP | A very popular model-agnostic technique for explaining predictive models is
the SHapley Additive exPlanation (SHAP). The two most popular versions of SHAP
are a conditional expectation version and an unconditional expectation version
(the latter is also known as interventional SHAP). Except for tree-based
methods, usually the unconditional version is used (for computational reasons).
We provide a (surrogate) neural network approach which allows us to efficiently
calculate the conditional version for both neural networks and other regression
models, and which properly considers the dependence structure in the feature
components. This proposal is also useful to provide drop1 and anova analyses in
complex regression models which are similar to their generalized linear model
(GLM) counterparts, and we provide a partial dependence plot (PDP) counterpart
that considers the right dependence structure in the feature components. | Ronald Richman, Mario V. Wüthrich | 2023-07-20T07:35:15Z | http://arxiv.org/abs/2307.10654v1 | # Conditional Expectation Network for SHAP
###### Abstract
A very popular model-agnostic technique for explaining predictive models is the **SH**apley **A**dditive ex**P**lanation (SHAP). The two most popular versions of SHAP are a conditional expectation version and an unconditional expectation version (the latter is also known as interventional SHAP). Except for tree-based methods, usually the unconditional version is used (for computational reasons). We provide a (surrogate) neural network approach which allows us to efficiently calculate the conditional version for both neural networks and other regression models, and which properly considers the dependence structure in the feature components. This proposal is also useful to provide drop1 and anova analyses in complex regression models which are similar to their generalized linear model (GLM) counterparts, and we provide a partial dependence plot (PDP) counterpart that considers the right dependence structure in the feature components.
**Keywords.** Shapley value, SHAP, conditional SHAP, unconditional SHAP, interventional SHAP, anova, drop1, analysis-of-deviance, least squares Monte Carlo, partial dependence plot, PDP, explainability, XAI.
## 1 Introduction
We start by formally introducing the problem studied in this paper. We consider a random tuple \((Y,\mathbf{X})\) with \(Y\) describing a real-valued response variable that is supported by features \(\mathbf{X}=(X_{1},\ldots,X_{q})^{\top}\) taking values in a feature space \(\mathcal{X}\subseteq\mathbb{R}^{q}\). We use these features to build a regression model
\[\mu:\mathcal{X}\to\mathbb{R},\qquad\mathbf{x}\mapsto\mu(\mathbf{x}), \tag{1.1}\]
that serves at predicting the response variable \(Y\), e.g., we can use the conditional expectation regression function defined by
\[\mathbf{x}\ \mapsto\ \mu(\mathbf{x})=\mathbb{E}\left[\left.Y\right|\mathbf{X}=\mathbf{x} \right].\]
The problem that we solve in this paper is the computation of the conditionally expected regression function (1.1), if only a subset of the feature components of \(\mathbf{X}=(X_{1},\ldots,X_{q})^{\top}\) is available.
E.g., if only the first two components \((X_{1},X_{2})=(x_{1},x_{2})\) of \(\mathbf{X}\) are observed, we would like to compute the conditional expectation
\[\mathbb{E}\left[\left.\mu(\mathbf{X})\right|X_{1}=x_{1},X_{2}=x_{2}\right], \tag{1.2}\]
this is further motivated in Example 2.1, below. Such conditional expectations (1.2) are of interest in many practical problems, e.g., they enter the **SH**a**pley **A**dditive ex**P**lanation (SHAP) of Lundberg-Lee [24], see also Aas et al. [1], they are of interest in discrimination-free insurance pricing, see Lindholm et al. [21], and they are also useful in a variable importance analysis, similar to the anova (analysis-of-variance/analysis-of-deviance) and the drop1 analyses for generalized linear models (GLMs) in the R statistical software [29], see also Section 2.3.2 in McCullagh-Nelder [28]. Moreover, it is known that the partial dependence plot (PDP) of Friedman [10] and Zhao-Hastie [37] for marginal explanation cannot correctly reflect the dependence structure in the features \(\mathbf{X}\). Below, we provide an alternative proposal, called marginal conditional expectation plot (MCEP), that mitigates this deficiency.
The main difficulty in efficiently evaluating (1.2) is that it has a similar computational complexity as nested simulations, if one wants to calculate these conditional expectations (1.2) with Monte Carlo simulations for all possible values \((x_{1},x_{2})\). That is, we need to simulate for _all_ values \((x_{1},x_{2})\) the random vector \((X_{3},\ldots,X_{q})\) conditionally, given \((X_{1},X_{2})=(x_{1},x_{2})\). Such nested simulations are computationally expensive, and things become even more involved if we need to calculate these conditional expectations for _all_ possible subsets of the components of \(\mathbf{X}\), e.g., when performing a SHAP analysis. A further difficulty is that these simulations can only be done by bootstrapping if the true probability law \(\pi\) of \(\mathbf{X}\) is unknown, i.e., if we have to rely on an i.i.d. sample \((\mathbf{X}_{i})_{i=1}^{n}\) of \(\mathbf{X}\sim\pi\). In that case sparsity of observations in different parts of the feature space \(\mathcal{X}\) poses another issue, and this issue becomes more serious for higher dimensional features \(\mathcal{X}\).
In financial mathematics, a similar problem occurs when evaluating American options. Carriere [5], Longstaff-Schwartz [22] and Tsitsiklis-Van Roy [35] have proposed to map the (inner) nested valuation problem to a suitable class of basis functions; this is also known as least squares Monte Carlo (LSMC). Basically, this means that (1.2) should be expressed as a new regression function in the variables \((x_{1},x_{2})\), and this new (inner) regression function still needs to be determined. Benefiting from the huge modeling flexibility of neural networks (universal approximation property), it has been proposed to use a neural network as this new inner regression function; see, e.g., Cheridito et al. [7], Krah et al. [18] or Jonen et al. [17]. These proposals have been made for a fixed set of observable components of \(\mathbf{X}\).
Our main contribution extends the set-up of Cheridito et al. [7] to simultaneously model the conditional expectations of all possible subsets of observable components of \(\mathbf{X}\). This allows us to develop a fast algorithm for estimating conditional SHAP, based on a surrogate neural network; this proposal works both for neural networks and other predictive machine learning algorithms. We discuss the necessary variable masking used, and we propose a specific fitting procedure so that the extreme cases about the full knowledge of \(\mathbf{X}\) and about the null model (not knowing \(\mathbf{X}\)) are correctly calibrated. Furthermore, we present applications of this approach to model-agnostic variable importance tools, such as an anova or a drop1 analysis, similar to GLMs, MCEPs similar to PDPs, and a global conditional SHAP decomposition of the generalization loss.
**Organization.** In the next section, we introduce the surrogate neural network for conditional expectation estimation, and we discuss the specific fitting procedure so that the extreme cases of full knowledge and of zero knowledge are properly calibrated. In Section 3, we apply these conditional expectations to analyze variable importance in predictive models. For this we introduce an anova analysis and a drop1 analysis that are similar to their GLM counterparts, see Section 3.2. In Section 3.3, we introduce the marginal conditional expectation plot (MCEP) as a conditional expectation counterpart of the partial dependence plot (PDP) that properly considers the dependence structure in the features. In Section 4, we discuss the application of our proposal to efficiently calculate the conditional SHAP. On the one hand, we consider the individual (local) mean decomposition, and on the other hand a global fair SHAP score decomposition for variable importance. Finally, Section 5 concludes.
## 2 Conditional expectation network
We begin from a given regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\), which maps the feature values \(\mathbf{x}=(x_{1},\ldots,x_{q})^{\top}\in\mathcal{X}\subseteq\mathbb{R}^{q}\) to real-valued output values \(\mu(\mathbf{x})\in\mathbb{R}\); we assume that this regression function \(\mu\) is given. In our example below, this regression function has been constructed within a fixed (given) network architecture based on an i.i.d. sample \((Y_{i},\mathbf{x}_{i})_{i=1}^{n}\). However, this is not an essential point in our proposal. The following methodology can be applied to any other regression model, such as gradient boosting trees, nonetheless, if the regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\) comes from a neural network, it will speed up the following (network) model fitting procedure, because the gradient descent algorithm can be initialized with exactly the network weights as used in the (first) regression function \(\mu\).
Assume that \(\mathbf{X}\sim\pi\) denotes the random selection of a feature value \(\mathbf{X}=\mathbf{x}\in\mathcal{X}\). Select a subset \(\mathcal{C}\subseteq\mathcal{Q}:=\{1,\ldots,q\}\) of the feature component indices. We generically write \(\mathbf{X}_{\mathcal{C}}=(X_{j})_{j\in\mathcal{C}}\) for selecting the components of \(\mathbf{X}\) with indices \(j\in\mathcal{C}\). Our goal is to calculate the conditional expectations
\[\mu_{\mathcal{C}}(\mathbf{x}):=\mathbb{E}\left[\left.\mu(\mathbf{X})\right|\mathbf{X}_{ \mathcal{C}}=\mathbf{x}_{\mathcal{C}}\right], \tag{2.1}\]
with the two extreme cases \(\mathcal{C}=\emptyset\) and \(\mathcal{C}=\mathcal{Q}\) given by
\[\mu_{0}:=\mu_{\emptyset}(\mathbf{x})=\mathbb{E}[\mu(\mathbf{X})]\qquad\text{ and }\qquad\mu(\mathbf{x})=\mu_{\mathcal{Q}}(\mathbf{x})=\mathbb{E}\left[\left.\mu(\mathbf{X}) \right|\mathbf{X}=\mathbf{x}\right]. \tag{2.2}\]
The former is called the _null model_ and the latter is the _full model_. In general, the conditional expectation \(\mu_{\mathcal{C}}(\mathbf{x})\) cannot easily be calculated because in regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\), we cannot simply "turn off" the components of \(\mathbf{x}\) that are not in \(\mathcal{C}\).
**Example 2.1**: Assume \(Y\) is an integrable random variable and that the full model is given by the conditional expectation regression function
\[\mathbf{x}\ \mapsto\ \mu(\mathbf{x})=\mathbb{E}\left[\left.Y\right|\mathbf{X}=\mathbf{x}\right]. \tag{2.3}\]
If, for some reason, only the components \(\mathbf{X}_{\mathcal{C}}\) of \(\mathbf{X}\), \(\mathcal{C}\subset\mathcal{Q}\), have been observed, then we can only build a regression model
\[\mathbf{x}_{\mathcal{C}}=(x_{j})_{j\in\mathcal{C}}\ \mapsto\ \mathbb{E}\left[\left.Y \right|\mathbf{X}_{\mathcal{C}}=\mathbf{x}_{\mathcal{C}}\right]=\mathbb{E}\left[\left. \mu(\mathbf{X})\right|\mathbf{X}_{\mathcal{C}}=\mathbf{x}_{\mathcal{C}}\right]=\mu_{ \mathcal{C}}(\mathbf{x}),\]
where we have used the tower property of conditional expectations. Thus, the conditional expectation (2.1) naturally arises under partial information. Note that the full model \(\mu=\mu_{\mathcal{Q}}\) given in (2.3) dominates in convex order any other regression function \(\mu_{\mathcal{C}}\), \(\mathcal{C}\subset\mathcal{Q}\), i.e., it has a higher resolution than any other conditional expectation regression function; see also Theorem 2.27 in Gneiting-Resin [11] for the resolution (discrimination) of a regression model.
We assume that all considered random variables are square integrable. This implies that we can work on a Hilbert space. We then receive the conditional expectation \(\mu_{\mathcal{C}}(\boldsymbol{X})\) as the orthogonal projection of \(\mu(\boldsymbol{X})\) onto the subspace \(\sigma(\mu_{\mathcal{C}}(\boldsymbol{X}))\) generated by random variable \(\mu_{\mathcal{C}}(\boldsymbol{X})\) in this Hilbert space. That is, the conditional expectation is the measurable function \(\boldsymbol{x}_{\mathcal{C}}=(x_{j})_{j\in\mathcal{C}}\mapsto\mu_{\mathcal{C}} (\boldsymbol{x})\) that minimizes the mean squared distance
\[\mathbb{E}\left[\left(\mu(\boldsymbol{X})-\mu_{\mathcal{C}}(\boldsymbol{X}) \right)^{2}\right]\ \ \stackrel{{!}}{{=}}\ \min.\]
Among all \(\boldsymbol{x}_{\mathcal{C}}\)-measurable functions, this conditional expectation is obtained by the solution of
\[\mu_{\mathcal{C}}(\boldsymbol{x})\ =\ \operatorname*{arg\,min}_{\widehat{\mu}} \ \mathbb{E}\left[\left(\mu(\boldsymbol{X})-\widehat{\mu}\right)^{2}\right| \boldsymbol{X}_{\mathcal{C}}=\boldsymbol{x}_{\mathcal{C}}\right], \tag{2.4}\]
for \(\pi\)-a.e. \(\boldsymbol{x}_{\mathcal{C}}\). The idea now is to approximate the functions \(\boldsymbol{x}_{\mathcal{C}}\mapsto\mu_{\mathcal{C}}(\boldsymbol{x})\) simultaneously for all subsets \(\mathcal{C}\subseteq\mathcal{Q}\) by a neural network
\[\boldsymbol{x}\ \mapsto\ \operatorname{NN}_{\boldsymbol{\vartheta}}( \boldsymbol{x}), \tag{2.5}\]
where \(\operatorname{NN}_{\boldsymbol{\vartheta}}\) denotes a neural network of a fixed architecture with network weights (parameter) \(\boldsymbol{\vartheta}\). There are two important points to be discussed:
1. The neural network (2.5) considers all components of input \(\boldsymbol{x}\). In order that this network can approximate \(\mu_{\mathcal{C}}(\boldsymbol{x})\), we need to mask all components in \(\boldsymbol{x}=(x_{1},\ldots,x_{q})^{\top}\) which are not contained in \(\mathcal{C}\). Choose a mask value \(\boldsymbol{m}=(m_{1},\ldots,m_{q})^{\top}\in\mathbb{R}^{q}\), the specific choice is going to be discussed below; for the moment, it should just be a sort of "neutral value". We set \[\boldsymbol{x}_{\mathcal{C}}^{(\boldsymbol{m})}:=\left(m_{1}+(x_{1}-m_{1}) \mathds{1}_{\{1\in\mathcal{C}\}},\ldots,m_{q}+(x_{q}-m_{q})\mathds{1}_{\{q \in\mathcal{C}\}}\right)^{\top}.\] (2.6) We then try to find an optimal network parameter \(\boldsymbol{\vartheta}\) such that \(\operatorname{NN}_{\boldsymbol{\vartheta}}(\boldsymbol{x}_{\mathcal{C}}^{( \boldsymbol{m})})\) is a good approximation to \(\mu_{\mathcal{C}}(\boldsymbol{x})\) for all features \(\boldsymbol{x}\in\mathcal{X}\) and all subsets \(\mathcal{C}\subseteq\mathcal{Q}\).
2. In our applications, we explore an empirical version of (2.4) to find the optimal network parameter \(\boldsymbol{\vartheta}\). Assume we have observed features \((\boldsymbol{x}_{i})_{i=1}^{n}\). Then, we solve \[\widehat{\boldsymbol{\vartheta}}\ =\ \operatorname*{arg\,min}_{\boldsymbol{ \vartheta}}\ \frac{1}{3n}\,\sum_{l=1}^{3n}\Big{(}\widetilde{\mu}(\boldsymbol{x}_{l}^{[3]}) -\operatorname{NN}_{\boldsymbol{\vartheta}}(\boldsymbol{x}_{l}^{[3]})\Big{)}^ {2},\] (2.7) where, to both calibrate the network and meet the logical constraints of the full and the null model, we triplicate the observed features \((\boldsymbol{x}_{i})_{i=1}^{n}\) as follows: 1. For \(1\leq l\leq n\), we set \(\boldsymbol{x}_{l}^{[3]}=\boldsymbol{x}_{l}\) and \(\widetilde{\mu}(\boldsymbol{x}_{l}^{[3]})=\mu(\boldsymbol{x}_{l})\). These instances are used to ensure that we can replicate the full model, see (2.2). 2. For \(n+1\leq l\leq 2n\), we set \(\boldsymbol{x}_{l}^{[3]}=\boldsymbol{m}\) and \(\widetilde{\mu}(\boldsymbol{x}_{l}^{[3]})=\mu_{0}\). These instances are used to ensure that we can replicate the null model, see (2.2). If \(\mu_{0}\) is not available we just take the empirical mean of \((\mu(\boldsymbol{x}_{i}))_{i=1}^{n}\) as its estimate.
3. For \(2n+1\leq l\leq 3n\), we set \(\mathbf{x}_{l}^{[3]}=\mathbf{x}_{l-2n,\mathcal{C}_{l}}^{(\mathbf{m})}\), see (2.6), and \(\widetilde{\mu}(\mathbf{x}_{l}^{[3]})=\mu(\mathbf{x}_{l})\), where the sets \(\mathcal{C}_{l}\subseteq\mathcal{Q}\) are chosen randomly and independently such that they mask, independently of all other components, each component \(j\in\mathcal{Q}\) with probability \(1/2\).
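The masking (2.6) and the triplication in (2.7) are straightforward to implement. The following is a minimal numpy sketch under the assumption that the predictions \(\mu(\mathbf{x}_{i})\) of the first model, the null value \(\mu_{0}\) and a mask value \(\mathbf{m}\) are already available as arrays; the function names `mask_features` and `build_training_set` are ours and purely illustrative.

```python
import numpy as np

def mask_features(X, m, keep):
    """Masking (2.6): keep the columns where `keep` is True, replace the rest by m."""
    return np.where(keep, X, m)

def build_training_set(X, mu_full, mu0, m, rng=np.random.default_rng(0)):
    """Triplicate the features as in (2.7):
    (a) original inputs with targets mu(x_i),
    (b) fully masked inputs with target mu0,
    (c) randomly masked inputs with targets mu(x_i)."""
    n, q = X.shape
    keep = rng.random((n, q)) < 0.5            # each component kept with probability 1/2
    inputs = np.vstack([X,
                        np.tile(m, (n, 1)),
                        mask_features(X, m, keep)])
    targets = np.concatenate([mu_full, np.full(n, mu0), mu_full])
    return inputs, targets
```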
**Remarks 2.2**:
* We use the above cases (a) and (b) with indices \(1\leq l\leq 2n\) to ensure that the extreme cases (2.2), the null model \(\mu_{0}\) and the full model \(\mu(\mathbf{x})\), can be approximated by the estimated neural network \(\mathrm{NN}_{\widetilde{\mathbf{\vartheta}}}\). These two cases serve as calibration of the conditional expectations.
* The above case (c) with indices \(2n+1\leq l\leq 3n\) models the conditional expectations (2.1), where the input data \(\mathbf{x}_{l}^{[3]}=\mathbf{x}_{l-2n,\mathcal{C}_{l}}^{(\mathbf{m})}\) has randomly masked components \(m_{j}\) with \(j\not\in\mathcal{C}_{l}\), and it tries to approximate the full model as well as possible.
* In relation to the previous item, there is some connection to masked auto-encoders which randomly mask part of the input images, which are then reconstructed by the auto-encoders; see He et al. [14]. These masked auto-encoders are used for denoising, in our application this denoising can be interpreted as taking conditional expectations.
* The network in (2.7) is fitted with gradient descent, and if the first model \(\mu(\mathbf{x})\) is also a network with the same architecture, we propose to initialize gradient descent optimization of (2.7) precisely with the network weights of the first model \(\mu(\mathbf{x})\).
* For stochastic gradient descent, one should randomize the order of the indices \(1\leq l\leq 3n\) in (2.7), so that all random mini-batches contain instances of all three kinds.
* We mask the input data \(\mathbf{x}_{l-2n,\mathcal{C}_{l}}^{(\mathbf{m})}\), see item (c) above, i.e., every component of \(\mathbf{x}_{l}\) is masked independently from the others with probability \(1/2\). This precisely corresponds to selecting each subset \(\mathcal{C}\subseteq\mathcal{Q}\) in the SHAP computation (4.3) with equal probability, which results in \(2^{q}\) subsets of equal probability. Alternatively and equivalently, one could also use a drop-out layer after the input with a drop-out probability of \(1/2\). However, this drop-out approach presents other difficulties. Drop-out uses the mask value \(0\), and it cannot easily calibrate the extreme cases of the null model and the full model. Moreover, drop-out may be more difficult to implement if one uses entity embeddings for categorical feature components (because these act simultaneously on multiple embedding weights). Our example below will use entity embeddings for categorical feature components.
There remains the discussion of the mask value \(\mathbf{m}\in\mathbb{R}^{q}\) in (2.6). We start with the case where \(\mathbf{x}\) only has continuous components. In that case, it may happen that there is a feature \(\mathbf{x}\) that takes the same value as the mask \(\mathbf{m}\in\mathbb{R}^{q}\), i.e., \(\mathbf{x}=\mathbf{m}\). In the fully masked case we should obtain the null model
\[\mu_{0}=\mu_{\emptyset}(\mathbf{x})\ \stackrel{{!}}{{=}}\ \mathrm{NN}_{ \widetilde{\mathbf{\vartheta}}}(\mathbf{m}),\]
i.e., the estimated network \(\mathrm{NN}_{\widetilde{\mathbf{\vartheta}}}\) can perfectly replicate the null model if the feature \(\mathbf{X}=\mathbf{m}\) is fully masked. At the same time for the (non-masked) feature value \(\mathbf{x}=\mathbf{m}\), we should obtain in the full model
\[\mu(\mathbf{x})=\mu(\mathbf{m})=\mu_{\mathcal{Q}}(\mathbf{m})\ \stackrel{{!}}{{=}} \ \mathrm{NN}_{\widetilde{\mathbf{\vartheta}}}(\mathbf{m}).\]
These two requirements are not in conflict if we choose the mask value \(\mathbf{m}\in\mathbb{R}^{q}\) such that
\[\mu(\mathbf{m})=\mu_{0}, \tag{2.8}\]
i.e., we choose the mask value \(\mathbf{m}\) such that the full model has the same prediction in \(\mathbf{x}=\mathbf{m}\) as the null model (not considering any features).
In practical neural network applications, we typically normalize the continuous feature components of \(\mathbf{x}\) to be centered and have unit variance. This is done to ensure efficient gradient descent fitting. As a consequence, the continuous feature components of \(\mathbf{x}\) fluctuate around zero. To select the mask value \(\mathbf{m}\) we proceed as follows in our application below. We choose a small tolerance level \(\delta>0\) (we choose \(\delta=0.1\%\) in our application), and we select the mask value \(\mathbf{m}\in\mathbb{R}^{q}\) as close as possible to the origin among all observed features \((\mathbf{x}_{i})_{i=1}^{n}\) whose regression value \(\mu(\mathbf{x}_{i})\) differs less than \(\delta\) from the null model \(\mu_{0}\). That is,
\[\mathbf{m}\ =\ \operatorname*{arg\,min}_{\mathbf{x}_{i}:\,|\mu(\mathbf{x}_{i})/\mu_{0}- 1|<\delta}\|\mathbf{x}_{i}\|, \tag{2.9}\]
where \(\|\cdot\|\) is the Euclidean norm. Having a mask value close to the origin ensures that the mask is in the main body of the (normalized) distribution of the continuous features.
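A possible implementation of the mask selection rule (2.9) is sketched below, assuming the standardized features and the corresponding predictions of the first model are available as arrays; the default tolerance and the function name `select_mask` are our own choices.

```python
import numpy as np

def select_mask(X, mu_vals, mu0, delta=1e-3):
    """Mask choice (2.9): among all observed features whose prediction differs from the
    null model by less than delta (relatively), take the one closest to the origin."""
    within_tol = np.abs(mu_vals / mu0 - 1.0) < delta
    if not within_tol.any():
        raise ValueError("no observed feature within the tolerance; increase delta")
    norms = np.linalg.norm(X, axis=1)
    return X[np.argmin(np.where(within_tol, norms, np.inf))]
```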
**Remarks 2.3**:
* This proposal (2.9) has some similarity to the Baseline SHAP presented in Sundararajan-Najmi [33], where the masked values are set to a baseline feature value \(\mathbf{x}^{\prime}\). However, the crucial difference is that we do not explicitly use this baseline feature value in our calculation, because we perform a full conditional expectation in (2.1); we only use the mask to indicate to the network which variables have not been observed. In some sense, this is equivalent to "turning off" some input components as in drop-out layers, except that our mask value \(\mathbf{m}\) is chosen such that in the fully masked (turned off) case, we rediscover the null model.
* The crucial property of the choice of the mask \(\mathbf{m}\) is that it can both reflect the null model \(\mu_{0}\) and the expected value \(\mu(\mathbf{m})\) in the mask \(\mathbf{x}=\mathbf{m}\) in the full model, see (2.8). A mask choice close to zero (2.9) has another nice interpretation, namely, that values close to zero do not have any big effects given the affine transformations in neural network layers.
* Remark that the mask choice (2.9) has provided better models than choosing a (remote) mask value, e.g., \(\mathbf{m}=(2,\ldots,2)^{\top}\), in the case of normalized features \(\mathbf{x}\). Theoretically, there should not be any difference between this choice and choice (2.9) for large neural networks. However, this latter choice has turned out to be more difficult in gradient descent fitting, therefore, we prefer (2.9). Intuitively, a remote mask value will mean in this set-up that the null model is not discovered within the full model, but it is rather modeled separately beside the full model.
For the implementation of categorical feature components we use the method of entity embedding. Assume that \(X_{j}\) is a categorical variable that takes values in the (nominal) set \(\mathcal{A}_{j}=\{a_{1},\ldots,a_{K}\}\), i.e., \(X_{j}\) takes \(K\) different levels \((a_{k})_{k=1}^{K}\). For entity embedding, one chooses an embedding dimension \(b\in\mathbb{N}\), and then each level \(a_{k}\in\mathcal{A}_{j}\) is assigned an embedding weight \(\mathbf{b}_{k}\in\mathbb{R}^{b}\). That is, we consider the embedding map
\[\mathbf{e}:\mathcal{A}_{j}\to\mathbb{R}^{b},\qquad X_{j}=a_{k}\mapsto\mathbf{e}(X_{j} )=\mathbf{b}_{k};\]
we refer to Brebisson et al. [3], Guo-Berkhahn [13], Richman [30, 31] and Delong-Kozak [8]. The embedding \(\mathbf{e}(X_{j})\in\mathbb{R}^{b}\) is then concatenated with the continuous components of \(\mathbf{X}\), and this
concatenation is used as input to the network. Entity embedding adds another \(Kb\) parameters \([\mathbf{b}_{1},\ldots,\mathbf{b}_{K}]\in\mathbb{R}^{b\times K}\) to the fitting procedure and these embedding parameters are also learned during gradient descent network training. In this categorical case, we propose for the masking of \(X_{j}\) to extend the levels \(\mathcal{A}_{j}\) by a fictitious level \(a_{K+1}\) whose embedding weight is initialized for gradient descent fitting by \(\mathbf{b}_{K+1}=0\in\mathbb{R}^{b}\).
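For illustration, the following numpy sketch shows an entity embedding lookup extended by the fictitious mask level \(a_{K+1}\), whose embedding weight is initialized at the origin; in a network implementation these weights would be trained by gradient descent, and the dimensions chosen here (\(K=11\), \(b=2\)) merely mimic the example below.

```python
import numpy as np

K, b = 11, 2                                   # number of levels and embedding dimension
rng = np.random.default_rng(0)
emb = rng.normal(scale=0.1, size=(K + 1, b))   # one extra row for the fictitious mask level
emb[K, :] = 0.0                                # embedding weight b_{K+1} initialized at 0

def embed(level_index, masked=False):
    """Return the embedding of a categorical level, or the mask-level embedding."""
    return emb[K] if masked else emb[level_index]
```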
## 3 Example: variable importance
### Data, predictive model and conditional expectation network
We apply this conditional expectation network proposal to the French motor third party liability (MTPL) claims frequency example studied, e.g., in Charpentier [6], Lindholm et al. [20] and Wuthrich-Merz [36]; this data set is available through the R package CASdatasets [9]. Listing 1 gives an excerpt of the data. We use the data pre-processing as described in Chapter 13 of [36], and we choose the same network architecture as presented in Example 7.10 of [36], with entity embeddings of dimension \(b=2\) for the categorical features VehBrand and Region, see Listing 7.4 of [36]. We fit this network architecture to the available data (using early stopping) which gives us the expected frequency regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\). Note that this data describes a claims frequency example, which is usually modeled with a Poisson rate regression. Therefore, we use the Poisson deviance loss, denoted by \(L\), for model fitting and evaluation. The time exposures are used as weights in model fitting and evaluation; for the Poisson deviance loss we refer to Example 2.24 in [36].
```
1'data.frame':678007obs.of12variables:
2$IDpol:num13510111315171821...
3$Exposure:num0.10.770.750.090.840.520.450.270.710.15...
4$Area:Factorw/6levels"A","B","C","D",...:4422255332...
5$VehPower:int5567766777...
6$VehAge:int0020022000...
7$DrivAge:int555555246438333341...
8$BonusMalus:int50550505050505068650...
9$VehFarnd:Factorw/11levels"B1","B2","B3",...:999999999999...
10$VehGas:Factorw/2levels"Diesel","Regular":2211122111...
11$Density:int1217121275476763003300313713760...
12$Region:Factorw/22levels"B11","R21","R22",...:18183151588202012...
13$ClaimNb:num00000000000...
```
Listing 1: Excerpt of the French MTPL claims frequency data set.
| | | Poisson deviance losses: in-sample on \(\mathcal{L}\) | Poisson deviance losses: out-of-sample on \(\mathcal{T}\) |
|---|---|---|---|
| (0) | null model \(\mu_{0}\) (empirical mean on \(\mathcal{L}\)) | 25.213 | 25.445 |
| (1) | full neural network model \(\mu(\mathbf{x})\) | 23.777 | 23.849 |
| (2) | approximation NN\({}_{\widehat{\mathbf{\vartheta}}}(\mathbf{m})\) of null model | 25.213 | 25.446 |
| (3) | approximation NN\({}_{\widehat{\mathbf{\vartheta}}}(\mathbf{x})\) of full model | 23.802 | 23.847 |

Table 1: Conditional expectation network approximation; Poisson deviance losses \(L\) in \(10^{-2}\).
The fitted model \(\mu(\mathbf{x})\) and its out-of-sample performance are shown on line (1) of Table 1. These results are directly comparable to Table 7.4 in [36] because we use the same learning and test data split for the sets \(\mathcal{L}\) and \(\mathcal{T}\), respectively; see Listing 5.2 and Table 5.2 in [36].1 Line (0) of Table 1 shows the null model which uses the empirical mean for \(\mu_{0}\). The relative increase in out-of-sample Poisson deviance loss \(L\) on \(\mathcal{T}\) when moving from the full to the null model is
Footnote 1: There are small numerical differences between the results on line (1) of Table 1 and Table 7.4 in [36] because here we have standardized the continuous feature components to be centered and have unit variance, whereas as in [36] we have used the MinMaxScaler for pre-processing the continuous feature components, see (7.29)-(7.30) of [36]. Often standardization provides slightly superior results over the MinMaxScaler pre-processing.
\[\frac{\frac{1}{n}\sum_{i=1}^{n}L(Y_{i},\mu_{0})}{\frac{1}{n}\sum_{i=1}^{n}L(Y_ {i},\mu(\mathbf{x}_{i}))}-1\ =\ \frac{25.445}{23.849}-1\ =\ 6.70\%. \tag{3.1}\]
This is a comparably small value which expresses that we work in a low signal-to-noise ratio situation, which is rather typical in actuarial problems. This relative increase (3.1) is the benchmark that we are going to attribute to the different feature components.
We fit the same network architecture for the calculation of the conditional expectations (2.1), and we initialize the gradient descent model fitting with the weights of the first network \(\mu\). This is precisely done as in (2.7) with the random masking as described in the previous section. Figure 1 shows the results. The red dots in Figure 1 reflect the approximation of the full model \(\mu(\mathbf{x}_{i})\), and a perfect network fit of \(\text{NN}_{\widehat{\mathbf{g}}}(\mathbf{x}_{i})\), \(1\leq i\leq n\), will set all red dots precisely on the diagonal orange line. The black dot reflects the approximation of the null model \(\mu_{0}\), and \(\text{NN}_{\widehat{\mathbf{g}}}(\mathbf{m})\) lies perfectly on the diagonal line. The blue dots reflect the conditional expectations \(\mu_{\mathcal{C}_{i}}(\mathbf{x}_{i})\) estimated by \(\text{NN}_{\widehat{\mathbf{g}}}(\mathbf{x}_{i,\mathcal{C}_{i}}^{(\mathbf{m})})\), where the components \(\mathcal{C}_{i}^{c}=\mathcal{Q}\setminus\mathcal{C}_{i}\) have been masked. We
Figure 1: Conditional expectation network fitting results.
observe that these blue dots fluctuate quite wildly, of course, this is expected as we neglect the information in the components \(\mathbf{X}_{\mathcal{C}^{c}_{i}}\).
Lines (2) and (3) of Table 1 show the performance of this approximation \(\text{NN}_{\widehat{\mathbf{\vartheta}}}\) in the cases of the full model and of the null model. Note that the fitting procedure has only taken place on the learning data \(\mathcal{L}\), and the disjoint sample \(\mathcal{T}\) is only used for the out-of-sample performance assessment. We observe some in-sample differences, which is a sign that the first network \(\mu\) is overfitting in-sample, because the out-of-sample performance is equally good for \(\mu\) and \(\text{NN}_{\widehat{\mathbf{\vartheta}}}\). We interpret this as the conditional expectation network \(\text{NN}_{\widehat{\mathbf{\vartheta}}}\) being a regularized version of the first neural network \(\mu\), with the masked inputs acting as drop-out regularization; indeed, besides using the conditional expectation network for explanation, the good out-of-sample performance means that this network can also be used for prediction. Thus, the fitted network (2.5) seems to be a good approximation to the (conditional) means in the cases of the full and the null model.
### drop1 and anova analyses
We perform a drop1 analysis, similar to the one used for GLMs, i.e., we simply set one column of the design matrix to the mask value; the differences to the GLM version are described at the end of this section. The full model \(\mu\), given by (2.3), dominates in convex order any of the reduced models, defined for \(j\in\mathcal{Q}\) by
\[\mu_{\mathcal{Q}\setminus\{j\}}(\mathbf{x})=\mathbb{E}\left[\left.\mu(\mathbf{X}) \right|\mathbf{X}_{\mathcal{Q}\setminus\{j\}}=\mathbf{x}_{\mathcal{Q}\setminus\{j\}} \right]=\mathbb{E}\left[\left.Y\right|\mathbf{X}_{\mathcal{Q}\setminus\{j\}}=\mathbf{ x}_{\mathcal{Q}\setminus\{j\}}\right],\]
where we drop the \(j\)-th component from the information set. On the out-of-sample data \(\mathcal{T}\), we analyze the relative increase in Poisson deviance loss \(L\) using this more crude regression function; if we drop all components we obtain (3.1) for the null model. We define on the out-of-sample data \(\mathcal{T}\) the drop1 statistics for \(j\in\mathcal{Q}\)
\[\texttt{drop1}_{j}\ =\ \frac{\frac{1}{n}\sum_{i=1}^{n}L(Y_{i},\mu_{\mathcal{Q} \setminus\{j\}}(\mathbf{x}_{i}))}{\frac{1}{n}\sum_{i=1}^{n}L(Y_{i},\mu(\mathbf{x}_{i}) )}-1. \tag{3.2}\]
Figure 2 (lhs) shows the results. Dropping the variable BonusMalus leads to the biggest increase in out-of-sample loss of \(4.50\%\), compare to (3.1), and we conclude that this is the most important variable in this prediction problem (using a drop1 analysis). At the other end, Density and Area do not seem to be important, and may be dropped from the analysis. In fact, \(\texttt{drop1}_{\texttt{Density}}=0.04\%>0\) is slightly positive and \(\texttt{drop1}_{\texttt{Area}}=-0.01\%<0\) is negative (out-of-sample). The latter says that we should (clearly) drop the Area component from the feature \(\mathbf{X}\), because inclusion of Area negatively impacts the out-of-sample performance.
We compare these drop1 importances to the variable permutation importance (VPI) figures of Breiman [4]. VPI randomly permutes one component of the features \((\mathbf{x}_{i})_{i=1}^{n}\) at a time across the entire sample, and then studies the change in out-of-sample loss compared to the full model. The corresponding results are shown in Figure 2 (rhs); we keep the same order on the \(y\)-axis. We observe bigger magnitudes and also a slightly different ordering compared to the drop1 analysis. The difficulty with the VPI analysis is that it does not properly respect the dependence structure between the feature components of \(\mathbf{X}\sim\pi\), e.g., if two feature components are colinear we cannot randomly permute one component across the entire portfolio without changing the other one correspondingly. In Figure 3 we show the existing dependence in our example between DrivAge and BonusMalus (lhs) and between Area and Density (rhs). For instance, we cannot change
the Area code from A to D without changing Density correspondingly. This is precisely done in our drop1 analysis using the conditional expectations \(\mu_{\mathcal{Q}\setminus\{j\}}(\mathbf{x})\), but it is not done in VPI. Therefore, the conditional expectation results are more reliable to measure variable importance in this case.
Moreover, since there are no young car drivers (below age 25) who are on the lowest bonus-malus level of 50%, see Figure 3 (l lhs), the (fitted) regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\) is not (well-)specified
Figure 3: Dependence between the feature components of \(\mathbf{X}\sim\pi\): (lhs) DrivAge and BonusMalus, and (rhs) Area code and Density; this plot is taken from Figure 13.12 in [36]; the plots only show the whiskers but not the outliers.
Figure 2: (lhs) Conditional expectation network for drop1\({}_{j}\) importances of individual components \(j\in\{\texttt{BonusMalus},\ldots,\texttt{Area}\}\), (rhs) variable permutation importance (VPI) of Breiman [4]; the \(x\)-scale differs and the order on the \(y\)-axis is the same.
for such feature values; in fact, it is undefined on this part of the feature space. Therefore, we can extrapolate \(\mu\) arbitrarily to this part of the feature space, because this extrapolation cannot be back-tested on data. Precisely this problem calls into question the magnitudes in the VPI plot, because different extrapolations give us different magnitudes of increases in deviance losses.
Next, we study an anova analysis, similarly to the one offered in the R package for GLMs; we also refer to Section 2.3.2 in McCullagh-Nelder [28]. The anova analysis recursively adds feature components to the regression model, and analyzes the change of (out-of-sample) deviance loss provided by the inclusion of each new feature component. To have the same units as in the previous analyses we scale these changes of deviance losses with the loss of the full model, precisely as in (3.1) and (3.2). The anova analysis then gives us an additive decomposition of the total gap in losses between the full and null model, i.e., it explains the difference of 6.70% given in (3.1). For this we choose a sequence of mutually different indices \(j_{1},\ldots,j_{q}\in\mathcal{Q}\), and
Figure 4: anova analyses for different orderings for \(j\in\mathcal{Q}\) of the feature components.
we define the contribution of feature component \(X_{j_{k}}\) to the decrease in deviance loss by
\[\texttt{anova}_{j_{k}}\ =\ \frac{\frac{1}{n}\sum_{i=1}^{n}\left[L\left(Y_{i},\mu_{\{j_{1},\ldots,j_{k-1}\}}(\mathbf{x}_{i})\right)-L\left(Y_{i},\mu_{\{j_{1},\ldots,j_{k}\}}(\mathbf{x}_{i})\right)\right]}{\frac{1}{n}\sum_{i=1}^{n}L(Y_{i},\mu(\mathbf{x}_{i}))}, \tag{3.3}\]
for \(k=1,\ldots,q\); for the first term \(k=1\) in (3.3) we set \(\mu_{\emptyset}(\mathbf{x})=\mu_{0}\). This gives us an additive decomposition of (3.1), i.e.,
\[\sum_{k=1}^{q}\texttt{anova}_{j_{k}}\ =\ \frac{\frac{1}{n}\sum_{i=1}^{n}L(Y_{i}, \mu_{0})}{\frac{1}{n}\sum_{i=1}^{n}L(Y_{i},\mu(\mathbf{x}_{i}))}-1\ =\ \frac{25.445}{23.849}-1\ =\ 6.70\%.\]
Moreover, we have \(\texttt{anova}_{j_{q}}=\texttt{drop1}_{j_{q}}\) for the last included component \(j_{q}\in\mathcal{Q}\). Figure 4 (top) shows the waterfall graph of the anova analysis, stating the corresponding decreases in losses in the plot, e.g., \(\texttt{anova}_{j_{1}}=0.89\%\) for \(j_{1}=\texttt{DrivAge}\). We observe that by far the biggest contribution is provided by BonusMalus, which gives us the same conclusion about variable importance as Figure 2. The difficulty with the anova analysis is that the contributions depend on the order \(j_{1},\ldots,j_{q}\) of the inclusion of the feature components. In Figure 4 (bottom-lhs) we exchange the order of DrivAge and BonusMalus, and then \(\texttt{anova}_{j_{2}}=0.65\%<0.89\%\) for \(j_{2}=\texttt{DrivAge}\), because part of the decrease of loss has already been explained by BonusMalus (through the dependence in \(\mathbf{X}\) and the interactions in the regression function \(\mu\)). This is also important for dropping variables. In Figure 2, the two variables Density and Area provide the smallest values for \(\texttt{drop1}_{j}\). However, we also know that these two components are almost colinear, see Figure 3 (rhs). Figure 4 (bottom) verifies that one of these two components should be included in the model: (bottom-lhs) considers the order Density-Area and (bottom-rhs) the order Area-Density. Integrating the first of these two variables into the model at position \(q-1\) gives a higher contribution than the two other variables VehPower or VehGas considered before it. Therefore, Density/Area is more important than these two other variables, but the integration of one of them is sufficient.
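The anova contributions (3.3) along a chosen ordering are obtained by progressively unmasking components; a minimal sketch, under the same assumptions as the drop1 sketch above (the illustrative Poisson deviance is repeated here to keep the snippet self-contained), is the following.

```python
import numpy as np

def poisson_deviance(counts, exposure, freq):
    mu = exposure * freq
    log_term = np.where(counts > 0, counts * np.log(np.maximum(counts, 1.0) / mu), 0.0)
    return 2.0 * np.mean(mu - counts + log_term)

def anova_path(X, counts, exposure, cond_net, m, order):
    """anova_{j_k} of (3.3): loss decrease from adding component j_k on top of j_1,...,j_{k-1},
    scaled by the loss of the full model; the result depends on the chosen `order`."""
    n, q = X.shape
    full_loss = poisson_deviance(counts, exposure, cond_net(X))
    included = np.zeros(q, dtype=bool)
    prev_loss = poisson_deviance(counts, exposure, cond_net(np.tile(m, (n, 1))))  # null model
    contributions = []
    for j in order:
        included[j] = True
        X_C = np.where(included, X, m)         # mask all components not yet included
        cur_loss = poisson_deviance(counts, exposure, cond_net(X_C))
        contributions.append((j, (prev_loss - cur_loss) / full_loss))
        prev_loss = cur_loss
    return contributions                       # sums to the total relative loss gap (3.1)
```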
**Discussion and comparison to drop1 and anova analyses in GLMs.**
There is an important difference between the anova analysis offered by GLMs and our anova analysis. We discuss this. The anova analysis for GLMs compares (two) nested GLMs. The null hypothesis states that the smaller GLM is sufficient against the alternative hypothesis that we should work in a bigger GLM including more components. Having nested GLMs, this hypothesis can be tested using a likelihood ratio test (LRT). The starting point of this test is the hypothesis that the smaller GLM is sufficient.
Our anova analysis starts from the bigger model that specifies a regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\), and the smaller models are obtained by considering conditional expectations of that regression function. This provides (a sort of) nested models, but typically these nested models will not be of the same type. E.g., if we start with a GLM with log-link function we have regression function
\[\mathbf{x}\ \mapsto\ \mu(\mathbf{x})=\exp\left\{\beta_{0}+\sum_{j=1}^{q}\beta_{j}x_{j} \right\},\]
for regression parameter \(\mathbf{\beta}=(\beta_{0},\ldots,\beta_{q})^{\top}\in\mathbb{R}^{q+1}\). If we drop the last component we receive
\[\mu_{\mathcal{Q}\setminus\{q\}}(\mathbf{x}) = \mathbb{E}\left[\,\mu(\mathbf{X})\,|\,\mathbf{X}_{\mathcal{Q}\setminus\{q \}}=\mathbf{x}_{\mathcal{Q}\setminus\{q\}}\right]\] \[= \exp\left\{\beta_{0}+\sum_{j=1}^{q-1}\beta_{j}x_{j}\right\} \mathbb{E}\left[\,\exp\{\beta_{q}X_{q}\}|\,\mathbf{X}_{\mathcal{Q}\setminus\{q\}}= \mathbf{x}_{\mathcal{Q}\setminus\{q\}}\right].\]
This last conditional expectation can have any functional form in \(\mathbf{x}_{\mathcal{Q}\setminus\{q\}}\), and we do not necessarily have a GLM for this reduced model.
### Marginal conditional expectation plot
A very popular visual explainability tool in machine learning is the PDP. The PDP has been introduced and studied by Friedman [10] and Zhao-Hastie [37]. PDPs marginalize the regression function \(\mu(\mathbf{X})\) for all components \(X_{j}\) of \(\mathbf{X}\), \(j\in\mathcal{Q}\), by considering an unconditional expectation
\[x_{j}\ \mapsto\ \mathbb{E}\left[\mu\left(\mathbf{X}_{\mathcal{Q}\setminus\{j\}},x _{j}\right)\right]. \tag{3.4}\]
The unconditional expectation (3.4) sets the \(j\)-th component of the feature equal to \(x_{j}\), and averages over the remaining feature components \(\mathbf{X}_{\mathcal{Q}\setminus\{j\}}\) without considering the correct dependence structure between \(\mathbf{X}_{\mathcal{Q}\setminus\{j\}}\) and \(X_{j}=x_{j}\). That is, equivalently to VPI in Figure 2 (rhs), the true dependence structure is neglected in (3.4), and it precisely suffers the same deficiency because we may average over feature combinations that do not occur in the data, e.g., due to colinearity. This issue is generally criticized in the literature; see, e.g., Apley-Zhu [2]. Based on the estimated conditional expectation network \(\text{NN}_{\widehat{\mathbf{\vartheta}}}\), we can easily correct for this deficiency by considering the marginal conditional expectations, for \(j\in\mathcal{Q}\),
\[x_{j}\ \mapsto\ \mu_{\{j\}}(\mathbf{x})=\mathbb{E}\left[\,\mu(\mathbf{X})\,|\,X_{j}=x _{j}\right]=\mathbb{E}\left[\,Y|\,X_{j}=x_{j}\right]. \tag{3.5}\]
Figure 5 shows the PDPs (3.4) and the MCEPs (3.5) for the two different feature components \(j=\texttt{BonusMalus}\) on the (lhs) and \(j=\texttt{DrivAge}\) on the (rhs). The blue lines give the PDPs and
Figure 5: PDP (3.4) and MCEP (3.5): (lhs) variable BonusMalus, and (rhs) variable DrivAge.
the red lines the MCEPs. These lines are complemented by the empirical observations (black dots) and the average prediction (green lines) given by, respectively,
\[\overline{y}_{c_{j}}=\frac{\sum_{i=1}^{n}Y_{i}\,\mathds{1}_{\{x_{i,j}=c_{j}\}}} {\sum_{i=1}^{n}\mathds{1}_{\{x_{i,j}=c_{j}\}}}\qquad\text{ and }\qquad \overline{\mu}_{c_{j}}=\frac{\sum_{i=1}^{n}\mu(\mathbf{x}_{i})\,\mathds{1}_{\{x_{i, j}=c_{j}\}}}{\sum_{i=1}^{n}\mathds{1}_{\{x_{i,j}=c_{j}\}}},\]
where \(c_{j}\) runs over all levels that feature component \(X_{j}\) of \(\mathbf{X}\) can take. For the most important feature variable, \(j=\texttt{BonusMalus}\), the red and blue lines of the MCEP and PDP look fairly similar. However, for \(j=\texttt{DrivAge}\), the graphs look quite different for young driver ages. As described in the previous section, young car drivers cannot be on the lowest bonus-malus level of 50%, therefore, the PDP may not give reasonable results for young ages. Contrary, the MCEP can deal with such dependencies and it is verified by Figure 5 (rhs) that the red MCEP curve meets the empirical observations \(\overline{y}_{c_{j}}\) (black dots) quite well, which says that the marginalized conditional version (3.5) reflects a one-dimensional regression. The empirical version \(\overline{y}_{c_{j}}\) (black dots) is a noisy version of the average prediction \(\overline{\mu}_{c_{j}}\) (green line), and also the average prediction and the MCEP curve are rather similar. The MCEP curve is a conditional expectation (3.5), whereas the average prediction \(\overline{\mu}_{c_{j}}\) is a randomized version thereof, considering the empirical (realized) portfolio distribution of \((\mathbf{x}_{i})_{i=1}^{n}\).
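The difference between (3.4) and (3.5) can also be made explicit in code: the PDP overwrites component \(j\) and averages the full model over the sample, whereas the MCEP queries the conditional expectation network with every component except \(j\) masked. The sketch below is illustrative only; `model` and `cond_net` denote callables for \(\mu\) and the fitted network, and the function names are ours.

```python
import numpy as np

def pdp_curve(model, X, j, grid):
    """PDP (3.4): overwrite component j by each grid value and average over the sample."""
    values = []
    for xj in grid:
        X_mod = X.copy()
        X_mod[:, j] = xj
        values.append(model(X_mod).mean())
    return np.array(values)

def mcep_curve(cond_net, m, j, grid):
    """MCEP (3.5): feed the conditional expectation network with only component j observed."""
    X_in = np.tile(m, (len(grid), 1))          # start from fully masked inputs
    X_in[:, j] = grid                          # unmask component j on the grid
    return np.asarray(cond_net(X_in)).ravel()
```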
## 4 Example: SHAP
### Additive and fair decomposition
We discuss SHAP in this section. SHAP is a popular model-agnostic explainability tool for (complex) regression models, see Lundberg-Lee [24] and Aas et al. [1], but it is also increasingly used to solve other decomposition and attribution problems, e.g., a risk allocation example in pension insurance is given by Godin et al. [12]. SHAP is motivated by cooperative game theory. Shapley [32] stated the following axioms for sharing common gains and costs in an additive and fair way within a cooperation of \(q\geq 2\) players. The anova analysis (3.3) has provided an additive decomposition of the total loss gap between the full and the null model, but the anova decomposition cannot be considered to be fair, because the order of the inclusion matters in anova, see Figure 4.
Assume there exists a value function \(\nu\) that maps from the power \(\sigma\)-algebra of \(\mathcal{Q}\) to the real line, i.e.,
\[\nu:\mathcal{C}\subseteq\mathcal{Q}\ \mapsto\ \nu(\mathcal{C})\in\mathbb{R}. \tag{4.1}\]
This value function \(\nu(\mathcal{C})\) measures the contribution of each coalition \(\mathcal{C}\subseteq\mathcal{Q}\) to the total payoff given by \(\nu(\mathcal{Q})\). Shapley [32] postulated the following four axioms to be desirable properties of an additive and fair distribution \((\mu_{j})_{j=1}^{q}=(\mu_{j}^{(\nu)})_{j=1}^{q}\) of the total payoff \(\nu(\mathcal{Q})\) among the \(q\) players; see also Aas et al. [1]:
1. _Efficiency:_\(\nu(\mathcal{Q})-\nu(\emptyset)=\sum_{j=1}^{q}\mu_{j}\). Set \(\mu_{0}=\nu(\emptyset)\).
2. _Symmetry:_ If \(\nu(\mathcal{C}\cup\{j\})=\nu(\mathcal{C}\cup\{k\})\) for every \(\mathcal{C}\subseteq\mathcal{Q}\setminus\{j,k\}\), then \(\mu_{j}=\mu_{k}\).
3. _Dummy player:_ If \(\nu(\mathcal{C}\cup\{j\})=\nu(\mathcal{C})\) for every \(\mathcal{C}\subseteq\mathcal{Q}\setminus\{j\}\), then \(\mu_{j}=0\).
4. _Linearity:_ Consider two cooperative games with value functions \(\nu_{1}\) and \(\nu_{2}\). Then, \(\mu_{j}^{(\nu_{1}+\nu_{2})}=\mu_{j}^{(\nu_{1})}+\mu_{j}^{(\nu_{2})}\) and \(\mu_{j}^{(\alpha\nu_{1})}=\alpha\mu_{j}^{(\nu_{1})}\) for all \(1\leq j\leq q\) and \(\alpha\in\mathbb{R}\).
The so-called _Shapley values_[32] are the only solution to distribute a total payoff \(\nu(\mathcal{Q})\) among the \(q\) players so that these four axioms (A1)-(A4) are fulfilled, and they are given for each \(j\in\mathcal{Q}\) by
\[\mu_{j}=\sum_{\mathcal{C}\subseteq\mathcal{Q}\setminus\{j\}}\frac{|\mathcal{C}|!\,(q-|\mathcal{C}|-1)!}{q!}\,\Big{[}\nu(\mathcal{C}\cup\{j\})-\nu(\mathcal{C})\Big{]}; \tag{4.2}\]
we refer to formula (4) in Lundberg-Lee [24].
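For a small number of players, (4.2) can be evaluated by direct enumeration of all coalitions. The following sketch assumes a value function `nu` defined on frozensets of player indices \(\{0,\ldots,q-1\}\) (the zero-based indexing is ours); its cost grows exponentially in \(q\).

```python
from itertools import combinations
from math import factorial

def shapley_values(nu, q):
    """Direct evaluation of (4.2); nu maps a frozenset of player indices to a real number."""
    players = list(range(q))
    phi = [0.0] * q
    for j in players:
        others = [k for k in players if k != j]
        for size in range(q):
            for C in combinations(others, size):
                C = frozenset(C)
                weight = factorial(len(C)) * factorial(q - len(C) - 1) / factorial(q)
                phi[j] += weight * (nu(C | {j}) - nu(C))
    return phi
```

With \(\nu(\mathcal{C})=\mu_{\mathcal{C}}(\mathbf{x})\) from (4.5) below, the resulting values \((\mu_{j})_{j=1}^{q}\) additively decompose \(\mu(\mathbf{x})-\mu_{0}\) across the feature components, see axiom (A1).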
There remain two important questions:
1. How should the value function (4.1) be chosen if we translate the cooperative game theoretic result to regression modeling, meaning that we would like to "share" a prediction \(\mu(\mathbf{x})\) in a fair and additive way (axioms (A1)-(A4)) among the feature components of \(\mathbf{x}\)?
2. How can (4.2) be calculated efficiently?
Item (2) has been answered in Theorem 2 of Lundberg-Lee [24], namely, the Shapley values can be obtained by solving the following constrained weighted square loss minimization problem
\[\operatorname*{arg\,min}_{(\mu_{j})_{j=1}^{q}}\,\sum_{\emptyset\neq\mathcal{C }\subseteq\mathcal{Q}}\,\frac{q-1}{\binom{q}{|\mathcal{C}|}|\mathcal{C}|(q-| \mathcal{C}|)}\,\left(\nu_{0}(\mathcal{C})-\sum_{j\in\mathcal{C}}\mu_{j} \right)^{2},\qquad\text{subject to}\,\sum_{j=1}^{q}\mu_{j}=\nu_{0}(\mathcal{Q}), \tag{4.3}\]
where we define \(\nu_{0}(\mathcal{C})=\nu(\mathcal{C})-\nu(\emptyset)\), and where we set \(\mu_{0}=\nu(\emptyset)\). This approach is commonly known as KernelSHAP in the literature, and the weight in front of the squared bracket in (4.3) is called the Shapley kernel weight. Optimization (4.3) states a convex minimization problem with a linear side constraint, which can be solved with the method of Lagrange. For computing (4.3) simultaneously for different instances (different value functions \(\nu\), see also (4.5), below), a more efficient way is to include the side constraint in a different (approximate) way by extending the summation in (4.3) by the term \(\mathcal{C}=\mathcal{Q}\). This extension gives a Shapley kernel weight of \(+\infty\), and to deal with this undefined value, one simply sets the Shapley kernel weight for the term \(\mathcal{C}=\mathcal{Q}\) to a very large value; see, e.g., Section 2.3.1 of Aas et al. [1]. The optimal solution in that case is given by
\[(\mu_{j})_{j=1}^{q}=\left(Z^{\top}WZ\right)^{-1}Z^{\top}W\mathbf{\nu}, \tag{4.4}\]
with diagonal Shapley kernel weight matrix \(W\in\mathbb{R}^{(2^{q}-1)\times(2^{q}-1)}\), vector \(\mathbf{\nu}\in\mathbb{R}^{2^{q}-1}\) containing all terms \(\nu_{0}(\mathcal{C})=\nu(\mathcal{C})-\nu(\emptyset)\) of all coalitions \(\emptyset\neq\mathcal{C}\subseteq\mathcal{Q}\), and design matrix \(Z\in\{0,1\}^{(2^{q}-1)\times q}\). Note that if one considers different instances (different value functions \(\nu\)), only the last term \(\mathbf{\nu}\) in (4.4) changes, and the remaining terms only need to be calculated once.
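The following is a minimal sketch (not taken from the paper) of the closed-form solution (4.4): it builds the design matrix \(Z\), the diagonal Shapley kernel weight matrix \(W\) (with a large finite weight standing in for the \(+\infty\) weight of the full coalition \(\mathcal{C}=\mathcal{Q}\)), and the vector of differences \(\nu_{0}(\mathcal{C})\) for a hypothetical toy value function with \(q=3\). For this toy game, the resulting attributions coincide (up to the approximation induced by the large weight) with the exact Shapley values of (4.2).

```python
import numpy as np
from itertools import combinations
from math import comb

q = 3
# Hypothetical toy value function on the subsets of {0, 1, 2}.
nu_table = {frozenset(): 0.0, frozenset({0}): 1.0, frozenset({1}): 2.0, frozenset({2}): 0.5,
            frozenset({0, 1}): 4.0, frozenset({0, 2}): 2.0, frozenset({1, 2}): 3.0,
            frozenset({0, 1, 2}): 6.0}
nu = lambda C: nu_table[frozenset(C)]

coalitions = [C for size in range(1, q + 1) for C in combinations(range(q), size)]
Z = np.array([[1.0 if j in C else 0.0 for j in range(q)] for C in coalitions])
nu0 = np.array([nu(C) - nu(()) for C in coalitions])      # nu_0(C) = nu(C) - nu(empty set)

def kernel_weight(C):
    s = len(C)
    if s == q:
        return 1e6                                        # large surrogate for the +infinity weight
    return (q - 1) / (comb(q, s) * s * (q - s))           # Shapley kernel weight of (4.3)

W = np.diag([kernel_weight(C) for C in coalitions])
mu = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ nu0)          # Eq. (4.4)
print(mu, mu.sum())                                       # attributions; their sum is ~ nu(Q) - nu(empty set)
```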
The summation in (4.3) involves \(2^{q}-2\) terms which can be large for high-dimensional features \(\mathbf{X}\). Therefore, in applications, one often uses a randomized version of (4.3) that randomly samples the terms of the summation in (4.3) with categorical probabilities determined by the Shapley kernel weights; we refer to Section 2.3.1 in Aas et al. [1]. This solves item (2) from above.
Item (1) is more controversial. The Shapley values (4.2) are unique for a given value function choice (4.1). Lundberg-Lee [24] have proposed to choose as value function the conditional expectations for a given instance \(\mathbf{x}\in\mathcal{X}\). That is, for a selected \(\mathbf{x}\), we define the value function
\[\mathcal{C}\subseteq\mathcal{Q}\;\mapsto\;\nu(\mathcal{C}):=\mu_{\mathcal{C}} (\mathbf{x})=\mathbb{E}\left[\,\mu(\mathbf{X})|\,\mathbf{X}_{\mathcal{C}}=\mathbf{x}_{ \mathcal{C}}\right], \tag{4.5}\]
see (2.1). In the case of tree based regressions, a version of these Shapley values can efficiently be calculated using the so-called TreeSHAP method of Lundberg et al. [23]. However, in the general case, there has not been any efficient way of calculating the conditional expectations (4.5) and the Shapley values, respectively. Therefore, the conditional expectations (4.5) have been replaced by approximations, see formula (11) in Lundberg-Lee [24],
\[\nu(\mathcal{C}):=\mathbb{E}\left[\mu\left(\mathbf{X}_{\mathcal{Q}\setminus\mathcal{ C}},\mathbf{x}_{\mathcal{C}}\right)\right], \tag{4.6}\]
i.e., similarly to VPI in Figure 2 (rhs) and the PDP (3.4), the true dependence structure between \(\mathbf{X}_{\mathcal{Q}\setminus\mathcal{C}}\) and \(\mathbf{X}_{\mathcal{C}}\) is neglected in (4.6); sometimes this is also called interventional SHAP, see Laberge-Pequignot [19]. In fact, (4.5) and (4.6) are equal if \(\mathbf{X}_{\mathcal{Q}\setminus\mathcal{C}}\) and \(\mathbf{X}_{\mathcal{C}}\) are independent. In our example, this is clearly not the case, see Figure 3. This is also the main issue raised in Aas et al. [1], and as an improvement, these authors propose Gaussian approximations to the true dependence structure. In our example, we directly approximate the conditional expectations using the estimated network \(\mathrm{NN}_{\widetilde{\mathbf{\vartheta}}}\), see (2.7), i.e., we perform a conditional SHAP using the surrogate network \(\mathrm{NN}_{\widetilde{\mathbf{\vartheta}}}\) for fast computation.
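To illustrate the difference in a self-contained way, the following minimal sketch (not taken from the paper) estimates the interventional value function (4.6) by Monte Carlo: the components in \(\mathcal{C}\) are fixed at the values of the instance \(\mathbf{x}\), and the regression function is averaged over background samples for the remaining components. The regression function mu, the background data, and the instance are hypothetical; the conditional version (4.5) would instead require a model of the conditional distribution, e.g., the surrogate network discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(X):
    # Hypothetical fitted regression function on q = 3 features.
    return np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2])

X_background = rng.normal(size=(5000, 3))   # stand-in for the portfolio / training features
x = np.array([1.0, -0.5, 2.0])              # instance to be explained

def nu_interventional(C):
    idx = list(C)
    X_mod = X_background.copy()
    if idx:
        X_mod[:, idx] = x[idx]              # intervene: overwrite the components in C with x_C
    return mu(X_mod).mean()

print(nu_interventional(()))                # nu(empty set): the average prediction
print(nu_interventional((0, 2)))            # value of the coalition {0, 2} under (4.6)
```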
The concept of using the conditional expectations (2.1) has been criticized as a whole in the paper of Sundararajan-Najmi [33], which shows that in some situations this choice leads to unreasonable Shapley values \((\mu_{j})_{j=0}^{q}\); these authors propose to use an unconditional expectation (4.6) in general. This proposal is also supported by causal arguments given in Janzing et al. [16]. However, causal arguments often rely on strong assumptions that cannot easily be verified, e.g., the exclusion of unmeasured confounders, and the general use of an unconditional expectation (4.6) cannot be supported in situations like the ones in Figure 3. Namely, in this example, there are no car drivers with DrivAge below 25 having a BonusMalus level of 50%. Therefore, the regression function \(\mu(\mathbf{x})\) is undetermined for such features \(\mathbf{x}\) and, hence, (4.6) cannot generally be calculated because the specific value of one variable leads to constraints in the support of the other variable. This problem can be circumvented by extending the regression function \(\mu\) to this part of the feature space; however, such an extension is completely subjective because it cannot be supported and verified by data. In the examples in the next section, we compare the conditional and unconditional versions (4.5) and (4.6), respectively, and for the extrapolation, we simply use the one provided by the fitted neural network.
We remark that there is interesting work that extends Shapley values to higher-order decompositions and representations; we refer to Tsai et al. [34] and Hiabu et al. [15]. The basic idea is to give a functional decomposition of the regression function by including higher-order interaction terms. This can partly mitigate the difficulty of deciding whether one should work with conditional or unconditional expectations; however, some issues remain, e.g., the above-mentioned support constraints cannot be dealt with by the (unconstrained) marginal identification given by formula (2) in Hiabu et al. [15].
### SHAP for mean decompositions
We apply the SHAP explanation to the regression value \(\mu(\mathbf{x})\) of a given instance \(\mathbf{x}\). We compare the conditional and unconditional versions (4.5) and (4.6), respectively. For the unconditional version and its graphical illustrations we use the R packages kernelshap[27] and shapviz[26]; we refer to Mayer et al. [25] for more description.
Figure 6 shows the waterfall graphs of the Shapley decomposition \((\mu_{j})_{j=0}^{q}\) of \(\mu(\mathbf{x})\) of a given instance with features \(\mathbf{x}\in\mathcal{X}\); the ordering on the \(y\)-axis is according to the sizes of these Shapley values \((\mu_{j})_{j=1}^{q}\). The left-hand side shows the conditional version (4.5) and the right-hand side the unconditional one (4.6). The conditional SHAP values with (4.5) are obtained by using the conditional expectation network \(\mathrm{NN}_{\widehat{\mathbf{\vartheta}}}\) for fast computation. That is, we only need to fit one single neural network that can simultaneously calculate the conditional expectations for all possible subsets \(\mathcal{C}\subseteq\mathcal{Q}\). A naive way would be to fit a network to each subset, which would require fitting \(2^{q}\) networks.
The results in Figure 6 are rather similar in this example, and there does not seem to be an issue with the colinearities illustrated in Figure 3, because Density/Area only has a marginal influence on the regression value \(\mu(\mathbf{x})\) and \(\mathtt{DrivAge/BonusMalus}\) is not in the critical (undetermined) part of the feature space.
In Figure 7 we give a second example of a young car driver of age \(\mathtt{DrivAge}=20\). Car drivers enter a bonus-malus scheme at level \(100\%\), and every year of accident-free driving decreases this level by \(5\%\) (and an accident increases the bonus-malus level by a fixed percentage). Thus, it takes at least \(10\) years of accident-free driving until a car driver can reach the lowest bonus-malus level of \(50\%\).2 As a result, the regression function \(\mu\) is undetermined for features having \(\mathtt{DrivAge}=20\) and \(\mathtt{BonusMalus}<90\%\), and we can assign any value to \(\mu\) for such features as they do not occur in the data. This is precisely what happens when using the unconditional version (4.6) for SHAP, and in Figure 7 (rhs) we observe that BonusMalus gets a large attribution if we just extrapolate the (first) neural network regression function \(\mu\) to that part of the feature space. Of course, this cannot be justified and supported by data, as it extrapolates \(\mu\) arbitrarily
Figure 6: Waterfall graphs of the Shapley decomposition \((\mu_{j})_{j=0}^{q}\) of \(\mu(\mathbf{x})\) of a selected instance \(\mathbf{x}\in\mathcal{X}\): (lhs) conditional expectation (4.5) for value function \(\nu\); (rhs) unconditional expectation (4.6) for value function \(\nu\); these waterfall graphs use shapviz[26].
to the undefined part of the feature space \(\mathcal{X}\). In such examples, we give clear preference to the conditional version (4.5) on the left-hand side of Figure 7.
### LightGBM surrogate model
We compare the SHAP mean decomposition results of the previous Section 4.2 to the corresponding TreeSHAP results by approximating the full model \(\mu(\mathbf{x})\) by a LightGBM surrogate tree regression model.3 Using this LightGBM surrogate model, we study the resulting TreeSHAP mean decomposition of Lundberg et al. [23] implemented in the R package shapviz [26].
Footnote 3: To fit the LightGBM surrogate regression model, we use the same parametrization as in the model on [https://github.com/JSchelldorfer/ActuarialDataScience/tree/master/14-SHAP](https://github.com/JSchelldorfer/ActuarialDataScience/tree/master/14-SHAP); we also refer to Mayer et al. [25].
Figure 8 gives a scatter plot of the two feature components DrivAge (top) and BonusMalus (bottom) on the \(x\)-axis vs. their SHAP attributions on the \(y\)-axis for 1000 randomly selected cases \(\mathbf{x}_{i}\); remark that the two selected components are dependent, see Figure 3 (lhs), and they are expected to interact in the regression model. In Figure 8 we show the following SHAP mean attributions of 1000 randomly selected cases \(\mathbf{x}_{i}\): (lhs) conditional mean (4.5) as value function, (middle) unconditional mean (4.6) as value function, and (rhs) TreeSHAP LightGBM surrogate model decomposition. For the latter we use the R package shapviz[26], which performs the decomposition on the log-scale; therefore, we also choose the log-scale for the former two methods. The coloring in Figure 8 selects the feature component that shows the highest interaction with the selected one, i.e., explains best the vertical scattering: (top) for feature DrivAge this is the variable BonusMalus; (bottom) for feature BonusMalus there are different choices between the different SHAP methods. The conditional mean version (4.5) selects DrivAge, whereas the unconditional mean (4.6) and TreeSHAP versions select VehBrand; note that this categorical variable is treated differently in the network approach (embedding layers) and in the LightGBM
Figure 7: Waterfall graphs of the Shapley decomposition \((\mu_{j})_{j=0}^{q}\) of \(\mu(\mathbf{x})\) of a selected instance \(\mathbf{x}\in\mathcal{X}\): (lhs) conditional expectation (4.5) for value function \(\nu\); (rhs) unconditional expectation (4.6) for value function \(\nu\); these waterfall graphs use shapviz[26].
(exclusive feature bundling to bundle sparse (one-hot encoded) features). From these graphs, it seems that the unconditional mean version (4.6) and the TreeSHAP version using a surrogate LightGBM provide rather similar results, and they are different from the conditional mean version (4.5); the plots use the same scale on the \(y\)-axis. Since the unconditional version cannot cope with colinearity in feature components, we give preference to the results in the left column of Figure 8 using the conditional mean version (4.5). In particular, this is justified by the discussion in Section 4.2, namely, for small values of DrivAge we cannot have a low BonusMalus level, and any extrapolation to this part of the feature space is arbitrary (but it will impact the results of the unconditional version).
### SHAP for out-of-sample deviance loss attribution
We have seen in Section 3 that the anova analysis depends on the order of the inclusion of the feature components, i.e., we receive an additive loss decomposition which cannot be considered to be fair, because if we change the order of inclusion of the components their importance in the anova analysis may change. Instead of the anova decomposition, we consider a Shapley deviance loss attribution in this section. For this, we choose the value function of instance \(\mathbf{x}_{i}\) as
\[\mathcal{C}\ \mapsto\ \nu_{(Y_{i},\mathbf{x}_{i})}(\mathcal{C}):=L\left(Y_{i}, \mu_{\mathcal{C}}(\mathbf{x}_{i})\right)=L\Big{(}Y_{i},\,\mathbb{E}\left[\,\mu( \mathbf{X})\right|\mathbf{X}_{\mathcal{C}}=\mathbf{x}_{i,\mathcal{C}}\,\right]\Big{)}, \tag{4.7}\]
where \(L\) is the Poisson deviance loss used, e.g., in (3.1), and we add the specific choice of the observation \((Y_{i},\mathbf{x}_{i})\) as a lower index to the notation of the value function \(\nu_{(Y_{i},\mathbf{x}_{i})}\). Note that we
Figure 8: Dependence plots of SHAP mean decompositions: (top) DrivAge, (bottom) BonusMalus, (lhs) conditional expectation version (4.5), (middle) unconditional expectation version (4.6), (rhs) LightGBM surrogate model.
do not decompose the regression function \(\mu(\mathbf{x})\) in this section, but rather the resulting deviance loss \(L(Y,\mu(\mathbf{x}))\).
For \(\mathcal{C}=\emptyset\) we receive the average loss of the null model
\[\frac{1}{n}\sum_{i=1}^{n}L(Y_{i},\mu_{0})=\frac{1}{n}\sum_{i=1}^{n}\nu_{(Y_{i},\mathbf{x}_{i})}(\emptyset),\]
and for \(\mathcal{C}=\mathcal{Q}\) we obtain the average loss of the full model
\[\frac{1}{n}\sum_{i=1}^{n}L\left(Y_{i},\mu(\mathbf{x}_{i})\right)=\frac{1}{n}\sum_ {i=1}^{n}\nu_{(Y_{i},\mathbf{x}_{i})}(\mathcal{Q}),\]
this refers to lines (2) and (3) of Table 1. Remark that these quantities are empirical counterparts of the losses of the true random tuple \((Y,\mathbf{X})\), given, for the null and the full model respectively, by
\[\mathbb{E}\left[L(Y,\mu_{0})\right]=\mathbb{E}\left[\nu_{(Y,\mathbf{X})}(\emptyset )\right]\qquad\text{ and }\qquad\mathbb{E}\left[L(Y,\mu(\mathbf{X}))\right]=\mathbb{E}\left[\nu_{(Y, \mathbf{X})}(\mathcal{Q})\right].\]
Using the Shapley decomposition of Section 4.1, we can attribute the difference in these losses to the feature components \(X_{j}\) of \(\mathbf{X}\). In a first step, we therefore decompose the Poisson deviance loss \(L(Y_{i},\mu(\mathbf{x}_{i}))\) for each observation \((Y_{i},\mathbf{x}_{i})\) of the test sample \(\mathcal{T}\) using the value function (4.7). This provides us for all observations \((Y_{i},\mathbf{x}_{i})\) with an additive and fair decomposition giving the Shapley values \(\left(\phi_{j,(Y_{i},\mathbf{x}_{i})}\right)_{j=1}^{q}\) such that
\[L\left(Y_{i},\mu(\mathbf{x}_{i})\right)\ =\ \nu_{(Y_{i},\mathbf{x}_{i})}(\mathcal{Q})\ =\ L(Y_{i},\mu_{0})+\sum_{j=1}^{q}\phi_{j,(Y_{i},\mathbf{x}_{i})}.\]
In a second step, we average over these decompositions to receive the average contribution (averaged over \(\mathcal{T}\)) of feature component \(X_{j}\), \(j\in\mathcal{Q}\), given by
\[\Phi_{j}=\frac{1}{n}\sum_{i=1}^{n}\phi_{j,(Y_{i},\mathbf{x}_{i})}. \tag{4.8}\]
Since the Shapley decomposition is still computationally demanding, we consider (4.8) for a random sub-sample of \(\mathcal{T}\); otherwise, one may use parallel computing.4
Footnote 4: To compute the Shapley decomposition of the Poisson deviance loss for 1000 observations \((Y_{i},\mathbf{x}_{i})\) on an ordinary laptop (based on a neural network NN\({}_{\widehat{\mathbf{g}}}\)) takes roughly 1 minute.
The following gives a pseudo-code for the SHAP deviance loss attribution.
0. Select at random a fixed number \(m\leq 2^{q}-2\) of non-empty subsets \(\mathcal{C}\subset\mathcal{Q}\), and calculate for this random selection the matrix, see (4.4), \[A=\left(Z^{\top}WZ\right)^{-1}Z^{\top}W\ \in\ \mathbb{R}^{q\times(m+1)},\] where we additionally add the case \(\mathcal{C}=\mathcal{Q}\) with a large Shapley kernel weight.
1. Select at random a fixed number \(n\) of cases \(i\), and calculate for each case \(i\) and each selected subset \(\mathcal{C}\) from item (0) the individual deviance loss differences, see (4.7) and (2.6), \[\widehat{\nu}^{0}_{(Y_{i},\boldsymbol{x}_{i})}(\mathcal{C})\ =\ L\left(Y_{i}, \operatorname{NN}_{\widehat{\boldsymbol{\vartheta}}}(\boldsymbol{x}_{i, \mathcal{C}}^{(\boldsymbol{m})})\right)-L(Y_{i},\mu_{0}).\] (4.9) This provides us with vectors \(\widehat{\boldsymbol{\nu}}_{i}=(\widehat{\nu}^{0}_{(Y_{i},\boldsymbol{x}_{i} )}(\mathcal{C}))_{\mathcal{C}}\in\mathbb{R}^{m+1}\), considering all the subsets \(\mathcal{C}\subseteq\mathcal{Q}\) selected in item (0).
2. Compute the approximate individual Shapley deviance loss decompositions of all selected cases \(i\) \[(\widehat{\phi}_{j,(Y_{i},\boldsymbol{x}_{i})})_{j=1}^{q}=A\widehat{ \boldsymbol{\nu}}_{i}\ \in\ \mathbb{R}^{q}.\]
3. Return the estimated average attributions \(\widehat{\Phi}_{j}=\frac{1}{n}\sum_{i=1}^{n}\widehat{\phi}_{j,(Y_{i}, \boldsymbol{x}_{i})}\).
We give a few remarks. Matrix \(A\) in item (0) only considers the rows and columns of the Shapley kernel weight matrix \(W\) and the design matrix \(Z\) that have been chosen by the random selection of subsets \(\mathcal{C}\subseteq\mathcal{Q}\); we also refer to (4.4). This is an approximation that reduces the computational complexity for large \(q\). Item (1) uses the network approximation for the calculation of the conditional expectations (2.1). This step precisely reflects the efficiency gain of our approach because it requires only _one single_ fitted neural network \(\operatorname{NN}_{\widehat{\boldsymbol{\vartheta}}}\) to calculate the conditional expectations for all considered cases \(i\) and all selected subsets \(\mathcal{C}\), see (4.9). Item (2) consists of simple matrix multiplications that always rely on the same matrix \(A\).
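The following is a minimal numerical sketch (not taken from the paper) of steps (0)-(3). The conditional-expectation model mu_cond, the Poisson data, and all parameter values are hypothetical stand-ins for the surrogate network \(\mathrm{NN}_{\widehat{\boldsymbol{\vartheta}}}\) and the test sample \(\mathcal{T}\); the sketch mainly illustrates that the matrix \(A\) is computed once and reused for all cases.

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(1)
q, n, m = 4, 200, 10                                   # features, cases, sampled coalitions

def mu_cond(C, x):
    # Hypothetical stand-in for E[mu(X) | X_C = x_C]: a toy log-linear form that
    # marginalizes (zeroes out) the components not in C.
    beta = np.array([0.3, -0.2, 0.1, 0.05])
    masked = np.zeros(q)
    masked[list(C)] = x[list(C)]
    return float(np.exp(beta @ masked))

def poisson_deviance(y, m_hat):
    # Unit Poisson deviance L(y, m_hat); the term y*log(y/m_hat) is taken as 0 for y = 0.
    y_term = y * np.log(y / m_hat) if y > 0 else 0.0
    return 2.0 * (y_term - y + m_hat)

# (0) sample proper coalitions, append C = Q with a large kernel weight, and build A
proper = [C for s in range(1, q) for C in combinations(range(q), s)]
sampled = [proper[i] for i in rng.choice(len(proper), size=m, replace=False)] + [tuple(range(q))]
w = lambda C: 1e6 if len(C) == q else (q - 1) / (comb(q, len(C)) * len(C) * (q - len(C)))
Z = np.array([[1.0 if j in C else 0.0 for j in range(q)] for C in sampled])
W = np.diag([w(C) for C in sampled])
A = np.linalg.solve(Z.T @ W @ Z, Z.T @ W)              # q x (m+1), computed once

# hypothetical test sample (Y_i, x_i) and null model mu_0
X = rng.normal(size=(n, q))
Y = rng.poisson(lam=np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1]))
mu0 = Y.mean()

Phi = np.zeros(q)
for i in range(n):
    # (1) per-case deviance-loss differences for each sampled coalition, cf. (4.9)
    nu_i = np.array([poisson_deviance(Y[i], mu_cond(C, X[i])) - poisson_deviance(Y[i], mu0)
                     for C in sampled])
    # (2) approximate Shapley deviance-loss decomposition of case i
    Phi += A @ nu_i

# (3) average attributions over the n cases
print(Phi / n)
```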
Figure 9 shows the resulting (relative) SHAP Poisson deviance loss decomposition, given for \(j\in\mathcal{Q}\) by
\[\texttt{SHAP\_anova}_{j}\ =\ -\ \frac{\Phi_{j}}{\frac{1}{n}\sum_{i=1}^{n}L \left(Y_{i},\mu(\boldsymbol{x}_{i})\right)},\]
Figure 9: SHAP Poisson deviance loss decomposition \((\texttt{SHAP\_anova}_{j})_{j\in\mathcal{Q}}\).
compare to (3.3); the \(y\)-scale is of the order of the magnitudes of the decreases. This figure should be compared to the anova analyses of Figure 4. In contrast to those graphs, Figure 9 provides a variable importance ranking that is fair (in the Shapley sense and for the chosen value functions (4.7)) and does not depend on the order of inclusion of the feature components. E.g., we observe that the magnitudes of the contributions of BonusMalus and DrivAge lie somewhere in between the values in Figure 4, where these two variables are included in different orders. From Figure 9 we mainly question the importance of VehPower, and we could rerun the models without this variable.
Another interesting observation is the importance of Density and Area, which are highly colinear, see Figure 3 (rhs). Both variables receive a similar magnitude of importance, with the more granular Density variable being slightly more important. We interpret these SHAP results in the case of colinear variables as follows. The two variables share a (common) importance because they may equally contribute to the decrease in loss, i.e., we have (almost) equally behaved players in this cooperative game. Of course, this also means that the importance of a variable is diminished if we add a second, colinear one to the model; in fact, we should work with the smaller model in that case. This is not in contradiction to the examples in Sundararajan-Najmi [33]; it merely says that colinearity needs to be assessed carefully before regression modeling, because the regression model may not detect it (and we should try to work with a parsimonious model in the first place). A way of exploring colinearity is the anova graph of Figure 4, because changing the order of inclusion will also change the magnitudes of the contributions, as can be seen from that figure for the variables Density and Area.
## 5 Conclusion
Starting from a regression function \(\mu(\mathbf{X})\) that is based on tabular input data \(\mathbf{X}\), we have proposed a neural network surrogate model that can calculate the conditional expectations of \(\mu(\mathbf{X})\) obtained by conditioning on any subset of components of the tabular input data \(\mathbf{X}\). These conditional expectations are useful in different contexts. We first present an anova and a drop1 variable importance analysis. These analyses are similar to their generalized linear model (GLM) counterparts, except that we do not require nested models; instead, we start from the bigger model and compute the smaller one. Our second example modifies the partial dependence plot (PDP) by correcting for the deficiency that PDPs cannot cope with dependence structures in the features \(\mathbf{X}\). Our proposal, the marginal conditional expectation plot (MCEP), correctly accounts for these dependence structures and provides convincing explainability results that reflect the empirical observations. Our third example concerns the **SH**apley **A**dditive ex**P**lanation (SHAP). We show that the neural network surrogate model for conditional expectations allows us to efficiently calculate conditional SHAP decompositions, both for the mean and for the decrease in deviance loss of the full model against the null model. The latter provides an interesting method for assessing variable importance.
**Acknowledgment.** We kindly thank Michael Mayer for his methodological support and for helping us improve the figures.
2306.07001 | Cancellation-Free Regret Bounds for Lagrangian Approaches in Constrained
Markov Decision Processes | Constrained Markov Decision Processes (CMDPs) are one of the common ways to
model safe reinforcement learning problems, where constraint functions model
the safety objectives. Lagrangian-based dual or primal-dual algorithms provide
efficient methods for learning in CMDPs. For these algorithms, the currently
known regret bounds in the finite-horizon setting allow for a "cancellation of
errors"; one can compensate for a constraint violation in one episode with a
strict constraint satisfaction in another. However, we do not consider such a
behavior safe in practical applications. In this paper, we overcome this
weakness by proposing a novel model-based dual algorithm OptAug-CMDP for
tabular finite-horizon CMDPs. Our algorithm is motivated by the augmented
Lagrangian method and can be performed efficiently. We show that during $K$
episodes of exploring the CMDP, our algorithm obtains a regret of
$\tilde{O}(\sqrt{K})$ for both the objective and the constraint violation.
Unlike existing Lagrangian approaches, our algorithm achieves this regret
without the need for the cancellation of errors. | Adrian Müller, Pragnya Alatur, Giorgia Ramponi, Niao He | 2023-06-12T10:10:57Z | http://arxiv.org/abs/2306.07001v2 | # Cancellation-Free Regret Bounds for Lagrangian Approaches in Constrained Markov Decision Processes
###### Abstract
Constrained Markov Decision Processes (CMDPs) are one of the common ways to model safe reinforcement learning problems, where constraint functions model the safety objectives. Lagrangian-based dual or primal-dual algorithms provide efficient methods for learning in CMDPs. For these algorithms, the currently known regret bounds in the finite-horizon setting allow for a _cancellation of errors_; one can compensate for a constraint violation in one episode with a strict constraint satisfaction in another. However, we do not consider such a behavior safe in practical applications.
In this paper, we overcome this weakness by proposing a novel model-based dual algorithm OptAug-CMDP for tabular finite-horizon CMDPs. Our algorithm is motivated by the augmented Lagrangian method and can be performed efficiently. We show that during \(K\) episodes of exploring the CMDP, our algorithm obtains a regret of \(\tilde{O}(\sqrt{K})\) for both the objective and the constraint violation. Unlike existing Lagrangian approaches, our algorithm achieves this regret without the need for the cancellation of errors.
## 1 Introduction
In classical reinforcement learning (RL, Sutton and Barto, 2018), the goal is to learn an optimal policy when interacting with an unknown Markov decision process (MDP, Bellman, 1957). In MDPs, an agent aims to minimize the expected cumulative cost incurred during an episode. However, the learned policy must often adhere to certain safety constraints in practical scenarios. For example, when navigating a car on a race track, one would want to avoid crossing the boundaries of the track too often. Such safety requirements are commonly modeled via constrained Markov decision processes (CMDPs, Altman, 1999). We consider the problem of learning an optimal feasible policy in a CMDP. That is, the goal of the agent is to minimize the cost while satisfying the constraints1. Since the CMDP is unknown, we formalize these desiderata by considering the regret with respect to an optimal feasible solution for the cost and the constraint violation, respectively.
Footnote 1: i.e., being feasible for the CMDP, which we also refer to as being _safe_
Importantly, we do not consider it sufficient to provide an agent whose cumulative cost suboptimality and cumulative constraint violation are sublinear. This is because an agent can have a negative
constraint violation (by being very safe but incurring a higher cost than an optimal safe policy) or a positive constraint violation (by being unsafe but incurring a lower cost than an optimal safe policy). Thus, terms from these two cases can cancel each other out, which we refer to as the so-called _cancellation of errors_(Efroni et al., 2020). An agent for which these cumulative terms are sublinear may violate the safety constraints heavily during learning by oscillating around an optimal safe policy. While such a method converges on average to an optimal safe policy2, it neither allows for directly extracting an optimal feasible policy nor does it guarantee safety during learning. We consider a stronger notion of regret, which overcomes this issue by considering the sum of the _positive parts_ of the error terms instead. As pointed out by Efroni et al. (2020), it is of major theoretical interest whether Lagrangian approaches can achieve sublinear bounds for this notion of regret.
Footnote 2: Here, we refer to the value functions for the underlying CMDP.
The approaches to learning CMDPs are split into linear programming (LP) and Lagrangian approaches3(Altman, 1999; Efroni et al., 2020). While LP-based algorithms generally allow for sublinear regret bounds without the need for cancellations (Efroni et al., 2020), they can be expensive when dealing with large state-action spaces. In contrast, in Lagrangian methods, we can solve the optimization problem arising in each episode using dynamic programming (DP), offering a computational benefit over solving LPs. However, the currently known bounds for Lagrangian approaches only concern a weaker form of regret that allows for the aforementioned cancellation of errors. As Efroni et al. (2020) pointed out, this is due to the underlying optimization methods rather than a weakness of the analysis. The main goal of this paper is to provide a Lagrangian-based algorithm that guarantees sublinear regret without the cancellation of errors. To achieve this, the key problem we solve is stopping the agent from oscillating around an optimal safe policy. Our contributions can be summarized as follows:
Footnote 3: i.e., dual and primal-dual algorithms
* We propose a novel model-based dual algorithm, OptAug-CMDP, for learning an optimal feasible policy in an unknown CMDP (Section 3). The algorithm is split into a model pre-training phase and an optimistic exploration phase motivated by the augmented Lagrangian method.
* We show that a sub-problem required for OptAug-CMDP can be reformulated as a convex optimization problem. We provide an efficient algorithm to solve it (Section 3) despite the non-linearity introduced by considering the augmented Lagrangian.
* We prove that with high probability, during \(K\) episodes OptAug-CMDP achieves regrets for the cost and the constraint violations of \(\tilde{O}(\sqrt{K})\) when only highlighting the dependency on \(K\). Notably, we achieve this bound for the stronger notion of regret, which does not allow for the cancellation of errors. This partly settles the open problem posed by Efroni et al. (2020).
### Related Work
The most relevant foundation for our work is the work by Efroni et al. (2020), which reviews model-based algorithms for CMDPs and establishes regret bounds for them. The authors analyze the LP-based algorithms OptCMDP and OptCMDP-bonus that achieve sublinear regret without cancellations. However, the Lagrangian-based algorithms they analyze, OptDual-CMDP and OptPrimalDual-CMDP, only achieve sublinear regret with cancellations. This is because the oscillatory behavior of dual and primal-dual descent methods prevents the individual iterates from being approximately feasible. Therefore, the authors pose the open question of whether one can devise Lagrangian-based algorithms that do not suffer from this issue.
The majority of relevant work providing guarantees for Lagrangian approaches to CMDPs is concerned with model-free primal-dual algorithms (Ding et al., 2020; Bai et al., 2022; Ding et al., 2022, 2022) or model-based dual algorithms (Liu et al., 2021, 2021). However, in both cases, the existing literature does not address the issue of the cancellation of errors when exploring the CMDP and thus does not provide a method for _safely_ finding an optimal feasible policy.4 While there is work on analyzing different forms of regularization to the Lagrangian-based algorithms, their guarantees either require the cancellation of errors (Liu et al., 2021, 2021) or assume access to exact value functions (Ying et al., 2022). Moskovitz et al. (2023) propose a first approach to address the cancellation of errors by replacing gradients with their optimistic gradient counterparts in well-known Lagrangian-based RL algorithms. While they show the empirical success of their methods, their theoretical analysis only covers a hypothetical algorithm with implicit
updates and requires full knowledge of the CMDP. Stooke et al. (2020) address the underlying problem of oscillations of Lagrangian methods for CMDPs via PID control in the context of deep RL, providing experimental successes but no guarantees. Thus, to the best of our knowledge, none of the existing works address the open question of Efroni et al. (2020) in the setup of an unknown CMDP.
While there are mentions of using the augmented Lagrangian method for CMDPs (Li et al., 2021; Lu, 2022; Krishnamurthy, 2003; Krishnamurthy and Abad, 2011, 2012), all such works are concerned with research questions rather different from ours. The only one similar to ours is that of Li et al. (2021). The authors propose a surrogate reward inspired by the augmented Lagrangian to promote safety during learning. However, their method significantly differs from ours as it is concerned with _instantaneous_ constraints in an infinite-horizon CMDP. Moreover, their analysis only shows that an optimal policy for their surrogate MDP is optimal for the original CMDP (under certain assumptions).
## 2 Background and Problem Formulation
**Notation:** For any \(n\in\mathbb{N}\), we use the short-hand notation \([n]\) to refer to the set of integers \(\{1,\ldots,n\}\). For any finite set \(X\), we denote by \(\Delta\left(X\right)\) the probability simplex over \(X\), i.e., \(\Delta\left(X\right)=\{v\in[0,1]^{X}|\sum_{x\in X}v(x)=1\}\). For \(a\in\mathbb{R}\), we set \([a]_{+}:=\max\{0,a\}\) to be the positive part of \(a\). For a vector \(b\in\mathbb{R}^{n}\), we write \([b]_{+}\) for the vector whose entries are the positive parts of the corresponding entries of \(b\). Similarly, for two vectors \(a,b\in\mathbb{R}^{n}\), we write \(a\leq b\) as a short-hand for \(a_{i}\leq b_{i}\), for all \(i\in[n]\). Throughout the paper, we denote the Euclidean norm by \(\|\cdot\|\).
We define a finite-horizon CMDP as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},H,p,c,(d_{i})_{i\in[I]},(\alpha_{i})_{i \in[I]},s_{1})\) with the following components. \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action space, respectively, and \(H>0\) denotes the horizon. Every episode consists of \(H\) steps, starting from the initial state \(s_{1}\in\mathcal{S}\). At every step \(h\in[H]\), \(p_{h}(s^{\prime}|s,a)\) denotes the probability of transitioning to state \(s^{\prime}\) if the current state and action are \(s\) and \(a\). Moreover, \(c_{h}\colon\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) denotes the objective cost function at step \(h\). For \(i\in[I]\), \(d_{i,h}\colon\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) refers to the cost function of the \(i\)-th constraint at step \(h\), and \(\alpha_{i}\in[0,H]\) denotes the threshold for the \(i\)-th constraint. We assume the state and action space are finite, with cardinalities \(S\) and \(A\), respectively. Furthermore, we assume the agent does not know the transition probabilities, objective costs, or constraint costs beforehand. Whenever the agent takes an action \(a\) in state \(s\) at time \(h\), it observes costs sampled from random variables \(C_{h}(s,a)\in[0,1]\) and \((D_{i,h}(s,a))_{i\in[I]}\in[0,1]^{I}\) such that \(\mathbb{E}[C_{h}(s,a)]=c_{h}(s,a)\) and \(\mathbb{E}[D_{i,h}(s,a)]=d_{i,h}(s,a)\), for all \(i\in[I]\). The agent interacts with the CMDP by playing a policy \(\pi=(\pi_{h})_{h\in[H]}\), meaning that if in state \(s\) at step \(h\in[H]\), the agent samples its next action from \(\pi_{h}(\cdot|s)\in\Delta\left(\mathcal{A}\right)\). For an arbitrary cost function \(l=(l_{h})_{h\in[H]}\) and transition probabilities \(p^{\prime}=(p^{\prime}_{h})_{h\in[H]}\), the expected cumulative cost under policy \(\pi\) is measured by the value function defined as follows:
\[V^{\pi}(l,p^{\prime}):= \mathbb{E}\bigg{[}\sum_{h=1}^{H}l_{h}(s_{h},a_{h})\mid s_{1},\pi,p^{\prime}\bigg{]},\]
where \((s_{h},a_{h})\) denotes the state-action pair at step \(h\) under transitions \(p^{\prime}\) and policy \(\pi\). We fix an optimal solution of the CMDP, given by a policy \(\pi^{*}\), defined as follows:
\[\pi^{*}\in\arg\min_{\pi\in\Pi}\quad V^{\pi}(c,p)\quad\text{s.t.}\quad V^{\pi} (d_{i},p)\leq\alpha_{i}\quad(\forall i\in[I]). \tag{1}\]
For brevity, we write \(V^{\pi}((l_{i})_{i\in[I]},p^{\prime}):=(V^{\pi}(l_{1},p^{\prime}),\ldots,V^{\pi }(l_{I},p^{\prime}))^{T}\in\mathbb{R}^{I}\) in the presence of \(I\) different cost functions \(l_{i}=(l_{i,h})_{h\in[H]}\) (\(i\in[I]\)), and \(\alpha:=(\alpha_{1},\ldots,\alpha_{I})^{T}\in\mathbb{R}^{I}\). Furthermore, we denote by \(\Pi:=\{\pi=(\pi_{h})_{h\in[H]}|\pi_{h}:\mathcal{S}\rightarrow\Delta\left( \mathcal{A}\right)\}\) the entire policy space.
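For concreteness, the following is a minimal sketch (not taken from the paper) of evaluating a value function \(V^{\pi}(l,p)\) by backward induction in a tabular finite-horizon model; the state and action spaces, horizon, transition kernel, stage costs, and stochastic policy are all hypothetical toy inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 4, 2, 5

p = rng.dirichlet(np.ones(S), size=(H, S, A))      # p[h, s, a, s']: transition kernel p_h(s'|s,a)
l = rng.uniform(0.0, 1.0, size=(H, S, A))          # stage costs l_h(s, a) in [0, 1]
pi = rng.dirichlet(np.ones(A), size=(H, S))        # stochastic policy pi_h(a|s)

def value(pi, l, p):
    V = np.zeros(S)                                # terminal condition V_{H+1} = 0
    for h in reversed(range(H)):
        Q = l[h] + p[h] @ V                        # Q_h(s,a) = l_h(s,a) + sum_s' p_h(s'|s,a) V_{h+1}(s')
        V = (pi[h] * Q).sum(axis=1)                # V_h(s) = sum_a pi_h(a|s) Q_h(s,a)
    return V

s1 = 0
print(value(pi, l, p)[s1])                         # V^pi(l, p) evaluated at the initial state s_1
```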
**Strong duality and dual methods:**Altman (1999); Paternain et al. (2019) proved that CMDPs possess the _strong duality_ property; i.e., given a feasible CMDP \(\mathcal{M}\), the following relation holds:
\[V^{\pi^{*}}(c,p)=\underbrace{\min_{\pi\in\Pi}\max_{\lambda\in\mathbb{R}^{I}_{+} }\mathcal{L}(\pi,\lambda)}_{\text{Primal problem}}=\underbrace{\max_{\lambda\in \mathbb{R}^{I}_{+}}\min_{\pi\in\Pi}\mathcal{L}(\pi,\lambda)}_{\text{Dual problem}}, \tag{2}\]
where \(\mathcal{L}(\pi,\lambda):=V^{\pi}(c,p)+\lambda^{T}\left(V^{\pi}((d_{i})_{i\in[I] },p)-\alpha\right)\) denotes the Lagrangian. The strong duality property gives theoretical justification to dual methods (Altman, 1999; Efroni et al., 2020;
Paternain et al., 2019). These methods are popular, as the dual problem can be solved via a sequence of (extended5) _unconstrained_ MDPs, each of which can be solved efficiently via DP (as opposed to using LPs for solving a sequence of CMDPs, for which the Bellman optimality principle does not hold).
Footnote 5: If the CMDP is unknown, backward induction involves an extra optimization step over the possible transitions (Jin et al., 2019).
In this work, we solve the min-max problem in Eq. (2) using the augmented Lagrangian method. This is beneficial since the analysis of the augmented Lagrangian method allows for convergence guarantees concerning the last iterate and not just the averaged iterates. Since the occurring subproblems are not MDPs anymore, we justify in Section 3 how they can still be solved efficiently by leveraging a Frank-Wolfe scheme and DP. We are now ready to state the main problem formulation of our work.
**Problem formulation:** In our setting, the agent interacts with the unknown CMDP over a fixed number of \(K>0\) episodes. In every episode \(k\in[K]\), the agent plays a policy \(\pi_{k}\in\Pi\) and its goal is to (simultaneously) minimize its regrets, defined as follows:
\[\mathcal{R}(K;c) :=\sum_{k\in[K]}[V^{\pi_{k}}(c,p)-V^{\pi^{*}}(c,p)]_{+},\] (Objective strong regret) \[\mathcal{R}(K;d) :=\max_{i\in[I]}\sum_{k\in[K]}\left[V^{\pi_{k}}(d_{i},p)-\alpha_ {i}\right]_{+}.\] (Constraint strong regret)
For simplicity, we will write _regret_ when referring to the strong regret throughout the paper. As we pointed out, existing works on Lagrangian-based algorithms (Liu et al., 2021; Efroni et al., 2020; Bai et al., 2022; Ding et al., 2022, 2022) only prove sublinear guarantees on a _weaker_ notion of regret, defined as follows:
\[\mathcal{R}_{\pm}(K;c) :=\sum_{k\in[K]}(V^{\pi_{k}}(c,p)-V^{\pi^{*}}(c,p)),\] (Objective weak regret) \[\mathcal{R}_{\pm}(K;d) :=\max_{i\in[I]}\sum_{k\in[K]}\left(V^{\pi_{k}}(d_{i},p)-\alpha_ {i}\right).\] (Constraint weak regret)
The weak regrets allow for the aforementioned cancellation of errors; i.e., even if they are sublinear in \(K\), the agent can continue compensating for a constraint violation in one episode with strict constraint satisfaction in another. On the other hand, a sublinear bound on the stronger notion of regret guarantees that the agent achieves a low constraint violation in most episodes.6 While this is crucial for practical applications, providing a bound for the strong regrets is strictly more challenging than for the weaker notion.
Footnote 6: Indeed, fix \(\epsilon>0\) and suppose \(\mathcal{R}(K;d)\leq\tilde{O}(\sqrt{K})\). Then there exist at most \(\tilde{O}(\sqrt{K}/\epsilon)\) episodes with a constraint violation of at least \(\epsilon\). In other words, only a small fraction \(\tilde{O}(1/(\epsilon\sqrt{K}))\) of the iterates is not \(\epsilon\)-safe. In comparison, this is by no means guaranteed by a sublinear bound on \(\mathcal{R}_{\pm}(K;d)\).
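To make the distinction between the weak and the strong regrets concrete, here is a minimal sketch (not taken from the paper) for a single constraint: a hypothetical sequence of per-episode constraint values oscillates around the threshold \(\alpha\), so the weak regret cancels to (essentially) zero even though the strong regret grows linearly in \(K\).

```python
import numpy as np

K, alpha = 1000, 1.0
V_d = alpha + 0.5 * np.sin(2 * np.pi * np.arange(K) / 20)   # hypothetical constraint values V^{pi_k}(d, p)

weak = np.sum(V_d - alpha)                      # cancellation of errors: sums to ~0
strong = np.sum(np.maximum(V_d - alpha, 0.0))   # positive parts only: grows linearly in K
print(weak, strong)
```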
## 3 Algorithm and Main Result
In this section, we introduce our algorithm OptAug-CMDP (see Algorithm 1) and state its regret guarantees in Theorem 1. In OptAug-CMDP, the agent interacts with the unknown CMDP over a fixed number of \(K>0\) episodes. To encourage exploration of the CMDP, the agent follows the well-known _optimism in the face of uncertainty_ principle (Auer et al., 2008) and builds an optimistic estimate of the CMDP in every episode \(k\in[K]\). That is, in every episode \(k\in[K]\), the agent builds optimistic estimates \(\tilde{c}_{k}\) for the objective cost \(c\), optimistic estimates \(\tilde{d}_{i,k}\) for the constraint costs \(d_{i}\), and a set of plausible transition probabilities \(B_{k}^{p}\), which we specify in the following paragraph.
**Optimistic estimates:** Let \(n_{h}^{k-1}(s,a):=\sum_{l=1}^{k-1}\mathbb{1}_{\{s_{h}^{l}=s,\ a_{h}^{l}=a\}}\) count the number of times that the state-action pair \((s,a)\) is visited at step \(h\) before episode \(k\). Here, (\(s_{h}^{l}\), \(a_{h}^{l}\)) denotes the state-action pair visited at step \(h\) in episode \(l\). First, we compute the empirical averages of the cost and transition
probabilities as follows:
\[\bar{c}_{h}^{k-1}(s,a):= \frac{\sum_{l=1}^{k-1}C_{h}^{l}(s,a)\operatorname{\mathbb{1}}_{\{s _{h}^{l}=s,\ a_{h}^{l}=a\}}}{n_{h}^{k-1}(s,a)\lor 1},\] \[\bar{d}_{i,h}^{k-1}(s,a):= \frac{\sum_{l=1}^{k-1}D_{i,h}^{l}(s,a)\operatorname{\mathbb{1}}_ {\{s_{h}^{l}=s,\ a_{h}^{l}=a\}}}{n_{h}^{k-1}(s,a)\lor 1}\quad(\forall i\in[I]),\] \[\bar{p}_{h}^{k-1}(s^{\prime}|s,a):= \frac{\sum_{l=1}^{k-1}\operatorname{\mathbb{1}}_{\{s_{h}^{l}=s, \ a_{h}^{l}=a,\ s_{h+1}^{l}=s^{\prime}\}}}{n_{h}^{k-1}(s,a)\lor 1},\]
where \(a\lor b:=\max\{a,b\}\). With this, we define the optimistic costs and the set of plausible transition probabilities as
\[\tilde{c}_{k,h}(s,a) :=\bar{c}_{h}^{k-1}(s,a)-\beta_{k,h}^{c}(s,a),\] \[\tilde{d}_{i,k,h}(s,a) :=\bar{d}_{i,h}^{k-1}(s,a)-\beta_{i,k,h}^{d}(s,a)\quad(\forall i \in[I]), \tag{3}\] \[B_{k,h}^{p}(s,a) :=\{\tilde{p}_{h}(\cdot|s,a)\in\Delta\left(\mathcal{S}\right)\mid \forall s^{\prime}\in\mathcal{S}\colon|\tilde{p}_{h}(s^{\prime}|s,a)-\bar{p}_ {h}^{k-1}(s^{\prime}|s,a)|\leq\beta_{k,h}^{p}(s,a,s^{\prime})\},\] \[B_{k}^{p} :=\{\tilde{p}\mid\forall s,a,h\colon\tilde{p}_{h}(\cdot|s,a)\in B _{k,h}^{p}(s,a)\}.\]
Here, \(\beta_{k,h}^{c}(s,a)=\beta_{i,k,h}^{d}(s,a)>0\) denote the exploration bonuses for the costs and \(\beta_{k,h}^{p}(s,a,s^{\prime})>0\) denotes the confidence threshold for the transitions. For any \(\delta\in(0,1)\), we specify the correct values for those quantities in Appendix E.1 to obtain our regret guarantees with probability at least \(1-\delta\). Below, we describe how the agent computes its policy in episode \(k\).
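The following is a minimal sketch (not taken from the paper) of the estimates in Eq. (3) for a single \((s,a,h)\) cell: empirical means are formed from the observed costs and next states, a bonus is subtracted from the empirical cost, and a confidence interval is placed around each empirical transition probability. The bonus \(\beta^{c}\) and radius \(\beta^{p}\) used here are generic Hoeffding/Bernstein-style placeholders; the exact choices of the paper are given in its Appendix E.1.

```python
import numpy as np

rng = np.random.default_rng(0)
S, delta = 5, 0.05
true_c, true_p = 0.7, rng.dirichlet(np.ones(S))          # hypothetical ground truth for one (s, a, h)

n = 50                                                   # visits n_h^{k-1}(s, a) before episode k
costs = rng.binomial(1, true_c, size=n)                  # observed costs C_h(s, a) in [0, 1]
next_states = rng.choice(S, size=n, p=true_p)            # observed next states

c_bar = costs.mean()                                     # empirical average cost
p_bar = np.bincount(next_states, minlength=S) / n        # empirical transition probabilities

beta_c = np.sqrt(np.log(2 / delta) / (2 * n))                        # Hoeffding-style bonus (placeholder)
beta_p = np.sqrt(2 * p_bar * (1 - p_bar) * np.log(2 / delta) / n) \
         + np.log(2 / delta) / n                                     # Bernstein-style radius (placeholder)

c_tilde = c_bar - beta_c                                 # optimistic (lower) cost estimate
p_low, p_high = np.clip(p_bar - beta_p, 0, 1), np.clip(p_bar + beta_p, 0, 1)
print(c_tilde, p_low, p_high)                            # one cell of the confidence set B_k^p
```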
**Policy update:** Given the optimistic CMDP at episode \(k\), we derive the next policy \(\pi_{k}\) using a scheme motivated by the augmented Lagrangian method (cf. Eqs. (4) and (5)). At the end of this section, we explain how we can perform the optimization step in Eq. (4) efficiently, up to a specified accuracy \(\epsilon_{k}\). For now, we treat this part of the algorithm as a black-box subroutine.
Optimistic exploration alone with the augmented Lagrangian, however, is insufficient to obtain sublinear regret guarantees for our algorithm. For technical reasons, our analysis also requires the optimistic CMDPs with costs \(\tilde{c}_{k}\), \((\tilde{d}_{i,k})_{i\in[I]}\) and transitions \(\tilde{p}_{k}\in B_{k}^{p}\) (cf. Eq. (4)) to be _strictly_ feasible, in every episode \(k\in[K]\). Our analysis in Section 4.2 explains the need for this technical assumption. To address this issue, we propose a pre-training phase _before_ the optimistic exploration phase, which we describe in the following paragraph.
**Pre-training phase:** In this phase, the agent repeatedly executes a fixed policy \(\bar{\pi}\) for \(K^{\prime}\leq K\) episodes. The policy \(\bar{\pi}\) must be _strictly_ feasible for the true CMDP, which we formally state in the following assumption.
**Assumption 1** (Strictly feasible policy).: _We have access to a policy \(\bar{\pi}\) such that \(V^{\bar{\pi}}(d_{i},p)<\alpha_{i}\) for all \(i\in[I]\). Furthermore, we assume that the slack \(\gamma\), defined below, is known7:_
Footnote 7: In other words, there is a Slater point for the constraint set of the true CMDP. Note that knowing a lower bound instead of the exact slack \(\gamma\) is sufficient as well.
\[\gamma:=\min_{i\in[I]}\left(\alpha_{i}-V^{\bar{\pi}}(d_{i},p)\right)\in(0,H].\]
Note that this is stronger than only assuming the _existence_ of a strictly feasible policy. However, making this assumption is realistic in many practical setups (Liu et al., 2021; Bura et al., 2022). For example, in the case of a race car that should not exceed the boundary of a track, it would be sufficient to have access to the policy of a car that strictly stays within the boundaries but may be arbitrarily slow. To address the technical issue mentioned earlier, we need to set \(K^{\prime}\) such that the following condition holds for some \(\nu\in(0,1)\), with high probability:
\[V^{\bar{\pi}}(\tilde{d}_{i,k},\tilde{p}_{k})\leq\alpha_{i}-\nu\gamma\quad( \forall i\in[I]\;\forall k\in\{K^{\prime},\ldots,K\}),\]
where \(\tilde{p}_{k}\) is defined by the update in Eq. (4).8 In particular, the fixed policy \(\bar{\pi}\) is strictly feasible for the optimistic CMDP at every episode \(k\geq K^{\prime}\). Indeed, if the agent plays \(\bar{\pi}\) for the first \(K^{\prime}\) episodes
of the algorithm with a large enough constant \(K^{\prime}\), then for all future episodes, the constraint value function of \(\bar{\pi}\) under the estimated model is close to the constraint value function of \(\bar{\pi}\) under the true model. Thus, we can ensure the above condition. Leveraging an adaption of an on-policy error bound (see Appendix C), we prove that it is sufficient to set \(K^{\prime}\) as follows:
**Lemma 1**.: _Suppose that Assumption 1 holds, i.e., the agent has access to a strictly feasible \(\bar{\pi}\) and its slack \(\gamma>0\). Fix any \(\nu\in(0,1)\), and suppose the agent executes \(\bar{\pi}\) for \(K^{\prime}=\tilde{O}\left(\max\left\{\frac{S^{2}AH^{3}}{(1-\nu)\gamma},\frac{\mathcal{N}SAH^{4}}{(1-\nu)^{2}\gamma^{2}}\right\}\right)\) episodes, where \(\mathcal{N}:=\max_{s,a,h}|\{s^{\prime}\mid p_{h}(s^{\prime}|s,a)>0\}|\) denotes the maximum number of transitions. Then, if the agent updates the optimistic CMDP based on the observations from those episodes (cf. Eq. (3)), with probability at least \(1-\delta\) the following condition is satisfied for every \(k\in\{K^{\prime},\ldots,K\}\):_
\[V^{\bar{\pi}}(\tilde{d}_{i,k},\tilde{p}_{k})\leq\alpha_{i}-\nu\gamma\quad( \forall i\in[I]).\]
We present the resulting OptAug-CMDP algorithm in Algorithm 1.
```
Input: \(K\) (total number of episodes), \(K^{\prime}\leq K\) (number of pre-training episodes), \((\eta_{k})_{k\geq K^{\prime}+1}\) (step sizes), \((\epsilon_{k})_{k\geq K^{\prime}+1}\) (accuracies), \(\bar{\pi}\) (strictly feasible policy), \(\alpha\) (constraint thresholds), \(\lambda_{K^{\prime}+1}:=0\in\mathbb{R}^{I}\)

// Phase 1: Pre-training the model
for \(k=1,\ldots,K^{\prime}\) do
    Play policy \(\pi_{k}=\bar{\pi}\), update estimates of the costs \(\tilde{c}_{k+1}\), \((\tilde{d}_{i,k+1})_{i\in[I]}\) and transitions \(B^{p}_{k+1}\) (Eq. (3)).

// Phase 2: Optimistic exploration with pre-trained model
for \(k=K^{\prime}+1,\ldots,K\) do
    Update policy (by finding \(\pi_{k}\), \(\tilde{p}_{k}\) such that the objective is \(\epsilon_{k}\)-close to the minimum):
        \(\pi_{k},\tilde{p}_{k}:=\arg\min_{\pi\in\Pi,\ p^{\prime}\in B^{p}_{k}}\left(V^{\pi}(\tilde{c}_{k},p^{\prime})+\frac{1}{2\eta_{k}}\|[\lambda_{k}+\eta_{k}(V^{\pi}((\tilde{d}_{i,k})_{i\in[I]},p^{\prime})-\alpha)]_{+}\|^{2}\right)\)    (4)
    Update dual variables:
        \(\lambda_{k+1}:=[\lambda_{k}+\eta_{k}(V^{\pi_{k}}((\tilde{d}_{i,k})_{i\in[I]},\tilde{p}_{k})-\alpha)]_{+}\)    (5)
    Play \(\pi_{k}\), update estimates of the costs \(\tilde{c}_{k+1}\), \((\tilde{d}_{i,k+1})_{i\in[I]}\) and transitions \(B^{p}_{k+1}\) (Eq. (3)).
```
**Algorithm 1** OptAug-CMDP
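The Phase-2 updates are easy to state in code. The following is a minimal sketch (not taken from the paper) of the augmented-Lagrangian objective of Eq. (4) and the dual update of Eq. (5); the inner minimization over \((\pi,p^{\prime})\) is treated as a black box, and the value-function numbers below are hypothetical outputs of that inner solver.

```python
import numpy as np

def augmented_lagrangian(V_c, V_d, lam, eta, alpha):
    # Objective of Eq. (4) at a candidate (pi, p'): V_c = V^pi(c_tilde, p'),
    # V_d = vector of constraint values V^pi(d_tilde_i, p'), i = 1, ..., I.
    return V_c + np.sum(np.maximum(lam + eta * (V_d - alpha), 0.0) ** 2) / (2.0 * eta)

def dual_update(lam, eta, V_d, alpha):
    # Eq. (5): positive-part (projected) augmented-Lagrangian multiplier update.
    return np.maximum(lam + eta * (V_d - alpha), 0.0)

# usage with hypothetical numbers for I = 2 constraints
alpha = np.array([2.0, 3.0])
lam, eta = np.zeros(2), 1.5
V_c, V_d = 4.2, np.array([2.4, 2.7])                 # hypothetical output of the inner solver
print(augmented_lagrangian(V_c, V_d, lam, eta, alpha))
print(dual_update(lam, eta, V_d, alpha))
```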
We are now ready to state the regret guarantees for OptAug-CMDP.
**Theorem 1**.: _Suppose that Assumption 1 holds, let \(\delta\in(0,1)\) and \(\nu>0\). Then there exist \(K^{\prime}=\tilde{O}\left(\max\left\{\frac{S^{2}AH^{3}}{(1-\nu)\gamma},\frac{\mathcal{N}SAH^{4}}{(1-\nu)^{2}\gamma^{2}}\right\}\right)\) and \(\eta_{k}\), \(\epsilon_{k}\) such that with probability at least \(1-\delta\), OptAug-CMDP achieves a total regret of_
\[\mathcal{R}(K;c) =\tilde{O}\left(\sqrt{\mathcal{N}SAH^{4}K}+S^{2}AH^{3}+K^{\prime}H\right),\] \[\mathcal{R}(K;d) =\tilde{O}\left(\sqrt{\mathcal{N}SAH^{4}K}+S^{2}AH^{3}\right).\]
We remark that to achieve this bound, using step sizes \(\eta_{K^{\prime}+k}=\Theta(k^{2.5})\) and accuracies \(\epsilon_{K^{\prime}+k}=\Theta(1/\eta_{K^{\prime}+k})\) (when only highlighting the dependency on \(k\)) is sufficient, as we discuss in Appendix D.3.
**Comparison with OptDual-CMDP:** Crucially, our bound holds for the stronger notion of regret. In contrast, the one for the related OptDual-CMDP algorithm (see Appendix A.2) by Efroni et al. (2020) only concerns the weak regret, which allows for the cancellation of errors. Apart from this, the bound we obtain is similar in spirit. However, our regret bound does not depend on the number
\(I\) of constraints up to polylogarithmic factors. Moreover, we get slightly different (but not worse) constants and the constant extra term \(K^{\prime}H\) due to the model pre-training phase. In addition, it is important to note that we can choose \(\eta_{k}\), \(\epsilon_{k}\) in terms of \(\gamma\) (Assumption 1) such that in the leading term of the regret bound, there is no dependency on \(\min_{i\in[I]}(\alpha_{i}-V^{\bar{\pi}}(d_{i},p))\), as opposed to OptDual-CMDP.
**Solving the inner problem:** We now elaborate on the subroutine for solving the optimization problem in Eq. (4) that defines the policy update in Algorithm 1. Importantly, we can reformulate Eq. (4) as a constrained optimization problem that is convex in the state-action-state occupancy measure (see Appendix B.1). However, the resulting problem is neither an LP nor an extended MDP (due to the nonlinear objective), which prevents solving it with a single DP or LP solver call. Albeit related, it is not a standard convex RL problem either (due to the additional optimization over the constraint set \(B_{k}^{p}\)). Moreover, computing projections onto the high-dimensional domain is prohibitive, making it impossible to run projected gradient descent.
The projection-free method we propose in Appendix B.2 overcomes this difficulty by combining a Frank-Wolfe scheme with DP in a sequence of (extended) MDPs. In every iteration of this inner method, we consider the linear minimization step needed for a Frank-Wolfe iteration. When switching back to optimization over \(\Pi\) and \(B_{k}^{p}\), we can then perform this minimization step by solving an extended but unconstrained MDP via DP.9 The smoothness properties of the objective of Eq. (4) then determine the iteration complexity of the Frank-Wolfe scheme. Formally, we have the following.
Footnote 9: Formally, this is because a version of the Bellman optimality principle applies after dualizing the constraints, even if we need to optimize over the confidence intervals for the transitions during backward induction.
**Proposition 1**.: _In episode \(k\), fix any accuracy of \(\epsilon_{k}>0\). There exists an algorithm for solving Eq. (4) such that the objective at its output \((\pi_{k},\tilde{p}_{k})\) is \(\epsilon_{k}\)-close to the optimum of Eq. (4), by solving \(O\left(\frac{\eta_{k}IS^{2}AH}{\epsilon_{k}}\right)\) (extended) MDPs via DP._
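The following is a minimal sketch (not taken from the paper) of a generic Frank-Wolfe loop of the kind referred to in Proposition 1. In the paper's setting, the linear minimization oracle is a DP solve of an extended MDP over occupancy measures; purely for illustration, the sketch uses a probability simplex as the domain and a smooth convex quadratic as the objective, both hypothetical.

```python
import numpy as np

def lmo_simplex(grad):
    # Linear minimization oracle over the simplex: the minimizer of <grad, v> is a vertex.
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def frank_wolfe(grad_f, x0, n_iters=200):
    x = x0
    for t in range(n_iters):
        v = lmo_simplex(grad_f(x))          # in the paper, this step is a DP solve of an extended MDP
        gamma = 2.0 / (t + 2.0)             # standard open-loop step size
        x = (1 - gamma) * x + gamma * v     # convex combination keeps the iterate feasible
    return x

target = np.array([0.1, 0.6, 0.3])          # hypothetical minimizer, already inside the simplex
grad_f = lambda x: 2.0 * (x - target)       # gradient of ||x - target||^2
print(frank_wolfe(grad_f, np.ones(3) / 3))  # converges towards target
```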
## 4 Sketch of the Regret Analysis
In this section, we outline the key steps in our proof of Theorem 1 and defer the detailed proofs to Appendix E. We will condition our regret analysis on a _success event_\(G\), which we formally define in Appendix E.1. \(G\) ensures that (a) the optimistic cost estimates in Eq. (3) are, in fact, optimistic and (b) the true transitions are contained in the set of plausible models from Eq. (3), i.e.:
\[\tilde{c}_{k}\leq c,\quad\tilde{d}_{i,k}\leq d_{i}\ (\forall i\in[I]), \quad p\in B_{k}^{p},\]
for every episode \(k\in[K]\). In the following lemma, we prove that \(G\) occurs with high probability.
**Lemma 2**.: _Fix \(\delta\in(0,1)\) and define the optimistic model in Eq. (3) accordingly. Then, the success event \(G\) occurs with probability at least \(1-\delta\), i.e., \(P[G]\geq 1-\delta\)._
We proceed with the regret analysis and first split the regrets between the two phases of the algorithm:
\[\mathcal{R}(K;c) =\underbrace{\sum_{k=1}^{K^{\prime}}[V^{\pi_{k}}(c,p)-V^{\pi^{*} }(c,p)]_{+}}_{\text{Pre-Training}}+\underbrace{\sum_{k=K^{\prime}+1}^{K}[V^{ \pi_{k}}(c,p)-V^{\pi^{*}}(c,p)]_{+}}_{\text{Optimistic Exploration}},\] \[\mathcal{R}(K;d) \leq\underbrace{\max_{i\in[I]}\sum_{k=1}^{K^{\prime}}[V^{\pi_{k} }(d_{i},p)-\alpha_{i}]_{+}}_{\text{Pre-Training}}+\underbrace{\max_{i\in[I]} \sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k}}(d_{i},p)-\alpha_{i}]_{+}}_{\text{ Optimistic Exploration}}.\]
Then, applying Lemma 1, we can trivially bound the objective regret during the pre-training phase by \(K^{\prime}H\). Since \(\bar{\pi}\) is strictly feasible, there is no constraint regret during pre-training. We now focus on the regrets incurred in the optimistic exploration phase. For this, we further decompose the
regrets as follows (see Appendix E.1):
\[\mathcal{R}(K;c)\leq K^{\prime}H+\underbrace{\sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k} }(c,p)-V^{\pi_{k}}(\tilde{c}_{k},\tilde{p}_{k})]_{+}}_{\text{Estimation Error}}+ \underbrace{\sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k}}(\tilde{c}_{k},\tilde{p}_{k})- V^{\pi^{*}}(c,p)]_{+}}_{\text{Optimization Error}},\] \[\mathcal{R}(K;d)\leq\underbrace{\max_{i\in[I]}\sum_{k=K^{\prime}+ 1}^{K}[V^{\pi_{k}}(d_{i},p)-V^{\pi_{k}}(\tilde{d}_{i,k},\tilde{p}_{k})]_{+}}_{ \text{Estimation Error}}+\underbrace{\max_{i\in[I]}\sum_{k=K^{\prime}+1}^{K}[V^ {\pi_{k}}(\tilde{d}_{i,k},\tilde{p}_{k})-\alpha_{i}]_{+}}_{\text{Optimization Error}}.\]
We have thus decomposed the regrets into
1. _estimation errors_ that are due to the estimated model, and
2. _optimization errors_ that we can analyze via the underlying optimization method.
Conditioning on the success event \(G\), we will obtain bounds sublinear in \(K\) for both parts of the decomposition. Note that we cannot adapt the analysis by Efroni et al. (2020) to achieve this goal since it only allows for bounds on the averages of the _signed_ optimization errors. We proceed with bounding the estimation errors in the next section.
### Estimation Errors (Optimistic Exploration)
Leveraging on-policy error bounds for optimistic exploration in MDPs (Appendix C), we establish the desired bound on the estimation errors.
**Lemma 3** (Estimation errors).: _Let \((\pi_{k})_{k=K^{\prime}+1}^{K}\) be the sequence of policies obtained by OptAug-CMDP. Then, conditioned on \(G\), we can bound the estimation errors as follows:_
\[\sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k}}(c,p)-V^{\pi_{k}}(\tilde{c}_{ k},\tilde{p}_{k})]_{+}\leq \tilde{O}\left(\sqrt{\mathcal{N}SAH^{4}K}+S^{2}AH^{3}\right),\] \[\max_{i\in[I]}\sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k}}(d_{i},p)-V^{ \pi_{k}}(\tilde{d}_{i,k},\tilde{p}_{k})]_{+}\leq \tilde{O}\left(\sqrt{\mathcal{N}SAH^{4}K}+S^{2}AH^{3}\right).\]
We refer to Appendix E.2 for the proof. Lemma 3 proves that the estimation errors for both the objective and constraints can indeed be bounded by a term that is sublinear in \(K\). In the next section, we provide a bound for the optimization errors.
### Optimization Errors (Optimistic Exploration)
Recall that by Proposition 1, our solver for the inner problem (Eq. (4)) has the following guarantee, for every episode \(k\geq K^{\prime}+1\):
\[\mathcal{L}_{k}(\pi_{k},\tilde{p}_{k})\leq\min_{\begin{subarray}{c}\pi\in \Pi\\ p^{\prime}\in B_{k}^{\mathrm{P}}\end{subarray}}\mathcal{L}_{k}(\pi,p^{\prime}) +\epsilon_{k},\]
where \(\mathcal{L}_{k}(\pi,p^{\prime}):=V^{\pi}(\tilde{c}_{k},p^{\prime})+\frac{1}{2 \eta_{k}}\|[\lambda_{k}+\eta_{k}(V^{\pi}((\tilde{d}_{i,k})_{i\in[I]},p^{\prime })-\alpha)]_{+}\|^{2}\) denotes the objective at episode \(k\). If the true CMDP is known, i.e., no exploration is required, then Xu (2021) proves a sublinear regret bound for the optimization error if the step sizes \(\eta_{k}\) and accuracies \(\epsilon_{k}\) are chosen suitably.10 They obtain this result by bounding the dual variables \(\lambda_{k}\) across the iterations \(k\). In our setting, however, since the objective and constraint set of the optimization problem (Eq. (4)) _change_ in every episode, we require a novel type of analysis.
Footnote 10: That is, such that \(\sum_{k=K^{\prime}+1}^{K}1/\eta_{k}=o(K)\) and \(\sum_{k=K^{\prime}+1}^{K}\epsilon_{k}=o(K)\), see Xu (2021, Remark 7).
As a first step, we show that we can bound the optimization errors in episode \(k\) by expressions that depend on the dual variables \(\lambda_{k}\) and \(\lambda_{k+1}\).
**Lemma 4**.: _Conditioned on G, for each \(k\in\{K^{\prime}+1,\ldots,K\}\), in OptAug-CMDP we have_
\[V^{\pi_{k}}(\tilde{c}_{k},\tilde{p}_{k})-V^{\pi^{*}}(c,p) \leq\epsilon_{k}+\frac{\|\lambda_{k}\|^{2}-\|\lambda_{k+1}\|^{2}}{ 2\eta_{k}},\] \[V^{\pi_{k}}(\tilde{d}_{i,k},\tilde{p}_{k})-\alpha_{i} \leq\frac{\lambda_{k+1}(i)-\lambda_{k}(i)}{\eta_{k}}\quad(\forall i \in[I]).\]
To further bound the norm of the dual iterates, for each episode \(k\geq K^{\prime}\), we consider the \(k\)-th optimistic CMDP, which we define as follows:
\[\min_{\pi\in\Pi}\quad V^{\pi}(\tilde{c}_{k},\tilde{p}_{k})\quad\text{s.t.} \quad V^{\pi}(\tilde{d}_{i,k},\tilde{p}_{k})\leq\alpha_{i}\quad(\forall i\in [I]). \tag{6}\]
Note that Eq. (6) indeed is a CMDP. By Lemma 1, \(\bar{\pi}\) is strictly feasible for Eq. (6) for all \(k\geq K^{\prime}\), with a slack of \(\geq\nu\gamma\) uniformly bounded away from zero. By strong duality (Paternain et al., 2019), there exist primal-dual pairs \((\pi_{k}^{*},\lambda_{k}^{*})\) satisfying
\[V^{\pi_{k}^{*}}(\tilde{c}_{k},\tilde{p}_{k})=\min_{\pi\in\Pi}\left(V^{\pi}( \tilde{c}_{k},\tilde{p}_{k})+(\lambda_{k}^{*})^{T}(V^{\pi}((\tilde{d}_{i,k})_{ i\in[I]},\tilde{p}_{k})-\alpha)\right).\]
We formalize this with Lemma 18 in Appendix E.3.2 by using the fact that we can formulate Eq. (6) as a convex optimization problem using the LP formulation of CMDPs (Appendices A.3 and A.4). With this, we can establish the following bound on the dual iterates.
**Lemma 5**.: _Let \(k\in\{K^{\prime}+1,\ldots,K\}\) and suppose Eq. (6) is strictly feasible for every \(k^{\prime}\in\{K^{\prime},\ldots,K\}\). Let \((\pi_{k^{\prime}}^{*},\lambda_{k^{\prime}}^{*})\) be pairs of primal-optimal and dual-optimal solutions for Eq. (6). Then the iterates of OptAug-CMDP satisfy_
\[\|\lambda_{k+1}\|\leq 2\sum_{t=K^{\prime}}^{k}\|\lambda_{t}^{*}\|+\sum_{t=K^{ \prime}+1}^{k}\sqrt{2\eta_{t}\epsilon_{t}}.\]
Having achieved a bound on the dual iterates \(\lambda_{k+1}\) in terms of the dual maximizers \(\lambda_{k^{\prime}}^{*}\) (\(k^{\prime}\in\{K^{\prime},\ldots,k\}\)), we can now aim to provide bounds for the latter. Indeed, we can leverage results from constrained convex optimization (Appendix A.4) to arrive at the following lemma.
**Lemma 6**.: _Suppose Assumption 1 holds. Let \(\nu\in(0,1)\) and choose \(K^{\prime}\) as in Lemma 1. Let \(k\in\{K^{\prime},\ldots,K\}\), and let \((\pi_{k}^{*},\lambda_{k}^{*})\) be a pair of primal-optimal and dual-optimal solutions for Eq. (6). Then, conditioned on \(G\), we have_
\[\|\lambda_{k}^{*}\|\leq\|\lambda_{k}^{*}\|_{1}\leq\frac{H}{\nu\gamma}.\]
Plugging Lemma 5 into the bounds from Lemma 4 and replacing the norms of the \(\lambda_{k}^{*}\) using the bound from Lemma 6, we obtain sublinear optimization errors when choosing \(\eta_{k}\), \(\epsilon_{k}\) correctly:
**Lemma 7** (Optimization errors).: _Suppose Assumption 1 holds. Let \(\nu\in(0,1)\) and choose \(K^{\prime}\) as in Lemma 1. Suppose that the event \(G\) occurs. When using step sizes \(\eta_{K^{\prime}+k}=\Theta(k^{2.5})\) and \(\epsilon_{K^{\prime}+k}=\Theta(1/\eta_{K^{\prime}+k})\), we have_
\[\sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k}}(\tilde{c}_{k},\tilde{p}_{k})-V^{\pi^{*}}(c,p)]_{+} \leq\sum_{k=K^{\prime}+1}^{K}\left(\frac{(O(\sigma k)+\sum_{t=K^{\prime}+1}^{k}\sqrt{2\eta_{t}\epsilon_{t}})^{2}}{2\eta_{k}}+\epsilon_{k}\right)\leq O(\sqrt{K}),\] \[\max_{i\in[I]}\sum_{k=K^{\prime}+1}^{K}[V^{\pi_{k}}(\tilde{d}_{i,k},\tilde{p}_{k})-\alpha_{i}]_{+} \leq\sum_{k=K^{\prime}+1}^{K}\frac{O(\sigma k)+\sum_{t=K^{\prime}+1}^{k}\sqrt{2\eta_{t}\epsilon_{t}}}{\eta_{k}}\leq O(\sqrt{K}),\]
_where \(\sigma=\frac{H}{\nu\gamma}\) and in fact \(O(\sigma k)\) can be replaced by \((2+2(k-K^{\prime}))\sigma\)._
**Remark:** We need to choose \(\eta_{k}\) large enough (increasing) and \(\epsilon_{k}\) small enough (decreasing) to ensure a sublinear error bound. At the same time, we do not want to choose \(\eta_{k}\) larger than necessary or \(\epsilon_{k}\) smaller than necessary for computational reasons (see Proposition 1). We refer to Appendix D.3 for a discussion. In the case of an exact subroutine, we can plug in \(\epsilon_{k}=0\) to achieve an analogous result.
According to our regret decomposition, by adding up the errors of the pre-training phase, the estimation errors in the second phase, and the optimization errors in the second phase, we can indeed deduce our main result (Theorem 1). We showed how to bound the estimation errors using the optimism paradigm (Lemma 3). For the analysis of the optimization errors, we had to generalize the convergence analysis of the inexact augmented Lagrangian method (Lemma 7). We showed that if we have access to a safe baseline policy, the pre-training phase guarantees all assumptions required for this and adds a constant term to the regret (Lemma 1).
## 5 Conclusion
In this work, we showed how to overcome the problem of the _cancellation of errors_, i.e., the oscillation of standard Lagrangian-based algorithms for CMDPs around an optimal safe policy. We leveraged the augmented Lagrangian method to design our algorithm OptAug-CMDP. Unlike the related OptDual-CMDP algorithm of Efroni et al. (2020), this requires a subroutine that solves a non-linear optimization problem in each episode. We devised an efficient algorithm for this, avoiding projections or LP. We then provided a regret analysis that, unlike previous works, does not require the cancellation of errors to arrive at sublinear regret guarantees. This means that in contrast to existing Lagrangian-based algorithms, our algorithm is provably safe _while exploring_ the unknown CMDP.
This first partial answer to the open problem posed by Efroni et al. (2020) leads to several further questions: Can we obtain tighter bounds for the inner sub-routine and the regret, as our problem has a richer structure than the general convex optimization setup? While OptAug-CMDP enjoys stronger regret guarantees, the proposed inner subroutine has a higher computational cost than the one in OptDual-CMDP, which may be possible to improve. Moreover, it remains open whether one can remove the requirement of access to a strictly feasible policy. Finally, we aim to extend our approach to the more practical function approximation setup. |
2305.11333 | Sequences with increasing subsequence | We study analytic and Borel subsets defined similarily to the old example of
analytic complete set given by Luzin. Luzin's example, which is essentially a
subset of the Baire space, is based on the natural partial order on naturals,
i.e. division. It consists of sequences which contain increasing subsequence in
given order. We consider a variety of sets defined in a similar way. Some of
them occurs to be Borel subsets of the Baire space, while others are analytic
complete, hence not Borel. In particular, we show that an analogon of Luzin
example based on the natural linear order on rationals is analytic complete. We
also characterise all countable linear orders having such property. | Łukasz Mazurkiewicz, Szymon Żeberski | 2023-05-18T22:42:46Z | http://arxiv.org/abs/2305.11333v1 | # Sequences with increasing subsequence
###### Abstract.
We study analytic and Borel subsets defined similarly to the old example of an analytic complete set given by Luzin. Luzin's example, which is essentially a subset of the Baire space, is based on the natural partial order on the naturals, i.e. divisibility. It consists of sequences which contain an increasing subsequence in the given order.
We consider a variety of sets defined in a similar way. Some of them turn out to be Borel subsets of the Baire space, while others are analytic complete, hence not Borel.
In particular, we show that an analogue of Luzin's example based on the natural linear order on the rationals is analytic complete. We also characterise all countable linear orders having this property.
The work has been partially financed by grant **8211204601, MPK: 9130730000** from the Faculty of Pure and Applied Mathematics, Wroclaw University of Science and Technology.
AMS Classification: Primary: 03E75, 28A05, 54H05; Secondary: 03E17
Keywords: analytic set, Borel set, Polish space, analytic-complete set, Borel reduction, partial order, linear order.
Given an analytic set, proving its analytic completeness is a fundamental way of showing that it is not Borel. As shown in [6], \(\mathbf{\Sigma}^{1}_{1}\)-complete (or, in this case, rather \(\mathbf{\Pi}^{1}_{1}\)-complete) sets can be used in a context that is not necessarily set-theoretic. In that paper, the authors investigate properties of regular languages of thin trees. In particular, they are interested in descriptive properties of such languages. One of their results is that a regular language which does not fulfil a certain definability condition (a so-called non-WMSO-definable language) is \(\mathbf{\Pi}^{1}_{1}\)-complete.
Naturally, \(\mathbf{\Sigma}^{1}_{1}\)-complete sets can also be useful in more set-theoretic research, as in [5], where the class of all Banach spaces isomorphic to \(c_{0}\) is considered. The main result of that work states that this class is a complete analytic set (with respect to the Effros Borel structure), so it cannot be Borel.
In [4], a class of coloring problems induced by actions of a countable group on Polish spaces is studied. It is shown that the set of such coloring problems which additionally have a Baire measurable solution for a particular free action \(\alpha\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete (when \(\alpha\) is not trivial).
In this paper we would like to examine the descriptive complexity of sets of sequences with an increasing subsequence, seen as subsets of \(\omega^{\omega}\) (or of another space homeomorphic to it). The motivation comes from the classical example of Lusin (which can be found in [2, 27.2]):
**Theorem 1.2** (Lusin).: _Let \(|\) denote the divisibility relation on the positive natural numbers \(\mathbb{N}\). Set_
\[L=\{y\in\mathbb{N}^{\omega}:(\exists k_{0}<k_{1}<\ldots)(\forall i\in\omega) (y(k_{i})\mid y(k_{i+1}))\}.\]
\(L\) _is a \(\mathbf{\Sigma}^{1}_{1}\)-complete subset of \(\mathbb{N}^{\omega}\)._
We want to study the descriptive complexity of sets defined in a similar fashion. Assume that \(X\) is a countable set and \(R\) is a relation on \(X\). Define
\[L_{(X,R)}=\{y\in X^{\omega}:(\exists k_{0}<k_{1}<\ldots)(\forall i\in\omega)( y(k_{i})Ry(k_{i+1}))\}.\]
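Before proceeding, it may help to see the combinatorial content of this definition on finite words. The following sketch (an illustration only -- membership in \(L_{(X,R)}\) of course concerns infinite sequences and infinite chains) computes the longest subsequence of a finite word whose consecutive entries are \(R\)-related, here with \(R\) being divisibility as in Theorem 1.2; the helper names are ours.

```python
def longest_r_chain(word, R):
    """Length of the longest subsequence word[k_0], word[k_1], ... (k_0 < k_1 < ...)
    in which consecutive entries are R-related."""
    best = [1] * len(word)                 # best[j]: longest chain ending at position j
    for j in range(len(word)):
        for i in range(j):
            if R(word[i], word[j]):
                best[j] = max(best[j], best[i] + 1)
    return max(best, default=0)

divides = lambda a, b: b % a == 0          # the relation | from Theorem 1.2
print(longest_r_chain([3, 7, 6, 5, 12, 48], divides))   # prints 4, via 3 | 6 | 12 | 48
```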
In the next section we will provide some basic facts and discuss the complexity of \(L_{(X,R)}\) for various examples of \((X,R)\). We will focus mainly on the case of posets, i.e. sets equipped with a relation which is reflexive, antisymmetric and transitive. Later we will consider linear orders and give a characterization of those for which the set \(L_{(X,R)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete.
## 2. Basic examples
First we shall observe that projective class of \(L_{(X,R)}\) can not exceed \(\mathbf{\Sigma}^{1}_{1}\).
**Fact 2.1**.: _Assume that \(R\subseteq X\times X\) and \(|R|\leq\aleph_{0}\). Then the set \(L_{(X,R)}\) is analytic._
Proof.: Let us define
\[B_{(X,R)}=\{(k,y)\in\omega^{\omega}\times X^{\omega}:(\forall i\in\omega)(k_{i }<k_{i+1}\wedge y(k_{i})Ry(k_{i+1}))\}.\]
Notice that \(B_{(X,R)}\) is Borel. Indeed,
\[B_{(X,R)}= \bigcap_{i\in\omega}\{(k,y)\in\omega^{\omega}\times X^{\omega}:k_{ i}<k_{i+1}\wedge y(k_{i})Ry(k_{i+1})\}\] \[= \bigcap_{i\in\omega}\bigcup_{a\in\omega}\bigcup_{b>a}\left(\{k\in \omega^{\omega}:k_{i}=a,k_{i+1}=b\}\times\{y\in X^{\omega}:y(k_{i})Ry(k_{i+1}) \}\right)\] \[= \bigcap_{i\in\omega}\bigcup_{a\in\omega}\bigcup_{b>a}\left[\{k\in \omega^{\omega}:k_{i}=a,k_{i+1}=b\}\times\left(\bigcup_{(y_{1},y_{2})\in R}\{y \in X^{\omega}:y(a)=y_{1},y(b)=y_{2}\}\right)\right]\]
and \(R\) is countable. Clearly, \(L_{(X,R)}=\pi_{X^{\omega}}[B_{(X,R)}]\) is a projection of a Borel set. So \(L_{(X,R)}\) is analytic.
In the case when \(X\) is finite, every sequence of elements of \(X\) contains a constant subsequence. This observation gives us the following:
**Fact 2.2**.: _If \(X\) is finite and \(R\) is a reflexive relation on \(X\), then \(L_{(X,R)}=X^{\omega}\)._
**Fact 2.3**.: _For a countable set \(X\) define \(\Delta_{X}=\{(x,x):x\in X\}\). Then \(L_{(X,\Delta_{X})}\) is Borel._
Proof.: \[L_{(X,\Delta_{X})} =\{y\in X^{\omega}:(\exists k_{0}<k_{1}<\ldots)(\forall i\in\omega )(y(k_{i})=y(k_{i+1}))\}\] \[=\{y\in X^{\omega}:(\exists x\in X)(\exists k_{0}<k_{1}<\ldots)( \forall i\in\omega)(y(k_{i})=x)\}\] \[=\{y\in X^{\omega}:(\exists x\in X)(\forall n\in\omega)(\exists k >n)(y(k)=x)\},\]
what clearly gives us that \(L_{(X,\Delta_{X})}\) is \(G_{\delta\sigma}\).
**Question 1**.: _What is the precise complexity of \(L_{(X,\Delta_{X})}\)? Is it not \(F_{\sigma\delta}\)?_
Notice that for any poset \((X,\leq_{X})\) the above result shows that, in order to identify the projective class of \(L_{(X,\leq_{X})}\), we can focus on analyzing strictly increasing sequences.
\[L_{(X,\leq_{X})}=L_{(X,\Delta_{X})}\cup\{y\in X^{\omega}:(\exists k_{0}<k_{1}< \ldots)(\forall i\in\omega)(y(k_{i})<_{X}y(k_{i+1}))\}.\]
Now we can move to the classification of linear orders in this problem. Because a well ordering admits no infinite strictly decreasing sequences, every sequence of its elements contains a non-decreasing subsequence, so the fact below follows:
**Fact 2.4**.: _Assume that \(\leq_{X}\) is a well ordering on (countable) \(X\). Then \(L_{(X,\leq_{X})}=X^{\omega}\)._
Now let us consider the set of integers equipped with a standard order \(\leq\). It is probably one of the simplest linear orders which is not a well ordering.
**Fact 2.5**.: _The set \(L_{(\mathbb{Z},<)}\) is \(G_{\delta}\) and not \(F_{\sigma}\)._
Proof.: Observe that every strictly increasing sequence of integers is unbounded. So we can write
\[L_{(\mathbb{Z},<)}=\{y\in\mathbb{Z}^{\omega}:(\exists k_{0}<k_{1 }<\ldots)(\forall i\in\omega)(y(k_{i})<y(k_{i+1}))\}\] \[=\{y\in\mathbb{Z}^{\omega}:(\forall n\in\mathbb{Z})(\exists k\in \omega)(y(k)>n)\}\] \[=\bigcap_{n\in\mathbb{Z}}\bigcup_{k\in\omega}\{y\in\mathbb{Z}^{ \omega}:y(k)>n\}\] \[=\bigcap_{n\in\mathbb{Z}}\bigcup_{k\in\omega}\bigcup_{m>n}\{y\in \mathbb{Z}^{\omega}:y(k)=m\},\]
which is clearly a \(G_{\delta}\) set as \(\{y\in\mathbb{Z}^{\omega}:y(k)=m\}\) is clopen.
Now note that both \(L_{(\mathbb{Z},<)}\) and \(L_{(\mathbb{Z},<)}^{c}\) have empty interiors (since neither can contain any basic open set). Therefore \(L_{(\mathbb{Z},<)}^{c}\) is meager (as an \(F_{\sigma}\) set with empty interior). If \(L_{(\mathbb{Z},<)}\) were an \(F_{\sigma}\) set, it would also be meager, contradicting the Baire category theorem.
From the observation made after Fact 2.3 and the above fact we obtain the following corollary:
**Corollary 2.6**.: _The set \(L_{(\mathbb{Z},\leq)}\) is Borel._
## 3. Main results
One of the tools in recognizing \(\mathbf{\Sigma}^{1}_{1}\)-complete sets among the sets of the form \(L_{(X,R)}\) is the following observation.
**Theorem 3.1**.: _Suppose \((X,\leq_{X})\), \((Y,\leq_{Y})\) are posets and \(\varphi:X\to Y\) satisfies the following condition for every \((x_{n})_{n\in\omega}\in X^{\omega}\): \((x_{n})_{n\in\omega}\) contains \(\leq_{X}\)-increasing subsequence \(\Leftrightarrow(\varphi(x_{n}))_{n\in\omega}\) contains \(\leq_{Y}\)-increasing subsequence. If \(L_{(X,\leq_{X})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete, then \(L_{(Y,\leq_{Y})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete too._
Proof.: Let \(Z\) be a Polish space and \(A\subseteq Z\) be an analytic set. There is a Borel map \(f:Z\to X^{\omega}\) such that \(f^{-1}[L_{X}]=A\). We need a Borel map \(h:Z\to Y^{\omega}\) satisfying \(h^{-1}[L_{Y}]=A\).
Define \(g:X^{\omega}\to Y^{\omega}\) with formula
\[g(x)(n)=\varphi(x_{n}).\]
Clearly, \(g\) is continuous, so \(h=g\circ f\) is Borel. For the thesis it is sufficient to show that \(g^{-1}[L_{Y}]=L_{X}\).
\[x\in L_{(X,\leq_{X})} \Leftrightarrow x\text{ contains a $\leq_{X}$-increasing subsequence}\] \[\Leftrightarrow g(x)\text{ contains a $\leq_{Y}$-increasing subsequence}\] \[\Leftrightarrow g(x)\in L_{(Y,\leq_{Y})}\Leftrightarrow x\in g^{-1 }[L_{(Y,\leq_{Y})}]\]
**Corollary 3.2**.: _Assume that \(X\subseteq Y\), \(S\subseteq Y\times Y\), \(R=S\cap(X\times X)\) and \(L_{(X,R)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete. Then \(L_{(Y,S)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete, too._
Proof.: It is enough to take \(\varphi(x)=x\) in Theorem 3.1.
**Corollary 3.3**.: _Assume that \((X,\leq_{X})\) and \((Y,\leq_{Y})\) are isomorphic posets and \(L_{(X,\leq_{X})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete. Then \(L_{(Y,\leq_{Y})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete, too._
Proof.: To see this, put order isomorphism as \(\varphi\) in Theorem 3.1.
Let us now show an example of \(\mathbf{\Sigma}^{1}_{1}\)-complete set based on a space of finite sequences of naturals.
**Theorem 3.4**.: _The set \(L_{(\omega^{<\omega},\subseteq)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete._
Proof.: To prove that \(L_{(\omega^{<\omega},\subseteq)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete we will construct a continuous function \(f:\operatorname{Tr}_{\omega}\to(\omega^{<\omega})^{\omega}\) such that \(f^{-1}[L_{(\omega^{<\omega},\subseteq)}]=\operatorname{IF}_{\omega}\). First, fix an enumeration \(\{\sigma_{n}:n\in\omega\}\) of \(\omega^{<\omega}\) satisfying the following condition
\[\sigma_{n}\subseteq\sigma_{m}\Rightarrow n\leq m.\]
Now we can define the function \(f\):
\[f(T)(n)=\left\{\begin{array}{ll}\sigma_{n},&\sigma_{n}\in T\\ 1^{n}0,&\sigma_{n}\notin T\end{array}\right..\]
Clearly, if \(T\in\operatorname{IF}_{\omega}\), then \(f(T)\) contains \(\subseteq\)-increasing subsequence, hence \(f(T)\in L_{(\omega^{<\omega},\subseteq)}\). To prove the opposite implication, let \(a\in L_{(\omega^{<\omega},\subseteq)}\), \(a_{i_{0}}\subseteq a_{i_{1}}\subseteq a_{i_{2}}\subseteq\ldots\), \(i_{0}<i_{1}<i_{2}<\ldots\). Take any \(T\in f^{-1}(a)\). Notice that at most one of \(a_{i_{0}},a_{i_{1}},\ldots\) can be of the form \(1^{n}0\) for some \(n\in\omega\), so without loss of generality all of them are elements of \(T\) and form a strictly increasing sequence. But such a sequence of elements of \(T\) builds a branch in \(T\), so \(T\in\operatorname{IF}_{\omega}\).
**Theorem 3.5**.: _The set \(L_{(2^{<\omega},\subseteq)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete._
Proof.: We will use Theorems 3.1 and 3.4. First, define function \(f:\omega\to 2^{<\omega}\) with formula
\[f(n)=a_{0}a_{0}a_{1}a_{1}\ldots a_{m}a_{m},\]
where \(n=(a_{0}a_{1}\ldots a_{m})_{2}\), i.e. \(a_{0}a_{1}\ldots a_{m}\) is the binary representation of \(n\). Now consider a function \(\varphi:\omega^{<\omega}\to 2^{<\omega}\) defined as:
\[\varphi(b_{0}b_{1}\ldots b_{n})=f(b_{0})^{\frown}01^{\frown}f(b_{1})^{\frown}01^{\frown}\cdots^{\frown}f(b_{n})^{\frown}01.\]
\(\varphi\) and \(\varphi^{-1}\) are both increasing (with respect to the ordering defined by \(\subseteq\)), so \(\varphi\) fulfills the requirements of Theorem 3.1. Hence, the thesis holds.
On \(\omega^{<\omega}\) let us define an ordering \(\leq_{\mathrm{RL}}\) with the formula
\[x\!\leq_{\mathrm{RL}}y\iff(\exists n\in\omega)(x=y\upharpoonright n\vee(x\upharpoonright n= y\upharpoonright n\wedge x(n)>y(n))).\]
The relation \(\leq_{\mathrm{RL}}\) can be seen as the lexicographical order on \(\omega^{<\omega}\) with the modification that the order on \(\omega\) is reversed.
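For instance, writing finite sequences as tuples, the definition gives
\[(2)<_{\mathrm{RL}}(1)<_{\mathrm{RL}}(1,5)<_{\mathrm{RL}}(1,3)<_{\mathrm{RL}}(0),\]
since a proper initial segment is \(\leq_{\mathrm{RL}}\)-smaller, while a larger entry at the first difference also makes a sequence \(\leq_{\mathrm{RL}}\)-smaller.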
**Theorem 3.6**.: \(L_{(\omega^{<\omega},\leq_{\mathrm{RL}})}\) _is \(\mathbf{\Sigma}^{1}_{1}\)-complete._
Proof.: We will construct a continuous function \(f:\mathrm{Tr}_{\omega}\to(\omega^{<\omega})^{\omega}\) such that \(f^{-1}[L_{X}]=\mathrm{IF}_{\omega}\). First, fix an enumeration \(\{\sigma_{n}:n\in\omega\}\) of \(\omega^{<\omega}\) as in the proof of Theorem 3.4. Now we can define the function \(f\):
\[f(T)(n)=\left\{\begin{array}{ll}\sigma_{n},&\sigma_{n}\in T\\ 1^{n}0,&\sigma_{n}\notin T\end{array}\right..\]
Clearly, if \(T\in\mathrm{IF}_{\omega}\), then \(f(T)\) contains \(\leq_{\mathrm{RL}}\)-increasing subsequence, hence \(f(T)\in L_{X}\). Let \(a\in L_{X}\), \(a_{i_{0}}\!<_{\mathrm{RL}}a_{i_{1}}\!<_{\mathrm{RL}}a_{i_{2}}\!<_{\mathrm{RL}}\ldots\), \(i_{0}<i_{1}<i_{2}<\ldots\). Take any \(T\in f^{-1}(a)\). Notice that at most one of \(a_{i_{0}},a_{i_{1}},\ldots\) can be of the form \(1^{n}0\) for some \(n\in\omega\) (as \(0\!>_{\mathrm{RL}}10\!>_{\mathrm{RL}}110\!>_{\mathrm{RL}}\ldots\)), so without loss of generality all of them are elements of \(T\) and \(|a_{i_{0}}|>0\).
Since \(a_{i_{0}}\) is the \(\leq_{\mathrm{RL}}\)-smallest of \(a_{i_{0}},a_{i_{1}},\ldots\), it must be the case that \(a_{i_{0}}(0)\geq a_{i_{j}}(0)\) for all \(j\in\omega\). Therefore there are only finitely many possible values for \(a_{i_{j}}(0)\), so infinitely many of them start with the same number, say \(\tau(0)\). Analogously, from all \(a_{i_{j}}\) which start with \(\tau(0)\), infinitely many have the same number at position \(1\), say \(\tau(1)\). Continuing this way we obtain \(\tau\in\omega^{\omega}\) such that for every \(n\in\omega\) there is \(j\in\omega\) satisfying
\[\tau\upharpoonright n\preceq a_{i_{j}},\]
so (because \(T\) is a tree and \(a_{i_{j}}\in T\)) \(\tau\upharpoonright n\in T\). It follows that \(\tau\) is an infinite branch of \(T\).
Now let us focus on the rational numbers with the standard ordering. Notice that this poset can be seen as "the most complicated" among countable linear orderings, since it contains an isomorphic copy of every countable linear order. First, we shall see that \((\mathbb{Q},\leq)\) generates a \(\mathbf{\Sigma}^{1}_{1}\)-complete set, as opposed to the linear orderings investigated in Section 2.
**Theorem 3.7**.: _The set \(L_{(\mathbb{Q},\leq)}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete._
Proof.: Define a function \(\varphi:\omega^{<\omega}\to\mathbb{Q}\) with formula (\(\varphi(\varepsilon)=0\))
\[\varphi(a_{0}a_{1}a_{2}\ldots a_{n})=(0.\underbrace{00\ldots 0}_{a_{0}}1 \underbrace{00\ldots 0}_{a_{1}}1\underbrace{00\ldots 0}_{a_{2}}1\ldots \underbrace{00\ldots 0}_{a_{n}}1)_{2}.\]
Considering \(\leq_{\mathrm{RL}}\) on \(\omega^{<\omega}\), \(\varphi\) and \(\varphi^{-1}\) are clearly increasing. Therefore \(\varphi\) is an order isomorphism between \((\omega^{<\omega},\leq_{\mathrm{RL}})\) and \((\varphi(\omega^{<\omega}),\leq)\). Thus, the thesis follows from Corollaries 3.2 and 3.3.
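To illustrate the embedding used in the proof above: \(\varphi(3)=(0.0001)_{2}=\tfrac{1}{16}\), \(\varphi(2\,0)=(0.0011)_{2}=\tfrac{3}{16}\) and \(\varphi(2\,0\,1)=(0.001101)_{2}=\tfrac{13}{64}\), in agreement with \((3)<_{\mathrm{RL}}(2\,0)<_{\mathrm{RL}}(2\,0\,1)\).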
Next, we would like to characterize all linear orderings which yield a \(\mathbf{\Sigma}^{1}_{1}\)-complete set. The following theorem, as explained later, will serve as the main tool in this task.
**Theorem 3.8**.: _Suppose \(X\subseteq\mathbb{Q}\cap[0,1]\), \(\leq_{X}=\leq\cap(X\times X)\). Let \(\overline{X}\) be the closure of \(X\) in the Euclidean topology. We have two possible cases._
1. _If_ \(|\overline{X}|=\omega\)_, then_ \(L_{(X,\leq_{X})}\) _is Borel._
2. _If_ \(|\overline{X}|=\mathfrak{c}\)_, then_ \(X\) _contains_ \(\leq\)_-dense subset._
Proof.: Firstly, consider the case \(|\overline{X}|=\omega\). For \(g\in[0,1]\) define
\[L_{g}=\{y\in X^{\omega}:(\forall a\in X)((a<g)\to(\forall N\in\omega)(\exists n >N)(a<y_{n}\leq g))\}.\]
We want to show that
\[L_{(X,\leq_{X})}=\bigcup\{L_{g}:\:g\in\overline{X}\}.\]
Take any \(g\in[0,1]\) for which \(L_{g}\neq\emptyset\), and any \(y\in L_{g}\). Let \(N_{0}=\max\{n:y_{n}=g\}\) (if this maximum does not exist because \(y_{n}=g\) for infinitely many \(n\), then \(y\) contains a constant subsequence, hence \(y\in L_{(X,\leq_{X})}\)) and take \(k_{0}=N_{0}+1\).
From the definition of \(L_{g}\) there is \(k_{1}>k_{0}\) such that \(y_{k_{0}}<y_{k_{1}}<g\) (because \(y_{k_{0}}\in X\), \(y_{k_{0}}<g\)). Analogically we can find \(k_{2}>k_{1}\) satisfying \(y_{k_{1}}<y_{k_{2}}<g\). Continuing this way we obtain a sequence \(k_{0}<k_{1}<k_{2},\ldots\) defining an increasing subsequence \((y_{k_{i}})_{i\in\omega}\) of \(y\).
On the other hand, when \(y\in L_{(X,\leq_{X})}\), it contains a non-decreasing subsequence. But this subsequence is bounded (like the whole \(X\subseteq[0,1]\)), so it converges to some \(g\in\overline{X}\). Thus, \(y\in L_{g}\).
Note that, since \(X\) is countable, \(L_{g}\) is a Borel set for every \(g\in[0,1]\). So \(L_{(X,\leq_{X})}\), as a countable union of \(L_{g}\)'s, is also Borel.
Now let us focus on the second case, i.e. \(|\overline{X}|=\mathfrak{c}\). First, observe that if \([a,b]\subseteq\overline{X}\) for some \(a,b\in[0,1]\), \(a<b\), then \(X\cap[a,b]\) is \(\leq\)-dense. Thus, assume that \(\overline{X}\) does not contain an interval. There is a perfect nowhere dense set \(C\subseteq\overline{X}\). Without loss of generality we can presume that \(0,1\in C\) (otherwise we consider the interval \([a,b]\), where \(a=\min C\), \(b=\max C\)). We will represent \(C\) in a more convenient way. To do this we will inductively construct a family \(\{C_{\sigma}:\,\sigma\in 2^{<\omega}\}\) of closed intervals and a family \(\{U_{\sigma}:\,\sigma\in 2^{<\omega}\}\) of open intervals (similarly to the classical construction of the Cantor set).
We start with \(C_{\varepsilon}=[0,1]\). Since \(C\) is nowhere dense, we can take a maximal open interval \(U_{\varepsilon}=(a_{\varepsilon},b_{\varepsilon})\), \(U_{\varepsilon}\subseteq[0,1]\), disjoint with \(C\). Hence, \(C\subseteq[0,a_{\varepsilon}]\cup[b_{\varepsilon},1]\). Next we see that \(a_{\varepsilon}\neq 0\) (and \(b_{\varepsilon}\neq 1\)), because otherwise \(0\in C\) would be an isolated point of perfect set \(C\). Moreover, from maximality of \(U_{\varepsilon}\), \(a_{\varepsilon},b_{\varepsilon}\in C\). Let us denote \([0,a_{\varepsilon}]=C_{(0)}\), \([b_{\varepsilon},1]=C_{(1)}\), \(l_{(0)}=0\), \(p_{(0)}=a_{\varepsilon}\), \(l_{(1)}=b_{\varepsilon}\), \(p_{(1)}=1\).
Assume now that \(C_{\sigma}=[l_{\sigma},p_{\sigma}]\) has been already constructed for some \(\sigma\in 2^{<\omega}\). Analogically as in the previous point we choose a maximal open interval \(U_{\sigma}=(a_{\sigma},b_{\sigma})\subseteq[l_{\sigma},p_{\sigma}]\) disjoint with \(C\). We denote \(l_{\sigma^{\cdot}0}=l_{\sigma}\), \(p_{\sigma^{\cdot}0}=a_{\sigma}\), \(l_{\sigma^{\cdot}1}=b_{\sigma}\), \(p_{\sigma^{\cdot}1}=p_{\sigma}\).
Taking
\[C_{n}=\bigcup\{C_{\sigma}:\,\sigma\in 2^{<\omega},\ |\sigma|=n\}\]
it is clear that \(C=\bigcap_{n\in\omega}C_{n}\).
Therefore, if we put \(\mathcal{U}=\{U_{\sigma}:\sigma\in 2^{<\omega}\}\),
\[C=[0,1]\backslash\bigcup\mathcal{U}. \tag{1}\]
Furthermore
\[\{l_{\sigma}:\sigma\in 2^{<\omega}\}\cup\{p_{\sigma}:\sigma\in 2^{<\omega}\} \subseteq C. \tag{2}\]
We will now consider two possibilities. First, \(X\) contains dense-in-itself set and second, \(X\) does not contain dense-in-itself set.
In the first situation, \(X\) contains a dense-in-itself set. Without loss of generality \(X\) is dense-in-itself (otherwise we repeat above construction for closure of this dense-in-itself subset of \(X\)). \(\overline{X}\) is then a perfect set and does not contain an interval, thus is nowhere dense. Hence, we can put \(C=\overline{X}\). Consider a set
\[P=X\backslash\{p_{\sigma}:l_{\sigma+1}\in X\},\]
where \(\sigma+1\) is the successor of \(\sigma\in 2^{n}\) in the lexicographical order on \(2^{n}\) (in other words, \(\sigma+1\) is obtained by binary addition of \(1\) to \(\sigma\), and \(111\ldots 11+1\) does not exist).
\[a\leq p_{\sigma}<l_{\sigma+1}\leq b.\]
First, assume that \(a<p_{\sigma}\) and \(l_{\sigma+1}<b\). If \(p_{\sigma}\in X\) or \(l_{\sigma+1}\in X\), claim clearly holds. Otherwise \(p_{\sigma}\in\overline{X}\), so there is \(x\in X\) close to \(p_{\sigma}\). Therefore
\[a<x<b.\]
Second, presume that \(a=p_{\sigma}\). From definition of \(P\), \(l_{\sigma+1}<b\). Since \(X\) is dense-in-itself, there is \(x\in X\) satisfying \(l_{\sigma+1}<x<b\). If \(x\in P\), claim holds. If not, \(x=p_{\tau}<l_{\tau+1}\leq b\) for some \(\tau\in 2^{<\omega}\). Again, there is \(y\in X\) such that \(a<y<x\). When \(y\in P\), claim holds. Otherwise \(y=p_{\psi}\) and \(l_{\psi+1}\in P\) for some \(\psi\in 2^{<\omega}\). But then
\[a<p_{\psi}<\underbrace{l_{\psi+1}}_{\in P}<x<b.\]
The case \(a<p_{\sigma}\), \(l_{\sigma+1}=b\) is analogous to previous one.
Finally, consider a situation when \(X\) does not contain any dense-in-itself set. We start by proving that
\[\overline{X\backslash C}\supseteq C. \tag{3}\]
Suppose not, so there is \(z\in C\) such that \(z\notin\overline{X\backslash C}\). There exists an open interval \(U=(l,p)\ni z\) disjoint with \(X\backslash C\). As \(z\in C\subseteq\overline{X}\), \(X\cap U\neq\emptyset\) and \(X\cap U\subseteq C\cap U\). We claim that \(X\cap U\) is dense-in-itself. Take any \(x\in X\cap U\) and \(\varepsilon>0\). We want to find \(y\in(x-\varepsilon,x+\varepsilon)\cap U\cap X\). Since \(x\in C\cap U\), there exists \(c\in(x-\varepsilon,x+\varepsilon)\cap U\cap C\). Because \(c\in C\), we can find \(y\in X\) close to \(c\), in particular \(y\in(x-\varepsilon,x+\varepsilon)\cap U\cap X\). Therefore \(X\cap U\) is dense-in-itself, which contradicts the assumption that \(X\) does not contain such a set.
From equation (1) we see that
\[X\backslash C=X\cap\bigcup\mathcal{U}=\bigcup\{X\cap U_{\sigma}:U_{\sigma}\in \mathcal{U}\}.\]
Let \(Y\) be a selector of the family \(\{X\cap U_{\sigma}:U_{\sigma}\in\mathcal{U}\}\backslash\{\emptyset\}\). We claim that \(Y\) is \(\leq\)-dense. Take any \(a,b\in Y\), \(a<b\). Take \(\sigma,\psi\in 2^{<\omega}\), \(\sigma\neq\psi\), such that \(a\in U_{\sigma}\), \(b\in U_{\psi}\). There is \(\tau\in 2^{<\omega}\) satisfying \(C_{\tau}=[l_{\tau},p_{\tau}]\subseteq[l_{\sigma^{\frown}1},p_{\psi^{\frown}0}]\) and \(C_{\tau}\neq[l_{\sigma^{\frown}1},p_{\psi^{\frown}0}]\). Suppose that \(l_{\tau}\neq l_{\sigma^{\frown}1}\) (the case when \(p_{\tau}\neq p_{\psi^{\frown}0}\) is analogous). From (2) it follows that \(l_{\tau}\in C\), so (from (3)) \(l_{\tau}\in\overline{X\backslash C}\). Thus, there is a sequence from \(X\cap\bigcup\mathcal{U}\) convergent to \(l_{\tau}\). Hence, there is \(\phi\in 2^{<\omega}\) satisfying
\[U_{\phi}\subseteq[l_{\sigma^{\frown}1},p_{\psi^{\frown}0}],\qquad X\cap U_{\phi}\neq\emptyset.\]
Therefore there exists \(x\in Y\cap X\cap U_{\phi}\). Clearly, \(a<x<b\).
**Theorem 3.9**.: _Let \((X,\leq_{X})\) be a linear order. \(L_{(X,\leq_{X})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete if and only if \(X\) contains \(\leq_{X}\)-dense subset._
Proof.: First, note that every countable linear order can be embedded into \((\mathbb{Q}\cap[0,1],\leq)\). Therefore, we can assume without loss of generality that \(X\subseteq\mathbb{Q}\cap[0,1]\), \(\leq_{X}=\leq\).
Suppose that \(L_{(X,\leq_{X})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete. From Theorem 3.8, \(X\) contains a \(\leq\)-dense subset. On the other hand, if \(X\) contains a \(\leq\)-dense subset \(Y\), then \(Y\) is order-isomorphic to \(\mathbb{Q}\) (since \(\mathbb{Q}\) is, up to isomorphism, the only countable dense linear order). From Corollaries 3.2 and 3.3, \(L_{(X,\leq_{X})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete.
**Corollary 3.10**.: _Let \(\leq_{\rm lex}\) be the lexicographical order on \(2^{<\omega}\). Then \(L_{(2^{<\omega},\leq_{\rm lex})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete._
Proof.: Consider the set \(A=\{x^{\frown}1:x\in 2^{<\omega}\}\), i.e. the set of all sequences ending with \(1\). We claim that this set is \(\leq_{\rm lex}\)-dense. Take any \(\sigma^{\frown}1<_{\rm lex}\tau^{\frown}1\), \(\sigma,\tau\in 2^{<\omega}\).
If \(\sigma^{\frown}1\subseteq\tau^{\frown}1\), then
\[\sigma^{\frown}1<_{\rm lex}\sigma^{\frown}10^{|\tau|}1<_{\rm lex}\tau^{\frown}1.\]
Otherwise
\[\sigma^{\frown}1<_{\rm lex}\sigma^{\frown}11<_{\rm lex}\tau^{\frown}1.\]
Hence, by Theorem 3.9, \(L_{(2^{<\omega},\leq_{\rm lex})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete.
**Question 2**.: _What is the characterisation of countable posets \((X,\leq_{X})\) such that \(L_{(X,\leq_{X})}\) is \(\mathbf{\Sigma}^{1}_{1}\)-complete?_ |
2307.07328 | Boosting Backdoor Attack with A Learnable Poisoning Sample Selection
Strategy | Data-poisoning based backdoor attacks aim to insert backdoor into models by
manipulating training datasets without controlling the training process of the
target model. Existing attack methods mainly focus on designing triggers or
fusion strategies between triggers and benign samples. However, they often
randomly select samples to be poisoned, disregarding the varying importance of
each poisoning sample in terms of backdoor injection. A recent selection
strategy filters a fixed-size poisoning sample pool by recording forgetting
events, but it fails to consider the remaining samples outside the pool from a
global perspective. Moreover, computing forgetting events requires significant
additional computing resources. Therefore, how to efficiently and effectively
select poisoning samples from the entire dataset is an urgent problem in
backdoor attacks.To address it, firstly, we introduce a poisoning mask into the
regular backdoor training loss. We suppose that a backdoored model training
with hard poisoning samples has a more backdoor effect on easy ones, which can
be implemented by hindering the normal training process (\ie, maximizing loss
\wrt mask). To further integrate it with normal training process, we then
propose a learnable poisoning sample selection strategy to learn the mask
together with the model parameters through a min-max optimization.Specifically,
the outer loop aims to achieve the backdoor attack goal by minimizing the loss
based on the selected samples, while the inner loop selects hard poisoning
samples that impede this goal by maximizing the loss. After several rounds of
adversarial training, we finally select effective poisoning samples with high
contribution. Extensive experiments on benchmark datasets demonstrate the
effectiveness and efficiency of our approach in boosting backdoor attack
performance. | Zihao Zhu, Mingda Zhang, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu | 2023-07-14T13:12:21Z | http://arxiv.org/abs/2307.07328v1 | # Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy
###### Abstract
Data-poisoning based backdoor attacks aim to insert backdoor into models by manipulating training datasets without controlling the training process of the target model. Existing attack methods mainly focus on designing triggers or fusion strategies between triggers and benign samples. However, they often randomly select samples to be poisoned, disregarding the varying importance of each poisoning sample in terms of backdoor injection. A recent selection strategy filters a fixed-size poisoning sample pool by recording forgetting events, but it fails to consider the remaining samples outside the pool from a global perspective. Moreover, computing forgetting events requires significant additional computing resources. Therefore, how to efficiently and effectively select poisoning samples from the entire dataset is an urgent problem in backdoor attacks. To address it, firstly, we introduce a poisoning mask into the regular backdoor training loss. We suppose that a backdoored model training with hard poisoning samples has a more backdoor effect on easy ones, which can be implemented by hindering the normal training process (\(i.e.\), maximizing loss \(w.r.t.\) mask). To further integrate it with normal training process, we then propose a learnable poisoning sample selection strategy to learn the mask together with the model parameters through a min-max optimization. Specifically, the outer loop aims to achieve the backdoor attack goal by minimizing the loss based on the selected samples, while the inner loop selects hard poisoning samples that impede this goal by maximizing the loss. After several rounds of adversarial training, we finally select effective poisoning samples with high contribution. Extensive experiments on benchmark datasets demonstrate the effectiveness and efficiency of our approach in boosting backdoor attack performance.
## 1 Introduction
Training large-scale deep neural networks (DNNs) often requires massive training data. Considering the high cost of collecting or labeling massive training data, users may resort to downloading publicly free data from an open-sourced repository or buying data from a third-party data supplier. However, these unverified data may expose the model training to a serious threat of data-poisoning based backdoor attacks. Through manipulating a few training samples, the adversary could insert the malicious backdoor into the trained model, which performs well on benign samples, but will predict any poisoned sample with trigger as the target class.
Several seminal backdoor attack methods (\(e.g.\) BadNets [13], Blended [7], SSBA [28], SIG [1], TrojanNN [30]\(etc.\)) have shown good attack performance (\(i.e.\), high attack success rate while keeping high clean accuracy) on mainstream DNNs. Most of these attack methods focus on designing diverse triggers (\(e.g.\) patch trigger [13], or signal trigger [1]), or the fusion strategy of inserting the trigger into the benign sample (\(e.g.\) alpha-blending adopted in Blended [7], digital steganography adopted in SSBA [28]), to make the poisoned samples stealthy and effective. However, it is often assumed that a few benign samples are randomly selected from the benign training dataset to generate poisoned samples. Some recent works [22; 17; 35] suggest that not all data are equally useful for training DNNs -- some have greater importance for the task at hand or are more rich in informative content than others. Several selection strategies, such as uncertainty-based [8], influence function [22], forgetting events [40], have been proposed to mine important samples for coreset selection [2; 20; 19; 31; 21], data valuation [50; 34; 16] and active learning [38; 18; 5].
It inspires us to explore whether the backdoor performance could be boosted if the samples to be poisoned are selected according to some strategies rather than randomly, especially depending on the trigger and benign data. This underappreciated problem has rarely been studied in the backdoor learning community, and there is only one attempt [49] try to solve it. A filtering-and-updating strategy (FUS) [49] has been proposed to filter poisoning samples within a fixed-size sample pool based on forgetting events [40], while disregarding the remaining samples beyond the pool, which is a local perspective. Besides, computing forgetting events for each updating step requires the same number of epochs as the full training process, resulting in a significant increase in computational cost, which is impractical in real-world scenarios. Hence, how to efficiently and effectively select samples to be poisoned with a global perspective from the entire dataset, while maintaining general applicability to diverse backdoor attacks is still an urgent problem to be solved.
To address the aforementioned issue, we propose a **Le**arnable **P**oisoning sample **S**election strategy (LPS) that depends on triggers, poisoned fusion strategies, and benign data. The key idea behind it is that if we can successfully implant the backdoor into the model through hard poisoning samples, the backdoor behavior can be effectively generalized to other easier samples at the inference stage. A learnable binary poisoning mask \(m\) is first introduced into the regular backdoor training loss (Eq. (2)). Then finding hard samples can intuitively be obtained by hindering backdoor training process (\(i.e.\), maximize loss \(w.r.t.\)\(m\)). In order to further fuse it with normal backdoor training, we consequently formulate the poisoning sample selection as a min-max optimization via an adversarial process. During the min-max two-player game, the inner loop optimizes the mask to identify hard poisoning sample, while the outer loop optimizes the model's parameters to train a backdoored model based on the selected samples. By adversarially training the min-max problem over multiple rounds, we finally obtain the high-contributed poisoning samples that serve the malicious backdoor objective. The proposed LPS strategy can be naturally adopted in any off-the-shelf data-poisoning based backdoor attacks. Extensive evaluations with state-of-the-art backdoor attacks are conducted on benchmark datasets. The results demonstrate the superiority of our LPS strategy over both the random selection and the FUS strategy [49], while resulting in significant computational savings.
The main contributions of this work are three-fold. **1)** We propose a general backdoor training loss that incorporates a binary poisoning mask. **2)** We propose a learnable poisoning sample selection strategy by formulating it as a min-max optimization problem. **3)** We provide extensive experiments to verify the effectiveness of the proposed selection strategy on significantly boosting existing data-poisoning backdoor attacks.
## 2 Related work
**Backdoor attack.** According to the threat model, existing backdoor attacks can be partitioned into two categories: _data-poisoning based_[13; 7; 28; 33; 1; 30; 43] and _training-controllable based_[32; 9; 10; 44]. In this work, we focus on the former threat model, where the adversary can only manipulate the training dataset and the training process is inaccessible. Thus, here we mainly review the related data-poisoning based attacks, and we refer readers to recent surveys [48; 27; 47] for a detailed introduction to training-controllable attacks. BadNets [13] was the first attempt to stamp a patch on the benign image as the poisoned image, revealing the existence of backdoor in deep learning. Blended [7] used the alpha blending strategy to make the trigger invisible to evade human inspection. SIG [1] generated a ramp or triangle signal as the trigger. TrojanNN attack [30] optimized the trigger by maximizing its activation on selected neurons related. SSBA [28] adopted a digital stenography to
encode a specific string into images by an autoencoder to generate sample-specific triggers. Subsequently, more stealthy and effective attacks [52; 55; 37; 42; 39; 32; 11] have been successively proposed. Meanwhile, some defense methods [41; 15; 45; 4; 6; 12; 54; 53; 57] have been proposed as shields to resist attacks. The commonality of the above attacks is that they focused on designing triggers or the fusion strategy, while overlooking how to select benign samples to generate poisoned samples, and simply adopted the random selection strategy. Instead, we aim to boost existing data-poisoning backdoor attacks through a learnable poisoning sample selection strategy depending on the trigger and benign data.
**Poisoning sample selection in backdoor attack.** To the best of our knowledge, there is only one work [49] focusing on poisoning sample selection for backdoor attack. A filtering-and-updating strategy (FUS) has been proposed in [49] to iteratively filter and update a sample pool. The filtering step filters easily forgotten poisoning samples based on forgetting events [40], which are recorded over the same number of epochs as the full training process. Afterwards, some new poisoned samples are sampled randomly from the candidate set to update the pool. The above two steps are iterated several times to find a suitable solution. As the pioneering work, FUS shows good improvement in backdoor effect compared to the random selection strategy. However, FUS requires tens of times more computing resources, which is not acceptable in practice.
## 3 Preliminary
**Threat model.** We consider the threat model that the adversary can only manipulate the training dataset with the training process inaccessible, dubbed _data-poisoning based backdoor attack_. It applies to the scenario in which the user trains a neural network based on an unverified dataset.
**General procedure of data-poisoning based backdoor attacks.** Here we describe the general procedure of data-poisoning based backdoor attacks. As shown in Fig. 1, it consists of 5 steps:
**Design trigger (by adversary).** The first step of backdoor attack is to design a trigger \(\epsilon\), of which the format could be diverse in different applications, such as one image with particular textures in computer vision tasks, as shown in the right part of Fig. 1.
**Select samples to be poisoned (by adversary).** Let \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{|\mathcal{D}|}\) denote the original benign training dataset that contains \(|\mathcal{D}|\)_i.i.d._ samples, where \(\mathbf{x}_{i}\in\mathcal{X}\) denotes the input feature, \(y_{i}\in\mathcal{Y}=\{1,\ldots,K\}\) is the ground-truth label of \(\mathbf{x}_{i}\). There are \(K\) candidate classes, and the size of class \(k\) is denoted as \(n_{k}\). For clarity, we assume that all training samples are ranked following the class indices, _i.e._, (samples of class \(1\)), (samples of class \(2\)), \(\ldots,(\)samples of class \(K\)). To ensure stealthiness and avoid harm to clean accuracy, the adversary often selects a small fraction of benign samples to be poisoned. Here we define a binary vector \(\mathbf{m}=\big{[}m_{1},m_{2},\ldots,m_{|\mathcal{D}|}\big{]}\in\{0,1\}^{| \mathcal{D}|}\) to represent the poisoning mask, where \(m_{i}=1\) indicates that \(\mathbf{x}_{i}\) is selected to be poisoned and \(m_{i}=0\) means not selected. We denote \(\alpha:=\sum_{i=1}^{|\mathcal{D}|}m_{i}/|\mathcal{D}|\) as the poisoning ratio. Note that most existing backdoor attack methods randomly select \(\alpha\cdot|\mathcal{D}|\) samples to be poisoned.
Figure 1: The general procedure of data-poisoning based backdoor attack and examples of representative triggers.
** Generate poisoned samples (by adversary).** Given the trigger \(\mathbf{\epsilon}\) and the selected sample \(\mathbf{x}_{i}\) (\(i.e.\), \(m_{i}=1\)), the adversary will design some strategies to fuse \(\mathbf{\epsilon}\) into \(\mathbf{x}_{i}\) to generate the poisoned sample \(\tilde{\mathbf{x}}_{i}\), \(i.e.\), \(\tilde{\mathbf{x}}_{i}=g(\mathbf{x}_{i},\mathbf{\epsilon})\), with \(g(\cdot,\cdot)\) denoting the fusion operator (\(e.g.\) the alpha-blending used in Blended [7]). Besides, the adversary has authority to change the original ground-truth label \(y_{i}\) to the target label \(\tilde{y}_{i}\). If target labels remain the same for all poisoning samples (\(i.e.\), \(\tilde{y}_{i}=y_{t}\)), it is called _all-to-one_ attack. If target labels have differnt types (\(e.g.\), \(\tilde{y}_{i}=y_{i}+1\)), it is called _all-to-all_ attack. If adversary does not change the ground-truth label (\(i.e.\), \(\tilde{y}_{i}=y_{i}\)), it is called _clean label_ attack. Thus, the generated poisoned training dataset could be denoted as \(\tilde{\mathcal{D}}=\{(\mathbf{x}_{i},y_{i})|_{\text{if }m_{i}=0},\text{ or }(\tilde{\mathbf{x}}_{i},\tilde{y}_{i})|_{\text{if }m_{i}=1}\}_{i=1}^{|\mathcal{D}|}\).
**Train the target model (by user).** Given the poisoned training dataset \(\tilde{\mathcal{D}}\), the user trains the target model \(f_{\mathbf{\theta}_{t}}\) by minimizing the following loss function:
\[\mathcal{L}(\mathbf{\theta}_{t};\tilde{\mathcal{D}})=\frac{1}{| \tilde{\mathcal{D}}|}\sum_{(\mathbf{x},y)\in\tilde{\mathcal{D}}}\ell(f_{\mathbf{ \theta}_{t}}(\mathbf{x}),y)) \tag{1}\] \[\equiv \mathcal{L}(\mathbf{\theta}_{t};\mathcal{D},\mathbf{m},\mathbf{\epsilon},g)= \frac{1}{|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|}\Big{[}(1-m_{i})\cdot\ell(f_ {\mathbf{\theta}_{t}}(\mathbf{x}_{i}),y_{i}))+m_{i}\cdot\ell(f_{\mathbf{\theta}_{t}}(\tilde {\mathbf{x}}_{i}),y_{t})\Big{]}, \tag{2}\]
where \(\ell(\cdot,\cdot)\) is the loss function for an individual sample, such as the cross-entropy loss. In Eq. (2), we extend Eq. (1) by introducing the binary poisoning mask \(\mathbf{m}\) described in step 2.
**C Activate the backdoor using the trigger during the inference stage (by the adversary)** Given the trained model \(f_{\mathbf{\theta}_{t}}\), the adversary expects to activate the injected backdoor using the trigger \(\mathbf{\epsilon}\), \(i.e.\), fooling \(f_{\mathbf{\theta}_{t}}\) to predict any poisoned sample \(g(\mathbf{x}_{i},\mathbf{\epsilon})\) as the target label \(\tilde{y}_{i}\).
Most backdoor attack methods concentrate on designing diverse triggers (\(i.e.\), step 1) or the fusion strategy (\(i.e.\), step 3). These attacks typically randomly select samples for poisoning (\(i.e.\), step 2), neglecting the unequal influence of each poisoning samples to the backdoor injection. Recent FUS strategy [49], as shown in Fig. 2, filters unimportant poisoning samples in a pool based on forgetting events [40], while ignoring the rest of the samples outside the pool, which is a local perspective. Besides, since the characteristics of poisoning samples vary from different attacks, the selected samples that succeed in one attack may not be effective in others. Therefore, it is a challenging task to develop a poisoning sample selection strategy that can select poisoning samples from the entire dataset and be generally applicable to various backdoor attacks.
## 4 Methodology: learnable poisoning sample selection strategy
This work aims to design a novel sample selection strategy to enhance the impact of a backdoor in the trained target model, denoted as \(f_{\mathbf{\theta}t}\). As the target model \(f\mathbf{\theta}t\) is agnostic to adversaries, we adopt a surrogate model \(f\mathbf{\theta}_{s}\) as an alternative. In order to select poisoning samples from the entire dataset with a global perspective, we opt to directly generate the poisoning mask \(\mathbf{m}\) in step 2. We suppose that if backdoor can been implanted into the model through training with _hard_ poisoning samples, the backdoor can be generally activated by other _easy_ samples during the inference stage. To achieve this, an intuitive way is to hinder the normal backdoor training from an opposite direction, \(i.e.\), maximize the loss in Eq. (2) given the surrogate model. To combine it with the normal training process (\(i.e.\), minimize Eq. (2)), we propose a **L**earnable **P**oisoning sample **S**election (LPS) strategy to learn the poisoning mask \(\mathbf{m}\) along with the surrogate model's parameters \(\mathbf{\theta}_{s}\) through a min-max optimization:
\[\min_{\mathbf{\theta}_{s}}\max_{\mathbf{m}\in\{0,1\}^{|\mathcal{D}|}}\Big{\{}\mathcal{ L}(\mathbf{\theta}_{s},\mathbf{m};\mathcal{D},\mathbf{\epsilon},g)\quad\text{s.t.}\;\mathbf{H}\mathbf{m}= \tilde{\alpha}\cdot\mathbf{\mu}\Big{\}}, \tag{3}\]
Figure 2: Different poisoning sample selection strategies.
where \(\mathcal{L}\) is the extended loss including the poisoning mask, as defined in Eq. (2). \(\mathbf{H}\in\{0,1\}^{K\times|\mathcal{D}|}\) is defined as: in the \(k\)-th row, the entries \(\mathbf{H}(k,\sum_{j=1}^{k-1}n_{j}+1:\sum_{j=1}^{k}n_{j})=1\), while other entries are 0. \(\tilde{\alpha}=\frac{\alpha\cdot|\mathcal{D}|}{\sum_{k\neq y_{t}}n_{k}}\) and \(\tilde{\alpha}n_{k}\) is an integer for all \(k\). \(\mathbf{\mu}=[\mu_{1};\mu_{2};\ldots;\mu_{K}]\in\mathbb{N}^{K}\) is defined as: if \(k\neq y_{t}\), then \(\mu_{k}=n_{k}\), otherwise \(\mu_{k}=0\). This equation captures three constraints, including: **1)**\(\alpha\cdot|\mathcal{D}|\) samples are selected to be poisoned; **2)** the target class samples cannot be selected to be poisoned; **3)** each non-target class has the same selected ratio \(\tilde{\alpha}\) to encourage the diversity of selected samples. Note that here we only consider the setting of _all-to-one_ attack, but the constraint can be flexibly adjusted for _all-to-all_ and _clean label_ settings.
**Remark.** This min-max objective function (3) is designed for finding hard poisoning samples with high-contribution for backdoor injection via an adversarial process. Specifically, the inner loop encourages to select hard samples for the given model's parameters \(\mathbf{\theta}_{s}\) by maximizing the loss \(w.r.t.\)\(\mathbf{m}\), while the outer loop aims to update \(\mathbf{\theta}_{s}\) by minimizing the loss \(w.r.t.\)\(f_{\mathbf{\theta}_{s}}\) to ensure that a good back-doored model can be still learned, even based on the hard poisoning mask \(\mathbf{m}\). Thus, the two-player game between \(\mathbf{m}\) and \(\mathbf{\theta}_{s}\) is expected to encourage the selected samples to bring in good backdoor effect, while avoiding over-fitting to the surrogate model \(f_{\mathbf{\theta}_{s}}\).
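For concreteness, the masked objective \(\mathcal{L}(\mathbf{\theta},\mathbf{m};\mathcal{D},\mathbf{\epsilon},g)\) of Eq. (2), which both players in Eq. (3) act on, can be written as the following PyTorch-style sketch on one batch. The fusion operator `g`, the trigger, and all other names are placeholders standing in for whichever attack is plugged in; this is an illustration under those assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def backdoor_loss(model, x, y, m, trigger, g, y_target):
    """Masked backdoor training loss of Eq. (2), evaluated on one batch.

    x        : benign inputs, shape (B, ...)
    y        : ground-truth labels, shape (B,)
    m        : binary poisoning mask for the batch, shape (B,), 1 = poisoned
    trigger  : the trigger epsilon (format depends on the attack)
    g        : fusion operator, g(x, trigger) -> poisoned inputs
    y_target : target class y_t (all-to-one setting)
    """
    m = m.float()
    clean = F.cross_entropy(model(x), y, reduction="none")          # l(f(x_i), y_i)
    poisoned = F.cross_entropy(model(g(x, trigger)),                # l(f(x~_i), y_t)
                               torch.full_like(y, y_target),
                               reduction="none")
    return ((1.0 - m) * clean + m * poisoned).mean()
```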
**Optimization.** As summarized in Algorithm 1, the min-max optimization (3) could be efficiently solved by alternatively updating \(\mathbf{m}\) and \(\mathbf{\theta}_{s}\) as follows:
\(\blacklozenge\)**Outer minimization**: given \(\mathbf{m}\), \(\mathbf{\theta}_{s}\) could be updated by solving the following sub-problem:
\[\mathbf{\theta}_{s}\in\arg\min_{\mathbf{\theta}_{s}}\ \mathcal{L}(\mathbf{\theta}_{s};\mathbf{ m},\mathcal{D},\mathbf{\epsilon},g). \tag{4}\]
It could be optimized by the standard back-propagation method with stochastic gradient descent (SGD) [3]. Here we update \(\mathbf{\theta}_{s}\) for one epoch in each iteration.
\(\blacklozenge\)**Inner maximization**: given \(\mathbf{\theta}_{s}\), \(\mathbf{m}\) could be achieved by solving the maximization problem as:
\[\mathbf{m}\in\arg\max_{\mathbf{m}\in\{0,1\}^{|\mathcal{D}|}}\Big{\{} \mathcal{L}(\mathbf{m};\mathbf{\theta}_{s},\mathcal{D},\mathbf{\epsilon},g),\ \mathrm{s.t.}\ \mathbf{H}\mathbf{m}= \tilde{\alpha}\cdot\mathbf{\mu}\Big{\}}. \tag{5}\]
Although it is a constrained binary optimization problem, it is easy to obtain the optimal solution. Specifically, given the hard constraint \(\mathbf{H}\mathbf{m}=\tilde{\alpha}\cdot\mathbf{\mu}\), the above problem could be separated into \(K\) independent sub-problems, \(i.e.\),
\[\max_{\mathbf{m}_{k}\in\{0,1\}^{n_{k}}}\frac{1}{|\mathcal{D}|}\left\{ \sum_{i=1}^{|\mathcal{D}|}\mathbb{I}(y_{i}=k)\cdot m_{i}\cdot\big{[}\ell(f_{ \mathbf{\theta}_{s}}(\tilde{\mathbf{x}}_{i}),y_{t})\!-\!\ell(f_{\mathbf{\theta}_{s}}(\bm {x}_{i}),y_{i})\big{]},\ \mathrm{s.t.}\ \mathbf{1}_{n_{k}}^{\top}\mathbf{m}_{k}\!=\! \tilde{\alpha}\cdot n_{k}\right\}, \tag{6}\]
for \(\forall k\in\{1,2,\ldots,K\}\) except \(k=y_{t}\). \(\mathbf{m}_{k}\) denotes the sub-mask vector of \(\mathbf{m}\) corresponding to samples of class \(k\), and \(\mathbb{I}(a)=1\) if \(a\) is true, otherwise 0. Note that some constant terms \(w.r.t.\)\(\mathbf{m}_{k}\) have been abandoned in the above sub-problem. And, since it is constrained that only non-target class samples can be selected, \(\mathbf{m}_{y_{t}}\) is always a zero vector. It is easy to obtain the optimal solution by firstly calculating \(\ell(f_{\mathbf{\theta}_{s}}(\tilde{\mathbf{x}}_{i}),y_{t})-\ell(f_{\mathbf{\theta}_{s}}( \mathbf{x}_{i}),y_{i})\) for all samples satisfying \(\mathbb{I}(y_{i}=k)=1\) and ranking them in descending order, then picking the top-(\(\tilde{\alpha}\cdot n_{k})\) indices to set the corresponding \(m_{i}\) as 1, while others as 0.
```
0: Benign training dataset \(\mathcal{D}\), architecture of the surrogate model \(f_{\mathbf{\theta}_{s}}\), maximal iterations \(T\), poisoning ratio \(\alpha\), trigger \(\mathbf{\epsilon}\), fusion operator \(g\)
0: poisoning mask \(\mathbf{m}\)
1: Randomly initialize \(\mathbf{m}^{(0)}\), \(\mathbf{\theta}_{s}^{(0)}\)
2:for each iteration \(t=0\) to \(T-1\)do
3:\(\triangleright\) Given \(\mathbf{m}^{(t)}\), update \(\mathbf{\theta}_{s}^{(t+1)}\) by solving outer sub-problem in Eq. (4).
4:\(\triangleright\) Given \(\mathbf{\theta}_{s}^{(t+1)}\), update \(\mathbf{m}^{(t+1)}\) by solving inner sub-problem in Eq. (5).
5:endfor
6:return \(\mathbf{m}^{(T)}\)
```
**Algorithm 1** LPS strategy via min-max optimization
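As an illustration of line 4 of Algorithm 1, the closed-form per-class selection described after Eq. (6) can be sketched as follows; the per-sample losses are assumed to be precomputed with the current surrogate model, and the variable names are ours.

```python
import numpy as np

def select_poisoning_mask(loss_poison, loss_clean, labels, target_class, ratio):
    """Closed-form solution of the inner maximization (Eq. (6)).

    loss_poison  : (N,) array of l(f_theta(x~_i), y_t) for every training sample
    loss_clean   : (N,) array of l(f_theta(x_i), y_i)
    labels       : (N,) integer array of ground-truth classes y_i
    target_class : attack target class y_t (never selected, by the constraint)
    ratio        : per-class selection ratio alpha~
    Returns a 0/1 mask m of shape (N,).
    """
    score = np.asarray(loss_poison) - np.asarray(loss_clean)   # larger = harder poisoning sample
    labels = np.asarray(labels)
    mask = np.zeros(labels.shape[0], dtype=int)
    for k in np.unique(labels):
        if k == target_class:
            continue                                            # target class is never poisoned
        idx = np.where(labels == k)[0]
        n_select = int(round(ratio * len(idx)))                 # alpha~ * n_k samples of class k
        chosen = idx[np.argsort(score[idx])[::-1][:n_select]]   # top-(alpha~ n_k) scores
        mask[chosen] = 1
    return mask
```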
## 5 Experiments
### Experimental settings
**Implementation details.** For the training of both surrogate models and target models, we adopt the SGD optimizer with weight decay \(5e{-4}\), batch size 128, and an initial learning rate of 0.01, which is reduced by a factor of 10 after 35 and 55 epochs, respectively. The number of training epochs for target models is 100. The maximal iteration \(T\) is set as \(15\). All experiments are conducted on NVIDIA GTX 3090 GPUs.
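In PyTorch terms, these optimization settings correspond roughly to the configuration below; momentum is not specified in the text, so it is left at the library default, and torchvision's ResNet-34 merely stands in for the target architecture.

```python
import torch
import torchvision

model = torchvision.models.resnet34(num_classes=10)       # stand-in target model (e.g., CIFAR-10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=5e-4)
# learning rate "reduced by a factor of 10 after 35 and 55 epochs"
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[35, 55], gamma=0.1)
```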
**Datasets and models.** We evaluate on three commonly used benchmark datasets: CIFAR-10 [23], CIFAR-100 [23] and Tiny-ImageNet [24]. The surrogate model and target model are ResNet-18[14] and ResNet-34, respectively.
**Baselines of poisoning sample selection.** We compare our proposed LPS strategy with two existing poisoning sample selection strategies: _Random_ and _FUS_[49]. Random strategy selects benign samples following a uniform distribution. FUS [49] selects samples according to the sample importance measured by forgetting events2. Following the original setting in [49], we set \(10\) overall iterations and \(60\) epochs for updating the surrogate model in each iteration.
Footnote 2: Note that in the experiments reported in [49], FUS appended the generated poisoned samples onto the original benign dataset, rather than replacing the selected benign samples, leading to \(|\tilde{\mathcal{D}}|\geq|\mathcal{D}|\). To ensure fair comparison, we change it to the traditional setting in existing attacks that the selected benign samples to be poisoned are replaced by the generated samples, thus \(|\tilde{\mathcal{D}}|=|\mathcal{D}|\).
**Backdoor attacks.** We consider 5 representative backdoor attacks: 1) visible triggers: BadNets [13], Blended [7], and SIG [1]; 2) optimized triggers: Trojan-Watermark (Trojan-WM) [30]; 3) sample-specific triggers: SSBA [28]. In addition, we consider 3 poisoning label types: all-to-one, all-to-all, and clean label. We visualize the different triggers on the same benign image in Fig. 1. The detailed settings of each attack can be found in the **supplement materials**.
**Backdoor defenses.** We select 6 representative backdoor defenses to evaluate the resistance of the above attack methods with different poisoning sample selection strategies, including Fine-Tuning (FT), Fine-Pruning (FP) [29], Anti-Backdoor Learning (ABL) [26], Channel Lipschitzness Pruning (CLP) [56], Neural Attention Distillation (NAD) [25], and Implicit Backdoor Adversarial Unlearning (I-BAU) [51]. The detailed settings of each defense can be found in the **supplement materials**.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{8}{c}{**Dataset: CIFAR-10**} & **Surrogate: ResNet-18 \(\Longrightarrow\) Target: ResNet-34**} \\ \hline Attack & Prato (\#Img/Cls) & 0.054\% (\#3) & 0.108\% (\#6) & 0.216\% (\#12) & 0.432\% (\#24) & 0.864\% (\#48) \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Random & **0.86**\(\pm\) 0.09 & 1.71 \(\pm\) 0.48 & 62.57 \(\pm\) 5.15 & 81.71 \(\pm\) 1.51 & 89.21 \(\pm\) 1.05 \\ & FUS [40] & 0.75 \(\pm\) 0.08 & 1.37 \(\pm\) 0.22 & 64.67 \(\pm\) 5.58 & 83.41 \(\pm\) 2.09 & 90.05 \(\pm\) 0.34 \\ & LPS (Ours) & 0.77 \(\pm\) 0.04 & **5.70**\(\pm\) 1.77 & **76.41**\(\pm\) 5.03 & **85.77**\(\pm\) 5.43 & **91.62**\(\pm\) 1.25 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Random & 0.69 \(\pm\) 0.05 & 0.73 \(\pm\) 0.06 & 0.93 \(\pm\) 0.22 & 39.91 \(\pm\) 14.35 & 75.54 \(\pm\) 2.84 \\ & FUS [40] & **0.72**\(\pm\) 0.01 & 0.75 \(\pm\) 0.02 & 1.03 \(\pm\) 0.13 & 33.37 \(\pm\) 2.60 & 76.76 \(\pm\) 0.24 \\ & LPS (Ours) & 0.70 \(\pm\) 0.10 & **0.76**\(\pm\) 0.05 & **36.64**\(\pm\) 2.71 & **66.95**\(\pm\) 1.02 & **80.18**\(\pm\) 2.08 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Random & 8.87 \(\pm\) 2.75 & 23.69 \(\pm\) 3.09 & 50.65 \(\pm\) 4.05 & 75.67 \(\pm\) 1.51 & 89.47 \(\pm\) 0.93 \\ & FUS [40] & **10.51**\(\pm\) 2.01 & 22.29 \(\pm\) 0.81 & 51.13 \(\pm\) 2.83 & 80.46 \(\pm\) 1.18 & 92.11 \(\pm\) 0.79 \\ & LPS (Ours) & 0.99 \(\pm\) 3.54 & **29.84**\(\pm\) 3.36 & **64.6**\(\pm\) 4.51 & **87.16**\(\pm\) 0.84 & **97.53**\(\pm\) 0.19 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Random & 2.48 \(\pm\) 0.11 & 4.05 \(\pm\) 0.66 & 9.32 \(\pm\) 1.25 & 37.33 \(\pm\) 5.01 & 67.54 \(\pm\) 1.12 \\ & FUS [40] & 2.40 \(\pm\) 0.16 & 4.17 \(\pm\) 0.60 & 6.67 \(\pm\) 0.49 & 29.54 \(\pm\) 2.34 & 64.90 \(\pm\) 2.02 \\ & LPS (Ours) & **3.35**\(\pm\) 0.45 & **7.37**\(\pm\) 0.78 & **34.6**\(\pm\) 2.34 & **60.12**\(\pm\) 1.60 & **72.92**\(\pm\) 1.00 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Random & 3.48 \(\pm\) 0.74 & 6.16 \(\pm\) 1.74 & 11.98 \(\pm\) 0.75 & 18.72 \(\pm\) 3.18 & 36.46 \(\pm\) 5.34 \\ & FUS [40] & 3.30 \(\pm\) 0.59 & 8.67 \(\pm\) 2.10 & 16.06 \(\pm\) 3.16 & 28.50 \(\pm\) 1.14 & 46.99 \(\pm\) 8.77 \\ & LPS (Ours) & **11.38**\(\pm\) 1.50 & **19.09**\(\pm\) 1.166 & **32.67**\(\pm\) 3.06 & **51.32**\(\pm\) 4.17 & **65.77**\(\pm\) 5.80 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Random & 1.01 \(\pm\) 0.12 & **1.05**\(\pm\) 0.04 & 2.06 \(\pm\) 0.15 & 20.34 \(\pm\) 5.88 & 60.36 \(\pm\) 2.42 \\ & FUS [40] & **1.10**\(\pm\) 0.16 & 1.04 \(\pm\) 0.28 & 2.02 \(\pm\) 0.45 & 16.81 \(\pm\) 3.47 & 60.64 \(\pm\) 3.29 \\ & LPS (Ours) & 0.98 \(\pm\) 0.17 & 1.03 \(\pm\) 0.05 & **2.30**\(\pm\) 0.51 & **22.92**\(\pm\) 2.74 & **64.39**\(\pm\) 2.96 \\ \hline \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & Random & 3.39 \(\pm\) 1.37 & 23.26 \(\pm\) 11.74 & 80.04 \(\pm\) 7.19 & 94.69 \(\pm\) 1.92 & 98.27 \(\pm\) 0.34 \\ & FUS [40] & 3.07 \(\pm\) 1.62 & 21.22 \(\pm\) 6.12 & 78.85 \(\pm\) 4.70 & 96.59 \(\pm\) 1.57 & 99.25 \(\pm\) 0.38 \\ \cline{1-1} & LPS (Ours) & **3.66**\(\pm\) 0.33 & **33.77**\(\pm\) 10.47 & **34.32**\(\pm\) 0.81 & **99.77**\(\pm\) 0.06 & **99.97**\(\pm\) 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Attack success rate (%) on CIFAR-10, where the surrogate and target model are ResNet-18 and ResNet-34 respectively. **Bold** means the best.
Figure 3: Clean accuracy of Blended attack with different backdoor sample selection strategies.
### Main results
We evaluate our LPS strategy under various experimental settings, including comparisons with baseline strategies across various attacks and poisoning ratios, comparisons on different datasets, and resistance to defenses. The attack results on CIFAR-10, CIFAR-100, and Tiny-ImageNet can be found in Tabs. 1, 2, and 3, respectively. Additionally, Tab. 4 presents the defense results on CIFAR-10. Besides, we find that due to the low poisoning ratios, the impacts of different poisoning sample selection strategies on the clean accuracy are very similar (as shown in Fig. 3). Thus, for clarity, we omit ACC in most result tables, except for Tab. 4. Three random trials are conducted for the main experiments to report the mean and standard deviation. More results for different models can be found in the **supplement materials**.
**Compare with state-of-the-art baselines.** To verify the effectiveness of our proposed LPS strategy, we first compare with two existing strategies on CIFAR-10, in which the surrogate model is ResNet-18 and the target model is ResNet-34. Different from [49], we conduct experiments under low poisoning ratios (\(<1\%\)), which are stealthier and more likely to escape human inspection. The attack success rate is shown in Tab. 1, where _#Img/Cls_ denotes the number of samples to be poisoned per class for the all-to-one setting, and _pratio_ is short for poisoning ratio. **1) From a global view**, we observe that the LPS strategy outperforms the baselines under most of the settings. For example, with a \(0.216\%\) poisoning ratio, the LPS strategy boosts BadNets (all-to-all) by \(30.61\%\) compared to FUS, and Blended (all-to-one) is improved by \(13.53\%\). **2) From the perspective of poisoning ratios**, the LPS strategy can be applied across different poisoning ratios, but the degree of improvement is also related to the poisoning ratio. Specifically, when the poisoning ratio is extremely low (\(e.g.\), 1
\begin{table}
\begin{tabular}{l l l c c c c c} \hline \hline \multicolumn{7}{c}{**Dataset: CIFAR-100**} & \multicolumn{3}{c}{**Surrogate: ResNet-18 \(\Longrightarrow\) Target: ResNet-34**} \\ \hline Attack & Pratio (\#Img/Cls) & 0.198\% (\#1) & 0.396\% (\#2) & 0.594\% (\#3) & 0.792\% (\#4) & 0.99 \% (\#5) \\ \hline \multirow{2}{*}{\begin{tabular}{l} BadNets [13] (all-to-one) \\ \end{tabular} } & Random & 8.09 \(\pm\) 2.31 & 36.74 \(\pm\) 6.22 & 50.68 \(\pm\) 2.68 & 59.50 \(\pm\) 4.56 & 64.81 \(\pm\) 5.97 \\ & FUS [40] & 10.41 \(\pm\) 4.20 & 43.60 \(\pm\) 6.79 & 51.06 \(\pm\) 6.74 & 62.28 \(\pm\) 6.22 & 68.34 \(\pm\) 6.30 \\ & LPS (Ours) & **17.98**\(\pm\) 2.58 & **52.02**\(\pm\) 4.05 & **58.46**\(\pm\) 1.87 & **63.49**\(\pm\) 5.90 & **70.45**\(\pm\) 3.49 \\ \hline \multirow{2}{*}{\begin{tabular}{l} Blended [7] (all-to-one) \\ \end{tabular} } & Random & 37.53 \(\pm\) 2.23 & 59.98 \(\pm\) 1.09 & 68.53 \(\pm\) 1.26 & 77.37 \(\pm\) 0.67 & 81.47 \(\pm\) 0.26 \\ & FUS [40] & **38.65**\(\pm\) 1.58 & 65.75 \(\pm\) 0.98 & 69.04 \(\pm\) 5.40 & 82.25 \(\pm\) 0.69 & 86.14 \(\pm\) 0.46 \\ & LPS (Ours) & 38.64 \(\pm\) 1.72 & **66.94**\(\pm\) 1.75 & **81.73**\(\pm\) 1.73 & **89.88**\(\pm\) 1.19 & **93.29**\(\pm\) 0.67 \\ \hline \multirow{2}{*}{\begin{tabular}{l} SIG [1] (clean label) \\ \end{tabular} } & Random & 2.79 \(\pm\) 0.44 & 6.09 \(\pm\) 0.99 & 14.3 \(\pm\) 2.38 & 22.08 \(\pm\) 3.28 & 43.95 \(\pm\) 1.35 \\ & FUS [40] & 3.79 \(\pm\) 1.12 & **7.80**\(\pm\) 1.60 & 15.84 \(\pm\) 2.50 & N/A & N/A \\ & LPS (Ours) & **4.49**\(\pm\) 1.43 & 7.01 \(\pm\) 1.73 & **16.11**\(\pm\) 1.99 & **25.12**\(\pm\) 1.37 & **46.43**\(\pm\) 0.55 \\ \hline \multirow{2}{*}{\begin{tabular}{l} SSBA [28] (all-to-one) \\ \end{tabular} } & Random & 1.42 \(\pm\) 0.24 & 7.45 \(\pm\) 1.62 & 18.73 \(\pm\) 3.33 & 31.61 \(\pm\) 0.63 & 43.37 \(\pm\) 1.77 \\ & LPS (Ours) & **1.51**\(\pm\) 0.40 & 7.99 \(\pm\) 1.11 & 18.44 \(\pm\) 1.51 & 33.35 \(\pm\) 1.06 & 44.00 \(\pm\) 2.66 \\ & LPS (Ours) & 1.49 \(\pm\) 0.10 & **8.03**\(\pm\) 1.09 & **21.46**\(\pm\) 1.81 & **34.12**\(\pm\) 2.85 & **48.77**\(\pm\) 1.38 \\ \hline \multirow{2}{*}{
\begin{tabular}{l} Trojan-WM [30] (all-to-one) \\ \end{tabular} } & Random & 39.44 \(\pm\) 4.24 & 68.64 \(\pm\) 1.83 & 82.13 \(\pm\) 0.47 & 88.08 \(\pm\) 0.93 & 91.16 \(\pm\) 1.52 \\ & FUS [40] & 39.74 \(\pm\) 2.42 & 75.43 \(\pm\) 3.23 & 84.80 \(\pm\) 0.79 & 92.58 \(\pm\) 0.95 & 93.87 \(\pm\) 0.33 \\ \cline{1-1} & LPS (Ours) & **44.90**\(\pm\) 3.51 & **84.75**\(\pm\) 3.24 & **96.36**\(\pm\) 1.18 & **98.16**\(\pm\) 0.33 & **99.30**\(\pm\) 0.16 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Attack success rate (%) on CIFAR-100, where the surrogate and target model are ResNet-18 and ResNet-34 respectively. **Bold** means the best.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & \multicolumn{3}{c}{**Dataset: Tiny-ImageNet**} & \multicolumn{3}{c}{**Surrogate: ResNet-18 \(\Longrightarrow\) Target: ResNet-34**} \\ \hline Attack & Pratio (\#Img/Cls) & 0.199\% (\#1) & 0.398\% (\#2) & 0.597\% (\#3) & 0.796\% (\#4) & 0.995\% (\#5) \\ \hline \multirow{2}{*}{\begin{tabular}{l} BadNets [13] (all-to-one) \\ \end{tabular} } & Random & 4.93 \(\pm\) 6.19 & 37.18 \(\pm\) 6.61 & 42.98 \(\pm\) 1.89 & 48.91 \(\pm\) 3.46 & 60.52 \(\pm\) 2.35 \\ & FUS [40] & **5.44**\(\pm\) 3.54 & 32.93 \(\pm\) 1.69 & 43.74 \(\pm\) 3.67 & 48.72 \(\pm\) 3.58 & 60.76 \(\pm\) 4.72 \\ & LPS (Ours) & 5.21 \(\pm\) 3.10 & **38.05**\(\pm\) 2.26 & **47.21**\(\pm\) 3.90 & **49.34**\(\pm\) 3.41 & **61.22**\(\pm\) 2.12 \\ \hline \multirow{2}{*}{\begin{tabular}{l} Blended [7] (all-to-one) \\ \end{tabular} } & Random & 66.73 \(\pm\) 0.52 & **78.79**\(\pm\) 0.63 & 84.87 \(\pm\) 1.50 & 87.81 \(\pm\) 0.72 & 89.96 \(\pm\) 0.43 \\ & FUS [40] & 70.95 \(\pm\) 1.47 & 82.01 \(\pm\) 0.50 & 88.38 \(\pm\) 0.94 & 90.70 \(\pm\) 1.37 & 93.19 \(\pm\) 0.39 \\ & LPS (Ours) & **82.76**\(\pm\) 2.52 & **93.55**\(\pm\) 0.45 & **96.20**\(\pm\) 0.11 & **97.65**\(\pm\) 0.10 & **98.08**\(\pm\) 0.09 \\ \hline \multirow{2}{*}{
\begin{tabular}{l} SIG[1] (all-to-one
Img/Cls, \(0.054\%\) pratio), the improvement of our method over the other strategies is not obvious, since the attack itself is weak at such a ratio and all strategies give similar results. However, once the poisoning ratio is increased, LPS shows a strong advantage over the other strategies. **3) From the perspective of attacks**, our LPS strategy consistently improves different types of triggers and poisoning labels, demonstrating that the LPS strategy is widely applicable to various backdoor attacks.
**Compare with different datasets.** To verify whether our proposed LPS strategy scales to larger datasets (more images and classes, larger image size), we also evaluate the three strategies on CIFAR-100 and Tiny-ImageNet. The results in Tabs. 2 and 3 further demonstrate the superiority of the LPS strategy over both random selection and the FUS strategy.
**Resistance to backdoor defense.** We further evaluate the resistance against defenses of the different poisoning sample selection strategies. The defense results are shown in Tab. 4. It can be seen that our method outperforms the others in most cases (higher ASR is better), indicating that a well-designed poisoning sample selection strategy can make the attack more resistant to defenses.
### Ablation studies
**Effects of different constraints in LPS.** As demonstrated under Eq. (3), the equation \(\mathbf{H}\mathbf{m}=\tilde{\alpha}\cdot\mathbf{\mu}\) captures three constraints: satisfying the poisoning ratio, excluding the target class (dubbed ET), and selecting the same number of samples per class (dubbed PC). Here we compare LPS with two variants that relax the last two constraints: **1)** LPS without excluding the target class (LPS\(\backslash_{\text{ET}}\)), and **2)** LPS\(\backslash_{\text{ET}}\) without selecting the same number of poisoned samples per class (LPS\(\backslash_{\text{ET,PC}}\)). The results in Tab. 5 show that both constraints are important for the LPS strategy. Note that even after removing both constraints, LPS\(\backslash_{\text{ET,PC}}\) still outperforms FUS.
**Effect of the number of iterations \(T\).** In Algorithm 1, our LPS method requires iteratively solving a min-max optimization problem. Here we explore the effect of different iterations \(T\) on the attack results. As shown in Fig. 4, we evaluate LPS strategy in a wide range of iterations from \(1\) to \(50\). We can see that LPS strategy shows stable and high performance in the range \(T\in[10,20]\). Therefore, we choose \(T=15\) as the default setting of the main experiments.
## 6 Analysis
**Analysis of computational complexity.** Both LPS and FUS adopt an iterative algorithm that alternately updates the surrogate model and the poisoning mask. For updating the surrogate model in each iteration, the complexity is \(O(|\mathcal{D}|K(F+B))\), where \(|\mathcal{D}|\) is the training set size, \(F\) is the cost of a forward pass through the DNN, \(B\) is the cost of a backward pass [36], and \(K\) is the
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Attack} & \multirow{2}{*}{Defense} & \multicolumn{2}{c|}{No Defense} & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{FP29} & \multicolumn{2}{c|}{AL26(10)} & \multicolumn{2}{c|}{ND2(5)} & \multicolumn{2}{c|}{CLP(56)} & \multicolumn{2}{c}{I-BAU[5]} \\ & & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC \\ \hline BadNet[13] & Random & 69.73 & 93.97 & 35.82 & 93.87 & 3.88 & 93.43 & 18.86 & 62.42 & 1.18 & 88.23 & 95.7 & 92.91 & 2.18 & 84.64 \\
0.216\% & FUS[08] & 68.97 & 93.34 & 39.91 & 93.99 & 6.39 & 95.19 & 18.74 & 72.94 & 18.22 & 87.34 & 35.51 & 93.65 & 3.86 & 76.67 \\ (all-one) & LPS (Ours) & **89.14** & 90.56 & **51.48** & 92.57 & **10.88** & 93.45 & **22.37** & 63.59 & **73.73** & 91.77 & **32.99** & **30.33** & **70.18** & 88.99 \\ \hline Blended[7] & Random & 53.22 & 94.01 & 33.26 & 93.85 & 24.39 & 93.47 & 30.07 & 71.75 & 23.58 & 51.99 & 32.53 & 93.33 & 9.77 & 76.32 \\
0.216\% & FUS[08] & 48.96 & 93.90 & 34.04 & 93.94 & 21.67 & 93.54 & 25.19 & 93.54 & 25.16 & 92.83 & **38.51** & 93.62 & 6.29 & 83.6 \\
0.216\% & LPS (Ours) & **89.73** & 93.95 & **34.08** & 93.23 & **28.02** & 93.79 & **38.01** & 79.12 & **25.68** & 91.56 & 73.66 & 93.71 & **9.18** & 75.7 \\ \hline SIGH[1] & Random & 12.61 & 93.86 & 12.58 & 93.59 & 10.84 & 93.45 & 13.99 & 73.69 & 2.08 & 90.88 & 15.48 & 93.63 & 2.99 & 87.26 \\
0.216\% & FUS[08] & 14.19 & 93.88 & 11.83 & 93.87 & 12.81 & 93.44 & 10.91 & 76.70 & 4.21 & 90.34 & 15.04 & 93.27 & 63.1 & 84.96 \\ (clean label) & LPS (Ours) & **41.31** & **33.81** & **33.01** & 93.94 & **36.99** & **35.32** & **34.06** & 72.19 & **29.52** & 87.47 & 93.64 & **7.92** & 89.42 \\ \hline Trojan WM[30] & Random & 89.43 & 93.78 & 86.4 & 93.65 & **46.59** & 93.15 & 51.71 & 71.5 & 43.21 & 91.2 & 2.74 & 92.75 & 7.29 & 84.57 \\
0.216\% & FUS[08] & 82.9 & 93.83 & 68.7 & 93.73 & 35.52 & 93.57 & 84.86 & 74.97 & 40.88 & 92.60 & 67.2 & 94.14 & **11.96** & 81.53 \\ (all-no-one) & LPS (Ours) & **93.76** & 94.01 & **96.94** & 94.17 & 90.09 & 93.44 & **62.66** & _69.58_ & **46.65** & 91.09 & **92.91** & 93.84 & 9.70 & 86.36 \\ \hline \end{tabular}
\end{table}
Table 4: Results of various defenses against attacks on CIFAR-10. **Bold** means the best
Figure 4: Attack results of LPS strategy on CIFAR-10 under different iterations \(T\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Attack & Pratio & LPS & LPS\(\backslash_{\text{ET}}\) & LPS\(\backslash_{\text{ETPC}}\) & FUS[49] \\ \hline BadNets[13] & 0.216\% & 80.58 & 75.33 & 71.47 & 68.01 \\ Blended[7] & 0.432\% & 87.20 & 85.72 & 82.71 & 79.06 \\ SSBA[28] & 0.432\% & 23.29 & 21.18 & 20.36 & 14.86 \\ Trojan-WM[30] & 0.216\% & 93.27 & 89.91 & 87.80 & 77.63 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies of LPS’s constraints.
number of epochs. For updating the poisoning mask, one forward pass over all training samples is required, so the complexity is \(O(|\mathcal{D}|F)\). Thus, the overall complexity of both LPS and FUS is \(O(T|\mathcal{D}|((K+1)F+KB))\), where \(T\) is the number of overall iterations. Notably, FUS re-initializes the surrogate model in each iteration, so it has to set \(K\) to a large number (\(i.e.\), 60), while our LPS sets \(K\) to 1. Thus, our practical efficiency is much better than that of FUS. We compare the full training time of the different strategies in the **supplement materials**.
**Visualization of selected samples.** In Fig. 6, we visualize some samples selected by our method and FUS from Tiny-ImageNet, from which we find that our method differs from FUS in two aspects. First, our method tends to select samples with discriminative patterns that are easy to remember, indicating that it prefers samples with higher clean confidence. Second, the samples selected by our method have a higher inter-class similarity. To evaluate this similarity, we compute the average pairwise Structural Similarity Index (SSIM) [46] within each class over the samples selected by our method and FUS, respectively. Since some classes are ignored by FUS, we only report the classes selected by both our method and FUS. The results, reported in Fig. 6, show that our LPS has higher inter-class similarity.
**The importance of selected samples.** In Fig. 7, we present the histogram of forgetting events of Blended-trigger poisoned samples from CIFAR-10 obtained using different strategies at a very low poisoning ratio. Forgetting events were calculated during the standard training of the target model, given the poisoning masks obtained by the different strategies. The results suggest that a DNN trained with poisoned samples that incur few forgetting events achieves higher generalization performance: the network does not have to forcibly memorize the poisoned samples, which would otherwise cost generalization capability.
## 7 Conclusion and future work
This work has explored an often overlooked step in data-poisoning based backdoor attacks, \(i.e.\), selecting which benign samples to poison. We innovatively propose a learnable poisoning sample selection strategy based on the trigger and the benign data. It is formulated as a min-max optimization problem, where a surrogate model and a binary poisoning mask are learned together, encouraging the selected samples to produce a strong backdoor effect when training the unknown target model. Extensive results validate the effectiveness and efficiency of the proposed LPS strategy in enhancing existing data-poisoning backdoor attacks.
**Limitations and future works**. Note that in the case of an extremely low poisoning ratio, the improvement of LPS is very limited, mainly because the poisoning information contained in a few poisoned samples with fixed triggers is insufficient to inject a backdoor, no matter which poisoning samples are selected. This suggests that learning the trigger and the poisoning sample selection simultaneously may further enhance the backdoor attack, which will be explored in future work. In addition, the proposed LPS strategy is specifically designed for data-poisoning backdoor attacks. Developing a similar selection strategy for training-controllable backdoor attacks also deserves to be explored in the future.
**Broader impacts**. The proposed LPS strategy could easily be utilized by adversaries to amplify the attack performance of existing backdoor attack methods, which underscores the urgency of developing proactive defense strategies and detection mechanisms to safeguard machine learning systems.
2304.06566 | NeRD: Neural field-based Demosaicking | We introduce NeRD, a new demosaicking method for generating full-color images
from Bayer patterns. Our approach leverages advancements in neural fields to
perform demosaicking by representing an image as a coordinate-based neural
network with sine activation functions. The inputs to the network are spatial
coordinates and a low-resolution Bayer pattern, while the outputs are the
corresponding RGB values. An encoder network, which is a blend of ResNet and
U-net, enhances the implicit neural representation of the image to improve its
quality and ensure spatial consistency through prior learning. Our experimental
results demonstrate that NeRD outperforms traditional and state-of-the-art
CNN-based methods and significantly closes the gap to transformer-based
methods. | Tomas Kerepecky, Filip Sroubek, Adam Novozamsky, Jan Flusser | 2023-04-13T14:25:05Z | http://arxiv.org/abs/2304.06566v1 | # NERD: Neural Field-Based Demosaicking
###### Abstract
We introduce NeRD, a new demosaicking method for generating full-color images from Bayer patterns. Our approach leverages advancements in neural fields to perform demosaicking by representing an image as a coordinate-based neural network with sine activation functions. The inputs to the network are spatial coordinates and a low-resolution Bayer pattern, while the outputs are the corresponding RGB values. An encoder network, which is a blend of ResNet and U-net, enhances the implicit neural representation of the image to improve its quality and ensure spatial consistency through prior learning. Our experimental results demonstrate that NeRD outperforms traditional and state-of-the-art CNN-based methods and significantly closes the gap to transformer-based methods.
Tomas Kerepecky \({}^{1,2}\), Filip Sroubek \({}^{1}\), Adam Novozamsky \({}^{1}\), Jan Flusser \({}^{1}\)+\({}^{1}\) Institute of Information Theory and Automation, The Czech Academy of Sciences, Czechia
\({}^{2}\)Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Czechia. Keywords: demosaicking, neural field, implicit neural representation.
Footnote †: This work was supported in part by the Czech Science Foundation grant GA21-03921S, the _Praemium Academiae_ awarded by the Czech Academy of Sciences, and the Fulbright commission under the Fulbright-Masaryk award.
## 1 Introduction
Raw data acquired by modern digital camera sensors is subject to various types of signal degradation, one of the most severe being the color filter array. To convert the raw data (Fig. 1a) into an image suitable for human visual perception (Fig. 1b), a demosaicking procedure is necessary [1].
Two main categories of image demosaicking exist: model-based and learning-based methods. Model-based methods, such as bilinear interpolation, Malvar [2], or Menon [3], are still widely used, but they fail to match the performance of recent deep learning-based approaches using deep convolutional networks (CNN) [4, 5, 6] or Swin Transformers [7].
Recently, Transformer networks have seen remarkable success in computer vision tasks and have become a state-of-the-art approach in demosaicking. However, a new paradigm in deep learning, Neural Fields (NF) [8], is gaining attention due to its comparable or superior performance in several computer vision tasks [8, 9, 10, 11, 12, 13, 14]. The basic idea behind NF is to represent data as the weights of a Multilayer Perceptron (MLP), known as implicit neural representation.
NF has been applied in various domains and applications including Neural Radiance Fields (NeRF) [9] which achieved state-of-the-art results in representing complex 3D scenes. NeRV [11] encodes entire videos in neural networks. The Local Implicit Image Function (LIIF) [12] represents an image as a neural field capable of extrapolating to 30 times higher resolution. SIREN [13] uses a sinusoidal neural representation and demonstrates superiority over classical ReLU MLP in representing complex natural signals such as images.
Prior information from training data can be encoded into neural representation through conditioning (local or global) using methods such as concatenation, modulation of activation functions [15], or hypernetworks [14]. For example, CURE [10], a state-of-the-art method for video interpolation based on NF, uses an encoder to impose space-time consistency using local feature codes.
NF has also been used in image-to-image translation tasks such as superresolution, denoising, inpainting, and generative modeling [8]. However, to the best of our knowledge, no NF method has been proposed for demosaicking.
In this paper, we present NeRD, a novel approach for image demosaicking based on NF. The proposed method employs a joint ResNet and U-Net architecture to extract prior information from high-resolution ground-truth images and their corresponding Bayer patterns. This information is then used to condition the MLP using local feature encodings. The proposed approach offers a unique and innovative solution for image demosaicking.
Figure 1: An illustration of demosaicking using coordinate-based Multilayer Perceptron and local encoding technique.
## 2 Proposed Method
NeRD converts spatial coordinates and local encodings into RGB values. The local encodings are generated by an encoder that integrates consistency priors in NeRD. The overall architecture of NeRD is depicted in Fig. 2.
The core of NeRD is a fully connected feedforward network \(\mathcal{N}_{\Phi}:(\xi_{\mathbf{x}},\mathbf{x})\rightarrow\mathbf{n}\) with 5 hidden layers, each with 256 output channels and sine activation functions. \(\Phi\) denotes the network weights. The input is a spatial coordinate \(\mathbf{x}=(x,y)\in\mathbb{R}^{2}\) and a local encoding vector \(\xi_{\mathbf{x}}\). The output is a single RGB value \(\mathbf{n}=(r,g,b)\in\mathbb{R}^{3}\). The SIREN architecture [13] was chosen for its ability to model signals with greater precision compared to MLPs with ReLU. There are two skip connections that concatenate the input vector with the outputs of the second and fourth hidden layers.
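As a concrete illustration, here is a minimal PyTorch sketch of such a sine-activated MLP with the two input skip connections; SIREN's frequency scaling and special weight initialization are omitted for brevity, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sine-activated MLP (SIREN-style) mapping a 2D coordinate plus a
# 3200-dimensional local code to an RGB value, with input skip connections
# after the 2nd and 4th hidden layers.
import torch
import torch.nn as nn

class SirenMLP(nn.Module):
    def __init__(self, in_dim=2 + 3200, hidden=256, out_dim=3):
        super().__init__()
        in_dims = [in_dim, hidden, hidden + in_dim, hidden, hidden + in_dim]
        self.layers = nn.ModuleList(nn.Linear(d, hidden) for d in in_dims)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, inp):                  # inp: (batch, 2 + 3200)
        h = inp
        for i, layer in enumerate(self.layers):
            h = torch.sin(layer(h))
            if i in (1, 3):                  # skip connections: concatenate input again
                h = torch.cat([h, inp], dim=-1)
        return self.out(h)                   # predicted (r, g, b)

rgb = SirenMLP()(torch.randn(8, 2 + 3200))   # toy forward pass, output shape (8, 3)
```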
Using the MLP without local encoding \(\xi_{\mathbf{x}}\) leads to suboptimal demosaicking results due to the insufficient information contained in the training image. This is demonstrated by the result in Fig. 3-NeRD.0, where the reconstructed image is the output of the SIREN model trained only on original input Bayer pattern in self-supervised manner. The lack of spatial consistency in these results highlights the need for additional prior information in the form of spatial encoding, which is why we utilize an encoder.
The encoder provides local feature codes \(\xi_{\mathbf{x}}\) for a given coordinate \(\mathbf{x}\) and its architecture is shown in the first row of Fig. 2. The Bayer pattern is processed through a combined network that incorporates 8 residual blocks (using the EDSR architecture [16]) and 4 downsampling and 4 upsampling layers (U-Net architecture [17]) connected by multiple skip connections. The result is a global feature encoding \(H\times W\times 128\), where \(H\) and \(W\) denote the height and width of the initial Bayer pattern in pixels. The local encoding \(\xi_{\mathbf{x}}\) is extracted from the global encoding as a \(5\times 5\) region centered at \(\mathbf{x}\), which is then flattened into a 3200-dimensional feature vector. The architecture of the encoder is adopted from [10].
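The 5×5 local-code extraction can be realized, for example, with a sliding-window unfold over the global feature map; the sketch below uses zero padding at the image borders, which is an assumption not specified in the text.

```python
# Gather a flattened 5x5 neighborhood of the (H, W, 128) global feature map
# around every pixel, giving one 5*5*128 = 3200-dimensional local code each.
import torch
import torch.nn.functional as F

def local_codes(feat):                      # feat: (1, 128, H, W) global encoding
    patches = F.unfold(feat, kernel_size=5, padding=2)   # (1, 3200, H*W)
    return patches.transpose(1, 2)          # (1, H*W, 3200), one code per pixel

codes = local_codes(torch.randn(1, 128, 64, 64))
print(codes.shape)                          # torch.Size([1, 4096, 3200])
```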
The final RGB image is produced by independently retrieving the RGB pixel values from NeRD at the coordinates specified by the input Bayer pattern.
## 3 Experiment
We numerically validated NeRD on standard image datasets. Experiments also include an ablation study highlighting the key components of the proposed architecture and comparisons with state-of-the-art methods.
### Dataset and Evaluation Metrics
A training set was created by combining multiple high-resolution datasets, such as DIV2K [18], Flickr2K [16], and OST [19], resulting in a total of 12 000 images. During each epoch, 10 000 randomly cropped patches of size \(200\times 200\) and corresponding Bayer patterns (GBRG) were generated. The Kodak and McM [20] datasets were used for testing.
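For reference, a generic way to simulate the GBRG Bayer patterns used during training from an RGB patch is sketched below; this is a standard mosaicking routine and not necessarily the authors' data pipeline.

```python
# Simulate a GBRG Bayer mosaic: each pixel keeps only the channel prescribed
# by the repeating 2x2 tile (G B / R G); the remaining channels are zeroed.
import numpy as np

def mosaic_gbrg(rgb):                  # rgb: (H, W, 3) array in [0, 1]
    mask = np.zeros_like(rgb)
    mask[0::2, 0::2, 1] = 1            # G at even rows, even columns
    mask[0::2, 1::2, 2] = 1            # B at even rows, odd columns
    mask[1::2, 0::2, 0] = 1            # R at odd rows, even columns
    mask[1::2, 1::2, 1] = 1            # G at odd rows, odd columns
    return rgb * mask

bayer = mosaic_gbrg(np.random.rand(200, 200, 3))   # one training-size patch
```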
Figure 2: The overall architecture of NeRD. Encoder consisting of 8 residual blocks and U-net architecture generates encoding \(\xi\) for the input Bayer pattern. Numbers below each layer in the encoder represent the number of output channels. Spatial coordinates \(\mathbf{x}=(x,y)\) concatenated with the corresponding local encoding vector \(\xi_{\mathbf{x}}\) are transformed into RGB value using a multilayer perceptron with 5 hidden layers each with 256 output channels, siren activation functions, and two skip connections.
The evaluation was performed using Peak Signal to Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM).
### Training Configuration
The training was conducted using an Nvidia A100 GPU. The NeRD model was optimized using the Mean Squared Error loss function, and the Adam optimizer was used with \(\beta_{1}\) = 0.9 and \(\beta_{2}\) = 0.999. The initial learning rate was set to 0.0001, and a step decay was applied, reducing the learning rate by 0.95 every epoch consisting of 10 000 iterations. The patch size was set to \(200\times 200\) and the batch size was 5.
### Ablation Study
**MLP and activation functions.** RGB images can be represented as the weights of a fully connected feedforward neural network. This representation is achieved by training an MLP in a self-supervised manner to fit the original image. However, the usage of standard ReLU activation functions in MLPs produces unsatisfactory results, as shown in Fig. 3-ReLU. To significantly improve reconstruction, Fourier feature mapping of input spatial coordinates can be used (see Fig. 3-ReLU.pe). This technique is referred to as "positional encoding". Nonetheless, an even better outcome can be achieved by replacing ReLU with sine functions, also known as SIRENs. They demonstrate the capability of MLPs as image decoders and hold promise for demosaicking applications. SIREN architecture has the capacity to model RGB images with great precision. As demonstrated in Fig. 3-Siren, the SIREN with 5 hidden layers, each with 256 neurons, achieved a PSNR of 50.7 dB when trained for just 1000 iterations to fit the original image.
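The positional encoding mentioned above ("ReLU.pe") can be implemented as a random Fourier feature mapping of the input coordinates; the sketch below follows the standard formulation, with the scale and number of frequencies chosen arbitrarily.

```python
# Random Fourier feature mapping gamma(x) = [cos(2*pi*B*x), sin(2*pi*B*x)]
# applied to 2D coordinates before a ReLU MLP; sigma and n_freq are arbitrary.
import math
import torch

def fourier_features(coords, B):                    # coords: (N, 2)
    proj = 2 * math.pi * coords @ B.T               # (N, n_freq)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

B = 10.0 * torch.randn(128, 2)                      # frequencies ~ N(0, sigma^2)
encoded = fourier_features(torch.rand(1000, 2), B)  # (1000, 256) encoded coords
```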
**Encoder.** The naive approach of decoding RGB images from Bayer patterns using SIREN architecture fails as it loses two-thirds of the original information, as shown in Fig. 3-NeRD.0. To improve the demosaicking capability of the MLP, prior information must be incorporated through an encoder. This encoder learns prior information across various training image pairs and conditions the MLP with local encodings. The effectiveness of the encoder is demonstrated in Fig. 3-NeRD, which shows the results of demosaicking using the NeRD architecture described in Sec. 2.
**Skip Connections.** The integration of encoding into the MLP can be achieved through various methods. However, methods such as modulation of activation functions or the use of hypernetworks present challenges in terms of parallelization. Hence, we utilized a method of concatenation, where the
Figure 3: The ablation study of NeRD. The original image is from DIV2K dataset. “ReLU” and ”Siren” models show the implicit neural representation of the original image using MLP with ReLU and sine activation functions, respectively. These models were trained in a self-supervised manner to fit the original image. ”ReLU.pe” stands for ”ReLU” model with additional positional encoding in the form of Fourier feature mapping. ”NeRD.0” model is identical to ”Siren” model but is only trained using the input Bayer pattern. ”NeRD” is the proposed demosaicking method, while ”NeRD.ns” represents the proposed architecture without skip connections in the MLP. Each image is labeled with its PSNR value with respect to the original image.
coordinates and feature vectors are concatenated at the input, and the input is further concatenated with the outputs of the second and fourth hidden layers through skip connections. The significance of incorporating skip connections into the MLP is illustrated in Fig. 3-NeRD.ns (no-skip). This figure demonstrates a degradation in both the quality of the reconstruction and the PSNR value when these connections are omitted.
### Comparison With Existing Methods
The evaluation of the proposed NeRD demosaicking algorithm was performed on the McM and Kodak datasets, which were resized and cropped to \(200\times 200\) px. A comparison of NeRD with traditional demosaicking algorithms and state-of-the-art methods is presented in Table 1 in terms of average PSNR and SSIM values calculated from the demosaicked images. The results show that NeRD outperforms traditional methods and the CNN-based DeepDemosaick [4], but falls slightly behind the transformer-based RSTCANet [7].
A visual comparison of the demosaicked images is presented in Fig. 4. The figure highlights differences between NeRD and the other methods and provides insights into their performance. One notable characteristic of NeRD is that it avoids over-smoothing details, unlike the DeepDemosaick [4] method, as indicated by the cyan arrow in Fig. 4g. Furthermore, NeRD outperforms traditional methods in terms of preserving fine details and avoiding unpleasant artifacts, as indicated by the magenta arrow in Fig. 4d.
## 4 Conclusion
This paper presents a novel demosaicking algorithm, NeRD, that leverages the recent class of techniques known as Neural Fields. The ablation study results emphasize the significance of incorporating an encoder and skip connections within the MLP, which results in a significant improvement over traditional techniques and outperforms the CNN-based DeepDemosaick method in preserving fine details while avoiding undesirable artifacts. Although NeRD shows slightly lower visual performance compared to the transformer-based RSTCANet, it still demonstrates remarkable reconstruction accuracy. Future research can focus on enhancing NeRD through fine-tuning with loss functions specific to the input Bayer pattern, and on integrating Transformer networks or ConvNeXt into the encoder. In addition, expanding the training set with more diverse datasets can improve the prior. Although NeRD may not attain the performance level of Transformer-based demosaicking, our contribution broadens the range of domains where Neural Fields can be applied.
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Method} & McM [20] & Kodak \\ & PSNR/SSIM & PSNR/SSIM \\ \hline Bilinear & 27.15/0.912 & 28.01/0.894 \\ Matlab (Malvar) [2] & 30.54/0.923 & 33.52/0.957 \\ Menon [3] & 31.40/0.918 & 35.20/0.968 \\ DeepDemosaick [4] & 33.31/0.942 & 37.76/0.976 \\ RSTCANet [7] & **37.77/0.978** & **40.84/0.988** \\ NeRD & 36.18/0.969 & 39.07/0.984 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average PSNR/SSIM obtained by NeRD and the current state-of-the-art methods on the McM and Kodak datasets. **Bold** and underline highlight the highest and second highest values, respectively. Note the superior results of NeRD over the CNN-based and traditional methods. Only RSTCANet, which is based on transformers, has slightly higher scores.
Figure 4: A visual comparison of NeRD and the current state-of-the-art methods on an example from the Kodak dataset. The visual differences are highlighted by close-ups, which correspond to the red box in the original image. Although NeRD exhibits slightly inferior visual performance compared to RSTCANet, it outperforms traditional methods in terms of reconstruction accuracy (indicated by the magenta arrow) and avoids over-smoothing details, as seen with the DeepDemosaick method (indicated by the cyan arrow). |
2307.09305 | Stationary equilibria and their stability in a Kuramoto MFG with strong
interaction | Recently, R. Carmona, Q. Cormier, and M. Soner proposed a Mean Field Game
(MFG) version of the classical Kuramoto model, which describes synchronization
phenomena in a large population of rational interacting oscillators. The MFG
model exhibits several stationary equilibria, but the characterization of these
equilibria and their ability to capture dynamic equilibria in long time remains
largely open.
In this paper, we demonstrate that, up to a phase translation, there are only
two possible stationary equilibria: the incoherent equilibrium and the
self-organizing equilibrium, given that the interaction parameter is
sufficiently large. Furthermore, we present some local stability properties of
the self-organizing equilibrium. | Annalisa Cesaroni, Marco Cirant | 2023-07-18T14:43:49Z | http://arxiv.org/abs/2307.09305v1 | # Stationary equilibria and their stability in a Kuramoto MFG with strong interaction
###### Abstract
Recently, R. Carmona, Q. Cormier, and M. Soner proposed a Mean Field Game (MFG) version of the classical Kuramoto model, which describes synchronization phenomena in a large population of "rational" interacting oscillators. The MFG model exhibits several stationary equilibria, but the characterization of these equilibria and their ability to capture dynamic equilibria in long time remains largely open.
In this paper, we demonstrate that, up to a phase translation, there are only two possible stationary equilibria: the incoherent equilibrium and the self-organizing equilibrium, given that the interaction parameter is sufficiently large. Furthermore, we present some local stability properties of the self-organizing equilibrium.
**AMS-Subject Classification**. 35Q89, 49N80, 92B25
**Keywords**. Mean Field Games, Kuramoto model, Synchronization, dynamic stability.
## 1 Introduction
The classical Kuramoto model is a system of nonlinear ordinary differential equations that describes the dynamics of coupled oscillators, and it has been derived to understand phenomena of collective synchronization in chemical and biological systems. Roughly speaking, the main features of this model are the following. Uncoupled oscillators run independently at their natural frequencies, and when the coupling is sufficiently weak, they still run incoherently. At a critical value of the coupling strength, the system presents a phase transition to synchrony: the oscillators spontaneously exhibit a collective behavior, that is partial synchronization, the incoherent state loses stability and coherent dynamics emerge. Full synchronization occurs as the interaction strength goes to infinity. We refer to the review paper [1] for a detailed description of the model and for several related results.
As the number of oscillators goes to infinity, the Mean Field approach comes into play. Recently, Carmona, Cormier and Soner [9] proposed a Mean Field Game (MFG) version of the classical Kuramoto model. The synergy between the Kuramoto and MFG formalisms has been already explored in [10], where a jet-lag recovery model was considered, and in [28], where bifurcation arguments have been used to analyze incoherence and coordination in some large population game Kuramoto models. The main difference between classical and MFG Kuramoto models is the following; in the former, oscillators are treated as a particle system that evolve according to predetermined rules. In the latter, particles are rational agents, who are allowed to "choose" their evolution to minimize a (predetermined) cost depending on the evolution of the other oscillators. Equilibria are then considered in the Nash sense.
A main question in these models is to understand the emergence of synchronization, and to study its possible long time stability. In classical Kuramoto (Mean Field) models the evolution is naturally forward in time, and the long time analysis is quite well understood, see for instance [11, 26]. On the other hand, a main difficulty of the MFG setting is its forward-backward nature, because evolution runs forward while optimization runs backward by Bellman's dynamic programming principle.
In [9] the existence of a phase transition is observed, as in the classical Kuramoto model: first, for a large interaction parameter, there are non-uniform stationary solutions, which become fully synchronized as the interaction parameter goes to \(+\infty\). Moreover, the authors show that below a certain critical parameter, agents desynchronize: their distribution converges, in long time, to the uniform measure, at least for initial data in a neighborhood of the uniform distribution. In other words, the incoherent state (uniform distribution) is locally stable in long time when the interaction parameter is _small_ (or the discount factor is large). However, several interesting questions remain open. Is it possible to characterize all stationary equilibria? Is it possible to say something on their long time stability/instability?
In this paper we provide some partial answers to the previous questions. In particular, we are able to describe stationary equilibria and study their local stability properties when the interaction parameter is _large_.
Let us now introduce the MFG version of the Kuramoto model, in the periodic state space \(\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}\), that will be identified with \((-\pi,\pi]\). The phase of a generic oscillator evolves according to
\[dX_{t}=\alpha_{t}dt+\sqrt{2}dW_{t},\]
where \(W_{t}\) is a standard Brownian motion. We first discuss the _ergodic_ framework (that is, when the discount parameter \(\beta\) in [9] vanishes). In such case, the control \(\alpha_{t}\) is chosen to minimize the long run cost
\[\lim_{T\to+\infty}\frac{1}{T}\mathbb{E}\int_{0}^{T}\left[\frac{|\alpha_{t}|^{2 }}{2}+2\kappa\int_{-\pi}^{\pi}\sin^{2}\left(\frac{X_{t}-y}{2}\right)dm(y) \right]dt, \tag{1.1}\]
where \(m\) is the invariant measure of all the oscillators, that is, the observed stationary distribution of the environment, while
\[\kappa>0\]
is the interaction parameter. In an equilibrium regime, the law \(\mathcal{L}(X_{t}^{\alpha})\) of the generic oscillator, driven by the optimal control, converges as \(t\to+\infty\) to the distribution \(m\). Observe that
\[2\sin^{2}\left(\frac{x-y}{2}\right)=1-\cos(x-y)=1-\cos x\cos y-\sin x\sin y\]
therefore the long run cost can be rewritten as:
\[\lim_{T\to+\infty}\frac{1}{T}\mathbb{E}\int_{0}^{T}\left[\frac{|\alpha_{t}|^{ 2}}{2}-\kappa\cos X_{t}\int_{-\pi}^{\pi}\cos(y)dm(y)-\kappa\sin X_{t}\int_{- \pi}^{\pi}\sin(y)dm(y)\right]dt\ (+\kappa). \tag{1.2}\]
Note that equilibria are translation invariant, that is: if \(m\) is an equilibrium, then \(m(\cdot-z)\) is also an equilibrium for every \(z\in\mathbb{R}\). This is expected, because no syncronization to any "special" phase is enforced.
Using the analytic (PDE) approach in MFG [21, 22, 23], the equilibrium regime in (1.2) is encoded by \(2\pi\)-periodic solutions \((u,\tilde{\lambda},m)\) of the ergodic MFG
\[\begin{cases}-u^{\prime\prime}+\frac{1}{2}|u^{\prime}|^{2}+\tilde{\lambda}=- \kappa\cos x\int_{-\pi}^{\pi}\cos(y)m(y)dy-\kappa\sin x\int_{-\pi}^{\pi}\sin( y)m(y)dy\quad\text{in $\mathbb{T}$}\\ m(x)=\frac{e^{-u(x)}}{\int_{-\pi}^{\pi}e^{-u(y)}dy}\\ u,m\text{ are }2\pi\text{-periodic, }u(0)=0.\end{cases} \tag{1.3}\]
The density \(m\) of the population of oscillators is a solution of the Fokker-Planck equation
\[-m^{\prime\prime}-(u^{\prime}m)^{\prime}=0,\qquad\int_{-\pi}^{\pi}m=1.\]
It is well known, and easy to check, that the unique solution to the previous equation is explicitly given by the formula in (1.3).
We introduce the notion of incoherent and self-organizing solutions to (1.3).
**Definition 1.1** (Incoherent and self-organizing ergodic solutions).: The triple \((0,0,\frac{1}{2\pi})\) where \(u\equiv 0\in\mathbb{R}\) is constant, \(m\equiv\frac{1}{2\pi}\) is the uniform probability density on the torus is called the _incoherent_ solution of the Kuramoto MFG (1.3).
A solution \((u,\lambda,m)\) to (1.3) is _self-organizing_ if it is not equal to the incoherent solution.
The first main result of the paper is the existence and _uniqueness_, up to translation, of self-organizing solutions to (2.1), for sufficiently large values of the interaction parameter \(\kappa\).
**Theorem 1.2**.: _There exists \(\kappa_{0}>4\) such that, for all \(\kappa\geq\kappa_{0}\), self organizing solutions to the Kuramoto system (1.3) are unique, up to translation in the \(x\)-variable._
Our uniqueness result gives, for \(\kappa\) sufficiently large, a positive answer to a conjecture proposed in [9, Remark 7.4]. We derive here some fine properties of a real valued function whose fixed points are connected with solutions of (1.3). Note that such a function is believed to be convex; here, we are able to obtain properties of its first derivative that are strong enough to classify all of its fixed points.
The second part of the paper is devoted to the study of the _local_ dynamical stability of self-organizing solutions to the Kuramoto MFG. Let us first briefly recall some known facts on the long time behavior of MFG. The classical Lasry-Lions monotone case is pretty well understood, see for instance [6, 8, 18, 27] (and references therein), namely solutions enjoy an exponential turnpike property, that is: any solution \((u,m)=(u^{T},m^{T})\) of the finite \(T\) horizon problem is exponentially close to the unique stationary state \((\bar{u},\bar{m})\) in the following sense:
\[\|m(t)-\bar{m}\|_{L^{2}}+\|u(t)-\bar{u}\|_{L^{2}}\lesssim e^{-\omega t}+e^{- \omega(T-t)}\qquad\forall t\in[0,T].\]
Such property is _global_, namely it holds for solutions satisfying arbitrary initial-final condition. The core principle behind this kind of long-time stability is, in our viewpoint, the following. If the coupling is monotone and the Hamiltonian is uniformly convex, one can show that the quantity
\[\Phi(t)=\|m(t)-\bar{m}\|_{L^{2}}^{2}+\|u(t)-\bar{u}\|_{L^{2}}^{2}\]
satisfies the following inequality
\[\int_{t_{1}}^{t_{2}}\Phi(t)dt\lesssim\Phi(t_{1})+\Phi(t_{2}) \tag{1.4}\]
for every \(t_{1}\leq t_{2}\in[0,T]\). From this, it is possible to deduce the exponential decay: we detail the argument in the Appendix, Lemma A.1. The inequality (1.4) is a straightforward consequence of the standard duality identity between _state_\(m\) and _co-state_\(u\), plus an application of the Poincare inequality. It has then been noted in [17] that (1.4) is available also if the coupling is _mildly_ nonmonotone, at least in problems with nondegenerate diffusion. Indeed, the stabilization properties of these diffusions can be quantified in order to compensate a mild nonmonotone coupling. This observation will be important also in this work, as we will see below.
Still, long time stability in MFG is mainly understood whenever global uniqueness of dynamic and stationary equilibria holds, and for problems which are set on bounded domains. We are aware of a few exceptions only: [4] obtains stability for some deterministic problems with particular structure, and [19] studies the local stability for a special nonmonotone problem, for which a linear stability analysis can be carried out explicitly. In fact, a stable long time behavior in MFG that have no monotone structure is in general not expected [7, 12, 15, 16, 24].
Thus, the study of local stability of stationary solutions in MFG is widely open when uniqueness fails. Local stability is actually what one would like to understand in a Kuramoto MFG, where, as we prove in the first part of this paper, there is a continuum of different stationary solutions.
To simplify the stability analysis, we will restrict to _even_ solutions, and \(\kappa\) large enough. Within this framework, we have shown that there exist _two_ stationary solutions only: the incoherent one and a self-organizing one (satisfying \(\int\cos\tilde{m}>0\)). The dynamic, finite-horizon version of the Kuramoto MFG in the time-space cylinder \((0,T)\times(-\pi,\pi)\) is:
\[\begin{cases}-u_{t}-u_{xx}+\frac{1}{2}|u_{x}|^{2}=-\kappa\cos x\int_{-\pi}^{ \pi}\cos(y)m(t,y)dy\\ m_{t}-m_{xx}-(mu_{x})_{x}=0\\ m_{x}(t,\pi)=m_{x}(t,-\pi)=0\qquad u_{x}(t,\pi)=u_{x}(t,-\pi)=0\\ u(\cdot,t),m(\cdot,t)\text{ are even, }\int_{-\pi}^{\pi}m(x,t)dx=1,m(\cdot,t) \geq 0\text{ for all }t\end{cases} \tag{1.5}\]
Since there can be several dynamic equilibria \((u,m)\) for any fixed initial-final conditions \(m(0),u(T)\), our goal is to show the following local stability property / local (exponential) turn-pike of the self-organizing solution \((\tilde{u},\tilde{m})\): there exists a neighborhood \(\mathcal{U}\) of \((\tilde{m},\tilde{u})\), such that, for any dynamic solution \((u,m)\) to (1.5) remaining in \(\mathcal{U}\) for all times, that is \((m(t),u(t))\in\mathcal{U}\) for all \(t\in[0,T]\), it is true that
\[\Phi(t)\lesssim e^{-\omega t}+e^{-\omega(T-t)}\qquad\forall t\in[0,T], \tag{1.6}\]
for some constants \(\omega\) independent of \(T\). Here \(\Phi(t)\) should be a positive function which quantifies the distance between \(m(t)\) and \(\tilde{m}\) (and also between \(u(t)\) and \(\tilde{u}\)).
We are able here to identify a suitable \(\Phi\); our second main result reads then informally as follows:
**Theorem**.: _Let \(\kappa\) be large enough so that \((\tilde{u},\tilde{m})\) is the unique even self-organizing solution (Theorem 1.2). Assume that \((m,u)\) is a solution to (1.5) such that_
\[m(x,t)\leq C\tilde{m}(x)\qquad\text{for all }x,t.\]
_Then, (1.6) holds with \(\Phi(t)=\|m(t)-\tilde{m}\|_{L^{2}(\tilde{m}^{-1})}\)._
The precise result is stated in Theorem 3.1. Note that the constants involved in the estimate depend on \(C,\kappa\), and \(u_{x},m|_{t=0,T}\), but not on \(T\). The result is obtained starting from the crucial observation that the rescaled variables \(w(t,x)=u(t\kappa^{-\frac{1}{2}},x\kappa^{-\frac{1}{4}})\), \(\mu(t,x)=\kappa^{-\frac{1}{4}}m(t\kappa^{-\frac{1}{2}},x\kappa^{-\frac{1}{4}})\) solve a MFG system where the coupling (formally) vanishes as \(\kappa\to+\infty\), see (3.7). Since the coupling is mild in this new scale, one is tempted to argue as in [17], but soon realizes that a main difficulty is that the problem is set on a domain that becomes the real line in the limit \(\kappa\to\infty\). We have then to implement weighted Poincare inequalities, and stability of the Fokker-Planck equation in weighted \(L^{2}\) spaces (see, for instance, [5] and references therein). Though we address a specific problem, we believe that the functional setting used here can be useful to study the long time behavior in more general MFG which are set on unbounded domains (like the whole Euclidean space), for which, even in the Lasry-Lions monotone case, there are very few available results: we are only aware of [2].
Note that an estimate like (1.6) just guarantees that trajectories that remain close to the equilibrium actually converge to it very quickly. Given an initial-final condition, the existence of these trajectories is not addressed here. Nevertheless, the kind of estimates that we obtain can be used to set up a topological fixed point argument, which in fact yields existence, at least for boundary data that are close to the equilibrium, as in [17]. Finally, the question of the long time local stability remains open at this stage for dynamic equilibria which are _not even_. In this case, it is not clear whether or not they stabilize in long time to a stationary self-organizing one, and if so, which one of the infinitely many is selected. We believe that to tackle this issue one should have a look at _orbital stability_, a key stability concept in Hamiltonian systems such as the Schrodinger equation. We will pursue this investigation in a future work.
The paper is organized as follows. In Section 2 we provide existence and uniqueness of symmetric self-organizing solutions to the problem for \(\kappa\) larger than a threshold \(\kappa_{0}>4\). Section 3 contains the proof of the local stability of the self-organizing solutions. We finally collect in the appendix some useful estimates, and the proof of the Poincare weighted inequality.
### Acknowledgements
The authors are members of GNAMPA-INdAM. They were partially supported by the King Abdullah University of Science and Technology (KAUST) project CRG2021-4674 "Mean-Field Games: models, theory, and computational aspects".
## 2 Ergodic self-organizing equilibria
In this section we prove Theorem 1.2. Most of the efforts will be devoted to classify _even_ solutions, that is, to show the following result.
**Theorem 2.1**.: _There exists \(\kappa_{0}>4\) such that for all \(\kappa\geq\kappa_{0}\) the Kuramoto MFG (1.3) admits, besides the incoherent solution, a unique even self organizing solution \((u,\lambda,m)\) with \(\int_{-\pi}^{\pi}m(x)\cos xdx>0\) and a unique even self organizing solution \((u,\lambda,m)\) with \(\int_{-\pi}^{\pi}m(x)\cos xdx<0\)._
Indeed, let \((u,\lambda,m)\) be any solution to (1.3). Up to translation, we can always assume that
* \(\int_{-\pi}^{\pi}m(y)\sin y\ dy=0\),
* \(\int_{-\pi}^{\pi}m(y)\cos y\ dy\geq 0\),
* \(u(x)=u(-x)\), \(m(x)=m(-x)\) for all \(x\).
To check (i), consider \(\hat{u}(x)=u(x+z)-u(z),\hat{m}(x)=m(x+z)\), which still solves
\[\begin{cases}-\hat{u}^{\prime\prime}+\frac{1}{2}|\hat{u}^{\prime}|^{2}+\tilde {\lambda}=-\kappa\cos x\int_{-\pi}^{\pi}\cos(y)\hat{m}(y)dy-\kappa\sin x\int_ {-\pi}^{\pi}\sin(y)\hat{m}(y)dy\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad=2\kappa\int_{-\pi}^{\pi}\sin^{2} \left(\frac{x-y}{2}\right)d\hat{m}(y)-\kappa\\ \hat{m}(x)=\frac{e^{-\hat{u}(x)}}{\int_{-\pi}^{\pi}e^{-\hat{u}(y)}dy}\\ \hat{u},\hat{m}\text{ are }2\pi\text{-periodic, }\hat{u}(0)=0.\end{cases}\]
In addition,
\[\int_{-\pi}^{\pi}\hat{m}(y)\sin y\ dy=\cos z\int_{-\pi}^{\pi}m(y)\sin y\ dy- \sin z\int_{-\pi}^{\pi}m(y)\cos y\ dy=0\]
for a suitable choice of \(z\).
Regarding (ii), if \(\int_{-\pi}^{\pi}m(y)\cos y\ dy\leq 0\) one can proceed as before by considering \(\hat{u}(x)=u(x+\pi)-u(\pi),\hat{m}(x)=m(x+\pi)\), which solves the same problem and satisfies also
\[\int_{-\pi}^{\pi}\hat{m}(y)\cos y\ dy=-\int_{-\pi}^{\pi}m(y)\cos y\ dy\geq 0.\]
Finally, if \((i)\) holds, then \((iii)\) holds as well, that is, \(u\) and \(m\) are even. Indeed, \(u\) turns out to be a \(2\pi\)-periodic solution of the ergodic HJ equation
\[-u^{\prime\prime}+\frac{1}{2}|u^{\prime}|^{2}+\tilde{\lambda}=-\kappa\cos x \int_{-\pi}^{\pi}\cos(y)m(y)dy\qquad\text{and }u(0)=0.\]
Periodic solutions of the previous equation are known to be unique, namely the couple \((\tilde{\lambda},u)\) is unique. Since \(u(-x)\) also solves the previous problem, we get that \(u(x)=u(-x)\). Hence \(u\) is even, and \(m\) needs to be even as well.
Therefore, any solution to (1.3) satisfies, _up to translation_, the following problem:
\[\begin{cases}-u^{\prime\prime}+\frac{1}{2}|u^{\prime}|^{2}+\tilde{\lambda}=-\kappa\cos x\int_{-\pi}^{\pi}\cos(y)m(y)dy\\ m(x)=\frac{e^{-u(x)}}{\int_{-\pi}^{\pi}e^{-u(y)}dy},\\ u,m\text{ are even, }2\pi\text{-periodic, }u(0)=0,\,\int_{-\pi}^{\pi}m(y)\cos y\ dy\geq 0.\end{cases} \tag{2.1}\]
Theorem 2.1 states that self-organizing solutions to the previous problem are unique, hence Theorem 1.2 follows as a straightforward consequence.
We now proceed with the proof of Theorem 2.1. First of all we slightly rewrite the system (2.1) in an equivalent way. Define
\[V(x):=1-\cos x\]
and \(\lambda:=\tilde{\lambda}+\kappa\int_{-\pi}^{\pi}m(y)\cos ydy\). Then (2.1) becomes
\[\begin{cases}-u^{\prime\prime}+\frac{1}{2}|u^{\prime}|^{2}+\lambda=\kappa V(x )\left[1-\int_{-\pi}^{\pi}V(y)m(y)dy\right]\\ u^{\prime}(\pm\pi)=0\\ m(x):=\frac{e^{-u(x)}}{\int_{-\pi}^{\pi}e^{-u(y)}dy}.\end{cases} \tag{2.2}\]
Since \(u,m\) are even, it will indeed be convenient below to work with Neumann boundary conditions at the boundary of the set \((-\pi,\pi)\). We say that \((u,\lambda,m)\) is a solution to (2.2) if \(\lambda\in\mathbb{R}\) and \((u,m)\) are smooth and solve the first equation in (2.2) in the classical sense (classical solutions are in fact \(C^{\infty}\)). Note that
\[\frac{x^{2}}{6}\leq V(x)\leq\frac{x^{2}}{2}\qquad\text{on }[-\pi,\pi]. \tag{2.3}\]
**Remark 2.2** (The rescaled system).: Several arguments below exploit a blow-up of (2.2). Let \(w(x)=u(x\kappa^{-\frac{1}{4}})\) and \(\mu(x)=\kappa^{-\frac{1}{4}}m(x\kappa^{-\frac{1}{4}})\). The rescaled problem then reads:
\[-w^{\prime\prime}+\frac{1}{2}|w^{\prime}|^{2}+\kappa^{-\frac{1}{2}}\lambda=V_ {\kappa}(x)\left[1-\kappa^{-\frac{1}{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi \kappa^{\frac{1}{4}}}V_{\kappa}(y)\mu(y)dy\right]\qquad\text{with }\mu(x)=\frac{e^{-w(x)}}{ \int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}e^{-w(y)}dy} \tag{2.4}\]
where \(V_{\kappa}(x)=\kappa^{\frac{1}{2}}V(x\kappa^{-\frac{1}{4}})\). Note that by (2.3), \(V_{\kappa}\) satisfies
\[\frac{x^{2}}{6}\leq V_{k}(x)\leq\frac{x^{2}}{2}\qquad\text{on }(-\pi\kappa^{ \frac{1}{4}},\pi\kappa^{\frac{1}{4}}). \tag{2.5}\]
The main advantage of this blow-up is that it "weakens" the coupling between \(u\) and \(m\), since \(\kappa\) in front of \(\int m\cos\) becomes \(\kappa^{-\frac{1}{2}}\).
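To see where (2.4) comes from, one can differentiate the rescaled unknowns directly. With \(w(x)=u(x\kappa^{-\frac{1}{4}})\) one has \(w^{\prime\prime}(x)=\kappa^{-\frac{1}{2}}u^{\prime\prime}(x\kappa^{-\frac{1}{4}})\) and \(\frac{1}{2}|w^{\prime}(x)|^{2}=\frac{1}{2}\kappa^{-\frac{1}{2}}|u^{\prime}(x\kappa^{-\frac{1}{4}})|^{2}\), so evaluating the first equation of (2.2) at the point \(x\kappa^{-\frac{1}{4}}\) and multiplying by \(\kappa^{-\frac{1}{2}}\) gives
\[-w^{\prime\prime}+\frac{1}{2}|w^{\prime}|^{2}+\kappa^{-\frac{1}{2}}\lambda=\kappa^{\frac{1}{2}}V(x\kappa^{-\frac{1}{4}})\left[1-\int_{-\pi}^{\pi}V(y)m(y)dy\right]=V_{\kappa}(x)\left[1-\kappa^{-\frac{1}{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{\kappa}(z)\mu(z)dz\right],\]
where the last equality uses the change of variables \(y=z\kappa^{-\frac{1}{4}}\) together with \(m(z\kappa^{-\frac{1}{4}})=\kappa^{\frac{1}{4}}\mu(z)\) and \(V(z\kappa^{-\frac{1}{4}})=\kappa^{-\frac{1}{2}}V_{\kappa}(z)\).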
In order to prove uniqueness of solutions to (2.2) (and in fact also existence), we note that such solutions correspond to fixed points of a function of a real variable. Fix \(a\in\mathbb{R}\), and consider the solution \((u_{a},\lambda_{a})\) to the ergodic Hamilton-Jacobi equation with Neumann boundary conditions
\[-u_{a}^{\prime\prime}+\frac{1}{2}|u_{a}^{\prime}|^{2}+\lambda_{a}=\kappa V(x )\left[1-a\right],\qquad u_{a}(0)=0 \tag{2.6}\]
where the parameter \(\kappa\) is fixed, and define \(F_{\kappa}:\mathbb{R}\to\mathbb{R}\) as
\[F_{\kappa}(a):=\int_{-\pi}^{\pi}V(y)m_{a}(y)dy\qquad\text{where }m_{a}(y):= \frac{e^{-u_{a}(y)}}{\int_{-\pi}^{\pi}e^{-u_{a}(y)}dy}. \tag{2.7}\]
It is well known, see [13, 14, 20] that for every \(\kappa,a\in\mathbb{R}\) there exists a unique \(\lambda_{a}\in\mathbb{R}\) and a smooth (even) function \(u_{a}\) which solves in the classical sense (2.6).
Note that
\[F_{k}(a)=1-\int_{-\pi}^{\pi}m_{a}(y)\cos y\ dy\in[0,2],\]
hence \(F_{k}:[0,2]\to[0,2]\). The stability property of the ergodic problem with respect to variations of the parameters is a well known result, see e.g. [13, Proposition 3], hence \(F_{k}\) is also continuous.
Clearly, there is a one-to-one correspondence between fixed points \(a=F_{\kappa}(a)\) and solutions to (2.2). Note that since we consider solutions with \(\int_{-\pi}^{\pi}m_{a}(y)\cos y\ dy\geq 0\), we should restrict to
\[a=1-\int_{-\pi}^{\pi}m_{a}(y)\cos y\ dy\in[0,1].\]
We shall see below that actually \(F_{k}:[0,1]\to[0,1]\). In particular \(a=1\) is a fixed point of \(F_{\kappa}\) for every \(\kappa\), and \((u_{1},\lambda_{1},m_{1})\) coincides with the incoherent solution to the Kuramoto MFG.
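The fixed-point reformulation also lends itself to a direct numerical computation, which is one way a picture such as Figure 1 below could be produced. The following is a minimal Python sketch (not part of the analysis; grid size and the value of \(\kappa\) are illustrative choices). It exploits the substitution \(\varphi=e^{-u_{a}/2}\) used in the proof of Proposition 2.5 below: the positive ground state of the Neumann operator \(-\frac{d^{2}}{dx^{2}}+\frac{\kappa(1-a)}{2}V\) on \((-\pi,\pi)\) is proportional to \(e^{-u_{a}/2}\), so \(m_{a}\) is its normalized square; one then evaluates \(F_{\kappa}(a)=\int V m_{a}\) and iterates \(a\mapsto F_{\kappa}(a)\).

```python
import numpy as np

def density_m(kappa, a, n=400):
    # Cell-centred grid on (-pi, pi) with homogeneous Neumann boundary conditions.
    h = 2 * np.pi / n
    x = -np.pi + (np.arange(n) + 0.5) * h
    V = 1.0 - np.cos(x)
    diag = np.full(n, 2.0)
    diag[0] = diag[-1] = 1.0
    lap = (np.diag(diag) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
    H = lap + np.diag(0.5 * kappa * (1.0 - a) * V)   # Neumann Schroedinger operator
    _, vecs = np.linalg.eigh(H)                      # symmetric matrix, eigh is safe
    phi = np.abs(vecs[:, 0])                         # positive ground state ~ exp(-u_a/2)
    m = phi**2
    m /= np.sum(m) * h                               # normalize: int m dx = 1
    return x, V, m, h

def F(kappa, a):
    x, V, m, h = density_m(kappa, a)
    return np.sum(V * m) * h                         # F_kappa(a) = int V(x) m_a(x) dx

kappa, a = 50.0, 0.5                                 # illustrative choices
for _ in range(200):                                 # fixed-point iteration a <- F_kappa(a)
    a = F(kappa, a)
print("approximate self-organizing fixed point:", a, "(incoherent branch: a = 1)")
```

Plotting \(a\mapsto F_{\kappa}(a)\) on \([0,1]\) with this routine should reproduce the qualitative shape of Figure 1: an almost flat profile of order \(\kappa^{-1/2}\) away from \(a=1\), followed by a steep rise near \(a=1\).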
We now derive a crucial representation formula for \(F_{\kappa}^{\prime}\).
**Proposition 2.3** (Variations with respect to \(a\)).: _Let \(a\in[0,2]\) and let \((u_{a},\lambda_{a})\) be the solution to (2.6) with parameter \(\kappa\)._
_Then there exists a unique \(\lambda_{a}^{\prime}\in\mathbb{R}\) and a unique smooth \(v_{a}\) which solves in classical sense the equation_
\[-v_{a}^{\prime\prime}+v_{a}^{\prime}u_{a}^{\prime}+\lambda_{a}^{\prime}=- \kappa V(x),\quad v_{a}(0)=0 \tag{2.8}\]
_with Neumann boundary conditions. Moreover_
\[\lambda_{a}^{\prime}=\lim_{h\to 0}\frac{\lambda_{a+h}-\lambda_{a}}{h}=- \kappa F_{\kappa}(a)=-\kappa\int_{-\pi}^{\pi}V(y)m_{a}(y)dy \tag{2.9}\]
_and_
\[F_{\kappa}^{\prime}(a)=\lim_{h\to 0}\frac{F_{\kappa}(a+h)-F_{ \kappa}(a)}{h} = -\frac{1}{\kappa}\lim_{h\to 0}\frac{\lambda_{a+h}^{\prime}- \lambda_{a}^{\prime}}{h} \tag{2.10}\] \[= \frac{1}{\kappa}\int_{-\pi}^{\pi}(v_{a}^{\prime}(y))^{2}m_{a}(y)dy\] \[= -\int_{-\pi}^{\pi}\left(\frac{\lambda_{a}^{\prime}}{\kappa}+V(y) \right)v_{a}m_{a}dy. \tag{2.11}\]
In particular \(F_{\kappa}\) is a nondecreasing function, and since \(F_{\kappa}(1)=1\), then \(F_{\kappa}:[0,1]\to[0,1]\). Observe also that for \(a=1\), since \(u_{1}=0,\lambda_{1}=0\), \(m_{1}=\frac{1}{2\pi}\), we get
\[\lambda_{1}^{\prime}=-\kappa,\qquad v_{1}(x)=\kappa(\cos x-1),\qquad F_{\kappa}^{\prime}(1)=\frac{\kappa}{2}.\]
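Indeed, plugging these expressions into (2.9)–(2.10) gives a quick consistency check: \(\lambda_{1}^{\prime}=-\kappa F_{\kappa}(1)=-\kappa\) since \(F_{\kappa}(1)=1\), while \(v_{1}^{\prime}(y)=-\kappa\sin y\) yields
\[F_{\kappa}^{\prime}(1)=\frac{1}{\kappa}\int_{-\pi}^{\pi}(v_{1}^{\prime}(y))^{2}m_{1}(y)dy=\frac{1}{\kappa}\int_{-\pi}^{\pi}\kappa^{2}\sin^{2}y\,\frac{1}{2\pi}\,dy=\frac{\kappa}{2\pi}\,\pi=\frac{\kappa}{2}.\]
In particular \(F_{\kappa}^{\prime}(1)>1\) as soon as \(\kappa>2\), so for such \(\kappa\) the incoherent point \(a=1\) is a repelling fixed point of the map \(F_{\kappa}\).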
Proof.: To obtain (2.8) and (2.9) we follow the same arguments as in [13, Section 5.1]. First of all, the existence of a unique couple \((\lambda^{\prime},v)\) solving (2.8) is standard.
Fix \(h\) small and consider \((u_{a+h},\lambda_{a+h})\) the solution to (2.6) with parameter \(\kappa\). Define \(v_{h}=\frac{u_{a+h}-u_{a}}{h}\) and \(\lambda_{h}=\frac{\lambda_{a+h}-\lambda_{a}}{h}\). Then \(v_{h},\lambda_{h}\) solves
\[-v_{h}^{\prime\prime}+\frac{1}{2}(u_{a}^{\prime}+u_{a+h}^{\prime})v_{h}^{ \prime}+\lambda_{h}=-\kappa V(x).\]
If we multiply by \(m_{a}\) this equation and integrate, recalling that \(-m_{a}^{\prime\prime}-(m_{a}u_{a}^{\prime})^{\prime}=0\), we get
\[\lambda_{h}=-\kappa\int_{-\pi}^{\pi}V(y)m_{a}(y)-\frac{1}{2}\int_{-\pi}^{\pi} h(v_{h}^{\prime})^{2}m_{a}dy.\]
Moreover \(m_{h}=\frac{m_{a+h}-m_{a}}{h}\) solves
\[-m_{h}^{\prime\prime}-(u_{a+h}^{\prime}m_{h}+v_{h}^{\prime}m_{a})^{\prime}=0\]
with \(\int_{-\pi}^{\pi}m_{h}=0\). If we multiply the equation for \(v_{h}\) by \(m_{h}\), subtract from it the equation for \(m_{h}\) multiplied by \(v_{h}\), and integrate over \((-\pi,\pi)\), we conclude that
\[\frac{1}{2}\int_{-\pi}^{\pi}h(v_{h}^{\prime})^{2}(m_{a}+m_{a+h})dy=\kappa\int_ {-\pi}^{\pi}V(y)(m_{a+h}-m_{a})dy.\]
So, since \(m_{a+h}-m_{a}\to 0\) uniformly as \(h\to 0\), we conclude that \(\lim_{h\to 0}\lambda_{h}=-\kappa\int_{-\pi}^{\pi}V(y)m_{a}(y)dy=-\kappa F_{\kappa}(a)\). Moreover since \(\frac{1}{2}(u_{a}^{\prime}+u_{a+h}^{\prime})\to u_{a}^{\prime}\) as \(h\to 0\) uniformly, by stability of viscosity solutions and uniqueness of the ergodic constant we conclude that \(\lambda^{\prime}=\lim_{h\to 0}\lambda_{h}\) and that \(\lim_{h}v_{h}=v_{a}\) in \(C^{1,\gamma}\).
To obtain (2.10) and (2.11) we now define \(z_{h}=\frac{v_{a+h}-v_{a}}{h}\) and \(\lambda^{\prime}_{h}=\frac{\lambda^{\prime}_{a+h}-\lambda^{\prime}_{a}}{h}\). Then \(z_{h},\lambda^{\prime}_{h}\) solves
\[-z^{\prime\prime}_{h}+u^{\prime}_{a+h}z^{\prime}_{h}+\frac{u^{\prime}_{a+h}-u ^{\prime}_{a}}{h}v^{\prime}_{a}+\lambda^{\prime}_{h}=0.\]
We multiply this equation by \(m_{a+h}\) and integrate in \([-\pi,\pi]\) recalling that \(-m^{\prime\prime}_{a+h}-(m_{a+h}u^{\prime}_{a+h})^{\prime}=0\) and we obtain:
\[-\lambda^{\prime}_{h}=\int_{-\pi}^{\pi}\frac{u^{\prime}_{a+h}-u^{\prime}_{a}}{ h}v^{\prime}_{a}m_{a+h}dy=\int_{-\pi}^{\pi}v^{\prime}_{h}v^{\prime}_{a}m_{a+h}dy.\]
As \(h\to 0\), we get that \(v_{h}\to v_{a}\) in \(C^{1,\gamma}\) and \(m_{a+h}\to m_{a}\) uniformly, so, passing to the limit we obtain
\[\big{(}-\kappa F_{\kappa}(a)\big{)}^{\prime}=\lim_{h}\lambda^{\prime}_{h}=- \int_{-\pi}^{\pi}(v^{\prime}_{a}(y))^{2}m_{a}(y)dy. \tag{2.12}\]
Finally, we multiply (2.8) by \(m_{a}v_{a}\) and subtract the equation \(-m^{\prime\prime}_{a}-(m_{a}u^{\prime}_{a})^{\prime}=0\) multiplied by \(\frac{1}{2}v_{a}^{2}\) and integrate:
\[\int_{-\pi}^{\pi}\big{[}-v^{\prime\prime}_{a}+v^{\prime}_{a}u^{ \prime}_{a}+\lambda^{\prime}_{a}+\kappa V(x)\big{]}\,v_{a}m_{a}-\big{[}-m^{ \prime\prime}_{a}-(m_{a}u^{\prime}_{a})^{\prime}\big{]}\,\frac{v_{a}^{2}}{2}dy\] \[= \int_{-\pi}^{\pi}\left[-\frac{1}{2}(v_{a}^{2})^{\prime\prime}m_{a }+(v^{\prime}_{a})^{2}m_{a}+v^{\prime}_{a}u^{\prime}_{a}v_{a}m_{a}+\big{[} \lambda^{\prime}_{a}+\kappa V(x)\big{]}\,v_{a}m_{a}+\frac{1}{2}(v_{a}^{2})^{ \prime\prime}m_{a}-m_{a}u^{\prime}_{a}v_{a}v^{\prime}_{a}\right]dy\] \[= \int_{-\pi}^{\pi}\left[(v^{\prime}_{a})^{2}m_{a}+\big{[}\lambda^ {\prime}_{a}+\kappa V(x)\big{]}\,v_{a}m_{a}\right]dy=0.\]
This, together with (2.12), implies (2.11).
**Remark 2.4** (Symmetry of \(F_{\kappa}\)).: Let us observe that if \(u_{a},\lambda_{a}\) is the solution to (2.6) associated to \(a\in[0,1]\) (where we fix \(u_{a}(0)=0\)), then \(\bar{u}_{a}(x):=u_{a}(x+\pi)-u_{a}(\pi),\bar{\lambda}_{a}:=\lambda_{a}-2\kappa (1-a)\) is the solution to (2.6) associated to \(\bar{a}=2-a\). This implies that for all \(a\in[0,2]\),
\[u_{2-a}(x):=u_{a}(x+\pi)-u_{a}(\pi)\qquad\lambda_{2-a}=\lambda_{a}-2\kappa(1-a )\qquad F_{\kappa}(2-a)=2-F_{\kappa}(a).\]
We are now going to show that if \(\kappa\) is sufficiently large then there exists a fixed point \(\bar{a}\in(0,1)\) of the function \(F_{\kappa}\), and that this fixed point is unique. This would also imply, by the previous Remark 2.4, that for \(\kappa\) sufficiently large, \(2-\bar{a}\) is the unique fixed point of the function \(F_{\kappa}\) in \((1,2)\). Our strategy is to derive the following properties of \(F_{\kappa}\):
* Theorem 2.10: for any \(\delta>0\), \(F^{\prime}_{\kappa}=O(\kappa^{-1/2})\) on \([0,1-\delta]\) for large enough \(\kappa\).
* Proposition 2.11: there exists \(\tau_{0}\in(0,1)\) such that \(F^{\prime}_{\kappa}\left(1-\frac{\tau}{\kappa}\right)\geq\frac{\kappa}{4}\) for any \(\tau\in(0,\tau_{0}]\) and \(\kappa>4\).
The two points above show that there can be just one fixed point in an interval \([0,1-\delta]\) and close to \(a=1\). In other words, for large \(\kappa\), \(F_{\kappa}\) is almost flat (and close to zero) for all \(a\), and as soon as \(a\) gets close to one, \(F_{\kappa}\) abruptly reaches the fixed point \(a=1\), see Figure 1. Combining this information with the fact that \(F_{\kappa}\) is monotone gives the result.
Below, \(a\in[0,1]\), and \((u_{a},\lambda_{a})\) denotes the solution to (2.6) with \(u_{a}(0)=0\) and interaction parameter \(\kappa\geq 1\). Most importantly, positive constants in the statements will be independent of \(\kappa\).
**Proposition 2.5**.: _There exists \(\ell>0\) such that_
\[0\leq\lambda_{a}\leq\ell\kappa^{\frac{1}{2}}. \tag{2.13}\]
Proof.: Let \(\varphi(x):=e^{-\frac{u(x)}{2}}\). Then, the couple \((\varphi,\lambda_{a})\) solves
\[\begin{cases}-\varphi^{\prime\prime}(x)+\frac{\kappa V(x)[1-a]}{2}\varphi(x)= \lambda_{a}\varphi(x)\quad\text{on }(-\pi,\pi),\\ \varphi^{\prime}(\pi)=\varphi^{\prime}(-\pi)=0.\end{cases}\]
Since \(\varphi>0\) on \((-\pi,\pi)\), \(\lambda_{a}\) is the first (nontrivial) eigenvalue of the Schrodinger operator \(-\Delta+\frac{\kappa V(x)[1-a]}{2}\) on \((-\pi,\pi)\) (with Neumann boundary conditions), hence it has the following well-known characterization
\[\lambda_{a}=\inf_{\begin{subarray}{c}\phi\in H^{1}(-\pi,\pi)\\ \int_{-\pi}^{\pi}\phi^{2}=1\end{subarray}}\int_{-\pi}^{\pi}|\phi^{\prime}|^{2}+\frac{\kappa V(x)[1-a]}{2}\phi^{2}dx,\]
which yields \(\lambda_{a}\geq 0\) as a straightforward consequence. Pick any smooth nonnegative \(\psi\) with compact support in \((0,1)\) and such that \(\int_{-\pi}^{\pi}\psi^{2}=1\), and let
\[\phi(x)=\kappa^{1/8}\psi\left(x\kappa^{1/4}\right).\]
Clearly, \(\phi\) has support in \((0,\kappa^{-1/4})\) and it satisfies \(\int_{-\pi}^{\pi}\phi^{2}=1\). Therefore,
\[\lambda_{a}\leq\int_{0}^{\kappa^{-1/4}}|\phi^{\prime}|^{2}+\frac{\kappa V(x)[1-a]}{2}\phi^{2}dx=\int_{0}^{1}\kappa^{1/2}|\psi^{\prime}|^{2}+\frac{\kappa V(x\kappa^{-1/4})[1-a]}{2}\psi^{2}dx.\]
Using (2.3) we obtain
\[\lambda_{a}\leq\kappa^{1/2}\int_{0}^{1}|\psi^{\prime}|^{2}+\frac{x^{2}}{4}\psi ^{2}dx,\]
that gives the conclusion.
**Remark 2.6** (Mathieu functions).: As one can see from the previous proof, \(\phi=e^{-u_{a}/2}\) is a so-called _Mathieu function_, because it solves an equation of the form
\[-\varphi^{\prime\prime}+(b+q\cos x)\varphi=0\]
for some real \(b,q\); \(b=b(q)\) is the characteristic number, and it is strictly related to \(\lambda_{a}\) in our formulation (while \(q\) is proportional to \(a\) and \(\kappa\)). This class of special functions has been extensively studied during the last century [25]. For instance, by known results one could infer very precise asymptotics of \(\lambda_{a}\) as \(\kappa\to\infty\). One could then prove that \(F_{\kappa}\) has just two fixed points on \([0,1]\) by showing for instance that it is convex (which is reasonable if one looks at Figure 1). Since \(F_{\kappa}^{\prime\prime}=-\lambda_{a}^{\prime\prime\prime}/\kappa\) by Proposition 2.3, this amounts to establishing the sign of the third derivative of \(\lambda_{a}\) with respect to \(a\). Unfortunately, we are not aware of any result on the behavior of derivatives of the characteristic number \(b\) as a function of \(q\).
Figure 1: Plot of the function \(a\mapsto F_{k}(a)\), for large \(\kappa\).
**Proposition 2.7**.: _Let \(\delta\in(0,1)\) and \(a\in[0,1-\delta]\). Then, there exists \(\tilde{\kappa}=\tilde{\kappa}(\delta)\) (with \(\tilde{\kappa}(\delta)\to+\infty\) as \(\delta\to 0\)) such that if \(\kappa\geq\tilde{\kappa}(\delta)\) then_
\[c_{1}\kappa^{1/2}x^{2}-c_{3}\leq u_{a}(x)\leq c_{2}\kappa^{1/2}x^{2}+c_{3} \qquad\text{for }|x|\leq\pi.\]
_for some \(c_{2},c_{3}>0\) and \(c_{1}=c_{1}(\delta)>0\) with \(c_{1}(\delta)\to 0\) as \(\delta\to 0\)._
Proof.: We first need to obtain a control on \(u_{a}\) close to \(x=0\). We rescale the equation (2.6) as in Remark 2.2: we let \(w_{a}(x)=u_{a}(x\kappa^{-\frac{1}{4}})\) and we get
\[-w_{a}^{\prime\prime}+\frac{1}{2}|w_{a}^{\prime}|^{2}+\kappa^{-\frac{1}{2}} \lambda_{a}=\kappa^{\frac{1}{2}}V(x\kappa^{-\frac{1}{4}})[1-a]. \tag{2.14}\]
Observe that \(\kappa^{\frac{1}{2}}V(x\kappa^{-\frac{1}{4}})\) satisfies (2.5), hence it is locally bounded with respect to \(x\), uniformly in \(\kappa\). By the local gradient estimates for \(w_{a}\) (see e.g. [14, 20]), we have that \(|w_{a}^{\prime}|\leq C_{r}\) on any interval \([-r,r]\subset(-\pi\kappa^{1/4},\pi\kappa^{1/4})\), which implies, going back to \(u_{a}\), that \(|u_{a}^{\prime}|\leq C_{r}\kappa^{\frac{1}{4}}\) on \([-r\kappa^{-1/4},r\kappa^{-1/4}]\). Hence,
\[|u_{a}(x)|\leq rC_{r}\qquad\text{on }[-r\kappa^{-1/4},r\kappa^{-1/4}]. \tag{2.15}\]
Let us now proceed with the bound for \(u_{a}\) from above, by constructing a suitable supersolution of the HJ equation. Note that \(u_{a}\) is even, hence it suffices to argue on \([0,\pi]\). Let
\[\tilde{u}(x):=\frac{\tilde{c}}{2}\kappa^{1/2}x^{2},\]
where \(\tilde{c}\geq 1\) will be chosen below (large). We have that, for \(x\geq r\kappa^{-1/4}\),
\[-\tilde{u}^{\prime\prime}+\frac{1}{2}|\tilde{u}^{\prime}|^{2}+\lambda_{a}-\kappa V(x)\left[1-a\right]\stackrel{{\lambda_{a}\geq 0}}{{\geq}}-\tilde{c}\kappa^{1/2}+\frac{\tilde{c}^{2}}{2}\kappa x^{2}-\kappa V(x)\left[1-a\right]\stackrel{{(2.3)}}{{\geq}}\\ -\tilde{c}\kappa^{1/2}+\frac{\kappa}{2}(\tilde{c}^{2}-1)x^{2}\geq\kappa^{1/2}\big{(}-\tilde{c}+(\tilde{c}^{2}-1)r^{2}\big{)}\geq 0,\]
provided that \(\tilde{c}=\tilde{c}(r)\) is chosen large enough. Note that \(\tilde{u}^{\prime}(\pi)>0\). By the Maximum Principle, the maximum of \(u_{a}-\tilde{u}\) on \(r\kappa^{-1/4}\leq x\leq\pi\) is achieved at the boundary. Since it cannot be achieved at \(x=\pi\) (that would contradict Hopf's Lemma), we get recalling (2.15) that
\[u_{a}(x)-\tilde{u}(x)\leq u_{a}(r\kappa^{-1/4})-\tilde{u}(r\kappa^{-1/4})\leq r C _{r}\qquad\text{in }r\kappa^{-1/4}\leq x\leq\pi,\]
which, using again (2.15), yields
\[u_{a}(x)\leq\frac{\tilde{c}}{2}\kappa^{1/2}x^{2}+rC_{r}\qquad\text{on }|x|\leq\pi.\]
Pick now any \(r\) (\(r=1\) for instance) to conclude the bound on \(u_{a}\) from above.
To control \(u_{a}\) from below, we use the following subsolution on \(x\geq r\kappa^{-1/4}\):
\[\underline{u}(x):=\frac{c}{2}\kappa^{1/2}(1-\cos x),\]
for \(c>0\) small. Note that now \(r\) will need to be chosen large enough so that \(\ell-\frac{\delta r^{2}}{12}\leq-1\). For \(x\geq r\kappa^{-1/4}\), and \(c\) small so that \(c^{2}\leq\delta/2\) and \(c\leq 1\),
\[-\underline{u}^{\prime\prime}+\frac{1}{2}|\underline{u}^{\prime}|^{2}+\lambda_{a}-\kappa V(x)\left[1-a\right]\stackrel{{(2.13)}}{{\leq}}\\ \frac{c}{2}\kappa^{1/2}+\frac{c^{2}\kappa}{2}(1-\cos x)(1+\cos x)+\ell\kappa^{1/2}-\kappa(1-\cos x)\left[1-a\right]\leq\\ c\kappa^{1/2}+\ell\kappa^{1/2}+\kappa(1-\cos x)\left(\frac{c^{2}}{2}(1+\cos x)-1+a\right)\leq\\ c\kappa^{1/2}+\ell\kappa^{1/2}-\frac{\delta\kappa}{2}(1-\cos x)\leq\kappa^{1/2}\left(c+\ell-\frac{\delta r^{2}}{12}\right)\leq 0.\]
Arguing as before, we obtain
\[u_{a}(x)\geq\underline{u}(x)-rC_{r}\geq\frac{c\kappa^{1/2}}{12}x^{2}-rC_{r}\qquad\text{on }|x|\leq\pi.\]
**Corollary 2.8**.: _Let \(\delta\in(0,1)\). Fix \(a\in[0,1-\delta]\). Then, for \(\kappa\geq\tilde{\kappa}(\delta)\) (where \(\tilde{\kappa}(\delta)\) is as in Proposition 2.7) there holds_
\[C^{-1}\kappa^{1/4}e^{-c_{2}\kappa^{1/2}x^{2}}\leq m_{a}(x)=\frac{e^{-u_{a}(x) }}{\int_{-\pi}^{\pi}e^{-u_{a}(x)}dx}\leq C\kappa^{1/4}e^{-c_{1}\kappa^{1/2}x^{2 }}\qquad\text{ for all }x\in[-\pi,\pi], \tag{2.16}\]
_for some \(C>0\) and for \(c_{1},c_{2}\) as in Proposition 2.7. Moreover_
\[F_{\kappa}(a)=\int_{-\pi}^{\pi}V(x)m_{a}(x)dx\leq\frac{C^{\prime}}{\kappa^{1/2}} \tag{2.17}\]
_for some \(C^{\prime}>0\). Here, \(C,C^{\prime}\) depend on \(c_{1},c_{2},c_{3}\)._
Proof.: For the first assertion, it is sufficient to use Proposition 2.7:
\[\frac{e^{-u_{a}(x)}}{\int_{-\pi}^{\pi}e^{-u_{a}(x)}dx}\leq e^{2c_{3}}\frac{e^{ -c_{1}\kappa^{1/2}x^{2}}}{\int_{-\pi}^{\pi}e^{-c_{2}\kappa^{1/2}x^{2}}dx}=e^{ 2c_{3}}\kappa^{1/4}\frac{e^{-c_{1}\kappa^{1/2}x^{2}}}{\int_{-\pi\kappa^{1/4}}^ {\pi\kappa^{1/4}}e^{-c_{2}y^{2}}dy}\]
and
\[\frac{e^{-u_{a}(x)}}{\int_{-\pi}^{\pi}e^{-u_{a}(x)}dx}\geq e^{-2c_{3}}\frac{e^{-c_{2}\kappa^{1/2}x^{2}}}{\int_{-\pi}^{\pi}e^{-c_{1}\kappa^{1/2}x^{2}}dx}=e^{-2c_{3}}\kappa^{1/4}\frac{e^{-c_{2}\kappa^{1/2}x^{2}}}{\int_{-\pi\kappa^{1/4}}^{\pi\kappa^{1/4}}e^{-c_{1}y^{2}}dy}.\]
To get the second one, since \(V(x)\leq x^{2}/2\),
\[F_{\kappa}(a)\leq C\int_{-\pi}^{\pi}x^{2}e^{-c_{1}\kappa^{1/2}x^{2}}\kappa^{1/ 4}dx=\frac{C}{\kappa^{1/2}}\int_{-\pi\kappa^{1/4}}^{\pi\kappa^{1/4}}y^{2}e^{- c_{1}y^{2}}dy.\]
**Proposition 2.9**.: _Let \(\delta\in(0,1)\). Fix \(a\in[0,1-\delta]\), and consider \((v_{a},\lambda^{\prime}_{a})\) the solution to (2.8) with \(v_{a}(0)=0\) and with interaction parameter \(\kappa\). Then, there exists \(\bar{\kappa}(\delta)\geq\tilde{\kappa}(\delta)\) (where \(\tilde{\kappa}(\delta)\) is as in Proposition 2.7) such that for \(\kappa\geq\bar{\kappa}(\delta)\) there holds_
\[|v_{a}(x)|\leq\bar{c}_{1}\kappa^{1/2}x^{2}+\bar{c}_{2}\qquad\text{for }|x|\leq\pi \tag{2.18}\]
_for some \(\bar{c}_{1},\bar{c}_{2}>0\)._
Proof.: We start with some bounds on \(v_{a}\) close to \(x=0\). Since \(v_{a}\) is a solution to (2.8), and \(u_{a}\) is even, we get that also \(v_{a}\) is even, and then \(v^{\prime}_{a}(0)=0\). By direct integration of (2.8) we get
\[v^{\prime}_{a}(x)=e^{u_{a}(x)}\int_{0}^{x}e^{-u_{a}(s)}\big{(}\lambda^{\prime} _{a}+kV(s)\big{)}ds.\]
Note that \(|\lambda^{\prime}_{a}|=\kappa F_{\kappa}(a)\leq C\kappa^{1/2}\) by (2.17). Hence, by the control on \(u_{a}\) obtained in Proposition 2.7 we get, for any \(r\geq 0\) and \(|x|\leq r\kappa^{-1/4}\),
\[|v^{\prime}_{a}(x)|\leq e^{c_{2}\kappa^{1/2}x^{2}}e^{2c_{3}}\int _{0}^{x}\big{(}|\lambda^{\prime}_{a}|+\kappa V(s)\big{)}ds\leq e^{c_{2}r^{2}}e ^{2c_{3}}\int_{0}^{x}\Big{(}C\kappa^{1/2}+\frac{\kappa}{2}s^{2}\Big{)}\,ds\leq\] \[e^{c_{2}r^{2}}e^{2c_{3}}r\kappa^{-1/4}\left(C\kappa^{1/2}+\frac{ \kappa^{1/2}}{2}r^{2}\right)=C_{r}\kappa^{1/4},\]
which in turn yields
\[|v_{a}(x)|\leq rC_{r}\qquad\text{on }[-r\kappa^{-1/4},r\kappa^{-1/4}]. \tag{2.19}\]
Now we need to control from above and below \(v_{a}\) in the annulus \(r\kappa^{-1/4}\leq|x|\leq\pi\) by constructing suitable sub/supersolutions of (2.8). We first pick \(r\) such that
\[\frac{r^{2}\delta}{12}\geq\ell,\qquad\frac{r^{2}}{6}\geq C+\ell,\]
where \(\ell,C\) appear in (2.13) and (2.17) respectively. Let \(\underline{v}(x)=-\eta u(x)\), where \(\eta\) satisfies
\[\eta\geq\frac{2}{\delta}.\]
For \(x\geq r\kappa^{-1/4}\) (note that \(v_{a}\) is even, hence all the arguments below adapt to \(x\leq-r\kappa^{-1/4}\)),
\[-\underline{v}^{\prime\prime}+\underline{v}^{\prime}u_{a}^{ \prime}+\lambda_{a}^{\prime}+\kappa V=\eta u_{a}^{\prime\prime}-\eta|u_{a}^{ \prime}|^{2}+\lambda_{a}^{\prime}+\kappa V\stackrel{{\lambda^{ \prime}\leq 0}}{{\leq}}\eta\lambda_{a}-\eta\kappa V[1-a]+\kappa V\stackrel{{ \eqref{eq:v_a}}}{{\leq}}\\ \eta\ell\kappa^{1/2}+\kappa V[1-\eta(1-a)]\leq\eta\ell\kappa^{1/ 2}-\kappa V\frac{\eta\delta}{2}\leq\eta\ell\kappa^{1/2}-\kappa x^{2}\frac{\eta \delta}{12}\leq\eta\kappa^{1/2}\left(\ell-\frac{r^{2}\delta}{12}\right)\leq 0.\]
Hence \(\underline{v}-v_{a}\) is a subsolution of (2.8) on \(x\geq r\kappa^{-1/4}\). By Hopf's Lemma (recall that \(\underline{v}^{\prime}(\pi)-v_{a}^{\prime}(\pi)=0\)) and the maximum principle,
\[\underline{v}(x)-v_{a}(x)\leq\underline{v}(r\kappa^{-1/4})-v_{a}(r\kappa^{-1 /4})\leq rC_{r}\qquad\text{for }r\kappa^{-1/4}\leq x\leq\pi,\]
which implies
\[v_{a}(x)\geq-\eta u(x)-rC_{r}\geq-\eta c_{2}\kappa^{1/2}x^{2}-\eta c_{3}-rC_{r }\qquad\text{for }|x|\leq\pi\]
in view of Proposition (2.7).
To control \(v_{a}\) from above, we argue similarly with \(\tilde{v}=\eta u\). Indeed, for \(x\geq r\kappa^{-1/4}\),
\[-\tilde{v}^{\prime\prime}+\tilde{v}^{\prime}u_{a}^{\prime}+ \lambda_{a}^{\prime}+\kappa V=-\eta u_{a}^{\prime\prime}+\eta|u_{a}^{\prime}| ^{2}+\lambda_{a}^{\prime}+\kappa V\stackrel{{\eqref{eq:v_a}}}{{ \geq}}-\eta\lambda_{a}+\eta\kappa V[1-a]-\eta C\kappa^{1/2}+\kappa V\geq\\ -\eta(C+\ell)\kappa^{1/2}+\kappa V\geq-\eta\kappa^{1/2}(C+\ell)+ \kappa^{1/2}\frac{r^{2}}{6}=\eta\kappa^{1/2}\left(-C-\ell+\frac{r^{2}}{6} \right)\geq 0,\]
and we conclude as above that
\[v_{a}(x)\leq\eta u(x)+rC_{r}\leq\eta c_{2}\kappa^{1/2}x^{2}+\eta c_{3}+rC_{r}\qquad\text{for }|x|\leq\pi.\]
Our first result is the existence and uniqueness of a fixed point of \(F_{\kappa}\) on the set \([0,1-\delta]\), for \(\kappa\) sufficiently large.
**Theorem 2.10**.: _Let \(\delta\in(0,1)\). Fix \(a\in[0,1-\delta]\), and consider the map \(F_{\kappa}(a)\) defined in (2.7), with interaction parameter \(\kappa\). Then, for \(\kappa\geq\bar{\kappa}(\delta)\) (where \(\bar{\kappa}(\delta)\) is as in Proposition 2.9) there holds for some \(C>0\)_
\[0\leq F_{\kappa}^{\prime}(a)\leq C\kappa^{-\frac{1}{2}}.\]
_In particular there exists \(\kappa_{0}(\delta)\geq\bar{\kappa}(\delta)\) (with \(\kappa_{0}(\delta)\to+\infty\) when \(\delta\to 0\)), such that if \(\kappa\geq\kappa_{0}(\delta)\), then \(F_{\kappa}:[0,1-\delta]\to[0,1-\delta]\) is a contraction. Hence it admits a unique fixed point, which is associated to a self-organizing solution to the Kuramoto MFG (2.2)._
Note that, by Remark 2.4, \(F_{\kappa}:[1+\delta,2]\to[1+\delta,2]\) is also a contraction.
Proof.: We recall that \(|\lambda_{a}^{\prime}|=\kappa F_{\kappa}(a)\leq C\kappa^{\frac{1}{2}}\) by (2.17) and \(V(x)\leq x^{2}/2\). So
\[0\leq F_{\kappa}^{\prime}(a) \stackrel{{(2.11)}}{{=}} -\int_{-\pi}^{\pi}\frac{\lambda_{a}^{\prime}}{\kappa}v_{a}m_{a}+V(x)v_{a}m_{a}dx\] \[\leq \int_{-\pi}^{\pi}\left|\frac{\lambda_{a}^{\prime}}{\kappa}+V(x)\right|\left|v_{a}\right|m_{a}dx\] \[\stackrel{{(2.16),(2.18)}}{{\leq}} \int_{-\pi}^{\pi}\left(\frac{C}{\kappa^{\frac{1}{2}}}+\frac{x^{2}}{2}\right)(\tilde{c}_{1}\kappa^{\frac{1}{2}}x^{2}+\tilde{c}_{2})C^{\prime}\kappa^{\frac{1}{4}}e^{-c_{1}\kappa^{\frac{1}{2}}x^{2}}dx\] \[= \int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\left(\frac{C}{\kappa^{\frac{1}{2}}}+\frac{y^{2}}{2\kappa^{\frac{1}{2}}}\right)(\tilde{c}_{1}y^{2}+\tilde{c}_{2})C^{\prime}e^{-c_{1}y^{2}}dy\] \[\leq \frac{C^{\prime\prime}}{\kappa^{\frac{1}{2}}}\int_{-\infty}^{\infty}(1+y^{2})(\tilde{c}_{1}y^{2}+\tilde{c}_{2})e^{-c_{1}y^{2}}dy\leq\frac{C^{\prime\prime\prime}}{\kappa^{\frac{1}{2}}}\]
for all \(a\in[0,1-\delta]\). Now it is sufficient to choose \(\kappa_{0}(\delta)\geq\bar{\kappa}(\delta)\) sufficiently large such that \(C^{\prime\prime\prime}\kappa_{0}^{-1/2}(\delta)<1\). So, the map \(F_{\kappa}\) is a contraction in \([0,1-\delta]\) and we conclude by the Banach-Caccioppoli theorem.
To conclude the proof of Theorem 2.1, we now show that the incoherent solution to the Kuramoto system is isolated when \(\kappa\) is sufficiently large.
**Proposition 2.11**.: _There exists \(\tau_{0}\in(0,1)\) such that for all \(\kappa>4\), there holds for \(\tau\in(0,\tau_{0}]\),_
\[F_{\kappa}^{\prime}\left(1-\frac{\tau}{\kappa}\right)\geq\frac{\kappa}{4}, \qquad\text{so that}\qquad F_{\kappa}\left(1-\frac{\tau}{\kappa}\right)\leq 1- \frac{\tau}{4}<1-\frac{\tau}{\kappa}.\]
_Consequently, \(F_{\kappa}\) in \(\left[1-\frac{\tau_{0}}{\kappa},1+\frac{\tau_{0}}{\kappa}\right]\) admits a unique fixed point which is \(a=1\)._
Proof.: We show that \(F_{\kappa}\) in \(\left[1-\frac{\tau_{0}}{\kappa},1\right]\) admits a unique fixed point which is \(a=1\), and then the fact that the same is true also in the interval \(\left[1,1+\frac{\tau_{0}}{\kappa}\right]\) is a direct consequence of Remark 2.4.
We first estimate \(\lambda_{a}\) as in the proof of Proposition 2.5: let \(\varphi_{a}(x):=\frac{e^{-u_{a}(x)/2}}{\sqrt{\int_{-\pi}^{\pi}e^{-u_{a}(y)}dy}}=\sqrt{m_{a}(x)}\). Then, recalling that \(1-a=\frac{\tau}{\kappa}\), the couple \((\varphi_{a},\lambda_{a})\) solves
\[\begin{cases}-\varphi_{a}^{\prime\prime}(x)+\frac{\tau V(x)}{2}\varphi_{a}(x) =\lambda_{a}\varphi_{a}(x)\quad\text{on }(-\pi,\pi),\\ \varphi_{a}^{\prime}(\pi)=\varphi_{a}^{\prime}(-\pi)=0\\ \int_{-\pi}^{\pi}\varphi_{a}^{2}(x)dx=1.\end{cases} \tag{2.20}\]
So,
\[\lambda_{a}=\inf_{\begin{subarray}{c}\phi\in H^{1}(-\pi,\pi)\\ \int_{-\pi}^{\pi}\phi^{2}=1\end{subarray}}\int_{-\pi}^{\pi}|\phi^{\prime}|^{2} +\frac{\tau V(x)}{2}\phi^{2}dx\leq\inf_{\begin{subarray}{c}\phi\in H^{1}(-\pi, \pi)\\ \int_{-\pi}^{\pi}\phi^{2}=1\end{subarray}}\int_{-\pi}^{\pi}|\phi^{\prime}|^{2} +\frac{\tau x^{2}}{4}\phi^{2}dx\]
which yields
\[0\leq\lambda_{a}\leq C\tau \tag{2.21}\]
as a straightforward consequence (just consider the constant competitor \(\phi=\sqrt{1/2\pi}\)).
We multiply by \(\phi_{a}\) the equation in (2.20) and integrate by parts: we obtain, recalling (2.21),
\[\int_{-\pi}^{\pi}|\phi_{a}^{\prime}|^{2}dx\leq\lambda_{a}\int_{-\pi}^{\pi}| \phi_{a}|^{2}dx=\lambda_{a}\leq C\tau. \tag{2.22}\]
By the mean value theorem there exists \(\xi\in[-\pi,\pi]\) such that \(\frac{1}{2\pi}=\frac{1}{2\pi}\int_{-\pi}^{\pi}|\phi|^{2}dx=\phi^{2}(\xi)\). So we conclude, recalling that \(m_{a}=\phi_{a}^{2}\), for all \(x\in[-\pi,\pi]\) and using (2.22)
\[|m_{a}(x)-\frac{1}{2\pi}|=|m_{a}(x)-m_{a}(\xi)|=\left|\int_{\xi}^{x}2\phi_{a} \phi_{a}^{\prime}dx\right|\leq 2\left[\int_{-\pi}^{\pi}|\phi_{a}|^{2}dx\right]^{1/2} \left[\int_{-\pi}^{\pi}|\phi_{a}^{\prime}|^{2}dx\right]^{1/2}\leq 2\sqrt{C\tau}. \tag{2.23}\]
Again multiplying by \(\phi_{a}^{\prime\prime}\) the equation in (2.20) and integrating by parts, we get by (2.21) (2.22) and the Young inequality that
\[\int_{-\pi}^{\pi}|\phi_{a}^{\prime\prime}|^{2}dx \leq \lambda_{a}\int_{-\pi}^{\pi}|\phi_{a}^{\prime}|^{2}dx+\tau\int_{- \pi}^{\pi}V(x)\phi_{a}\phi_{a}^{\prime\prime}dx\] \[\leq C^{2}\tau^{2}+\frac{1}{2}\int_{-\pi}^{\pi}|\phi_{a}^{\prime \prime}|^{2}dx+\frac{\tau^{2}}{2}\int_{-\pi}^{\pi}\|V\|_{\infty}^{2}|\phi_{a}|^ {2}dx\leq(C^{2}+1)\tau^{2}+\frac{1}{2}\int_{-\pi}^{\pi}|\phi_{a}^{\prime\prime }|^{2}dx.\]
From this, recalling that \(\phi_{a}^{\prime}(\pm\pi)=0\) we conclude for all \(x\in[-\pi,\pi]\),
\[|\phi_{a}^{\prime}(x)|=\left|\int_{-\pi}^{x}\phi_{a}^{\prime\prime}(s)ds\right|\leq\sqrt{2\pi}\left(\int_{-\pi}^{\pi}|\phi_{a}^{\prime\prime}|^{2}dx\right)^{1/2}\leq C\tau.\]
Now we recall that \(m_{a}(x)=e^{-u_{a}(x)}/\int_{-\pi}^{\pi}e^{-u_{a}}dx\), and so for all \(x\in[-\pi,\pi]\) and for \(\tau>0\) sufficiently small such that \(m_{a}\geq 1/16\) (by (2.23)), we conclude
\[|u_{a}^{\prime}(x)|=\frac{|m_{a}^{\prime}(x)|}{m_{a}(x)}=2\frac{|\phi_{a}^{ \prime}(x)|}{\phi_{a}(x)}\leq 8C\tau, \tag{2.24}\]
and then also \(|u_{a}|\leq C\tau\).
By formula (2.9) and by (2.23) we get that
\[|\lambda_{a}^{\prime}+\kappa|=\left|-\kappa\int_{-\pi}^{\pi}V(x) m_{a}(x)dx+\kappa\right|=\left|-\kappa\int_{-\pi}^{\pi}V(x)\left(m_{a}(x)-\frac{1}{ 2\pi}\right)dx\right|\\ \leq\kappa\int_{-\pi}^{\pi}V(x)\left|m_{a}(x)-\frac{1}{2\pi} \right|dx\leq\kappa C\sqrt{\tau}. \tag{2.25}\]
We consider now the function \(v_{a}\) solution to (2.8) with \(v_{a}(0)=0\). Let us write \(v_{a}(x)=\kappa(\cos x-1+z_{a}(x))\) for some function \(z_{a}\). Then it is easy to check that \(z_{a}\) is a solution to
\[-z_{a}^{\prime\prime}+z_{a}^{\prime}u_{a}^{\prime}=\sin x\ u_{a}^{\prime}- \frac{\kappa+\lambda_{a}^{\prime}}{\kappa} \tag{2.26}\]
with periodic boundary conditions and with \(z_{a}(0)=0\). By the gradient estimates on \(u_{a}\) (2.24) and the estimate (2.25), the right hand side of the previous equation is bounded by \(C\sqrt{\tau}\) for some \(C>0\), for \(\tau<1\). It is a straightforward computation (by direct integration, and again by the estimates on \(u_{a}\)) to show that \(|z_{a}^{\prime}(x)|\leq C\sqrt{\tau}\), and \(|z_{a}(x)|\leq C\sqrt{\tau}\), for some \(C>0\).
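For the reader's convenience, here is the computation behind (2.26). Writing \(v_{a}=\kappa(\cos x-1+z_{a})\) we have \(v_{a}^{\prime}=\kappa(-\sin x+z_{a}^{\prime})\) and \(v_{a}^{\prime\prime}=\kappa(-\cos x+z_{a}^{\prime\prime})\); substituting into (2.8) and using \(V(x)=1-\cos x\),
\[\kappa\cos x-\kappa z_{a}^{\prime\prime}+\kappa(-\sin x+z_{a}^{\prime})u_{a}^{\prime}+\lambda_{a}^{\prime}=-\kappa(1-\cos x),\]
and dividing by \(\kappa\) and rearranging gives exactly \(-z_{a}^{\prime\prime}+z_{a}^{\prime}u_{a}^{\prime}=\sin x\ u_{a}^{\prime}-\frac{\kappa+\lambda_{a}^{\prime}}{\kappa}\).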
Recalling formula (2.10) and the previous estimates on \(z_{a}\), \(m_{a}\) we compute \(F_{\kappa}^{\prime}(a)\):
\[F_{\kappa}^{\prime}(a) = \frac{1}{\kappa}\int_{-\pi}^{\pi}(v_{a}^{\prime}(y))^{2}m_{a}(y) dy=\kappa\int_{-\pi}^{\pi}(z_{a}^{\prime}(y)-\sin y)^{2}m_{a}(y)dy\] \[\geq \kappa\left(\frac{1}{2\pi}-C\sqrt{\tau}\right)\int_{-\pi}^{\pi}(z _{a}^{\prime}(y)-\sin y)^{2}dy\geq\kappa\left(\frac{1}{2\pi}-C\sqrt{\tau} \right)(\pi-C\sqrt{\tau}).\]
In particular there exists \(\tau_{0}=\tau_{0}(C)>0\) such that if \(\tau\leq\tau_{0}\), then \(F_{\kappa}^{\prime}(a)=F_{\kappa}^{\prime}\left(1-\frac{\tau}{\kappa}\right) \geq\frac{\kappa}{4}.\) This implies immediately that for all \(\tau\in(0,\tau_{0}]\) there holds, for \(\kappa>4\),
\[F_{\kappa}\left(1-\frac{\tau}{\kappa}\right)\stackrel{{\tau^{ \prime}\in(0,\tau)}}{{=}}F_{\kappa}(1)-\frac{\tau}{\kappa}F_{\kappa}^{\prime} \left(1-\frac{\tau^{\prime}}{\kappa}\right)\stackrel{{\tau^{\prime}< \tau_{0}}}{{\leq}}1-\frac{\tau}{\kappa}\frac{\kappa}{4}=1-\frac{\tau}{4}<1- \frac{\tau}{\kappa}.\]
Using the previous results, we conclude with the proof of the main result of this section.
Proof of Theorem 2.1.: Let \(\delta=\frac{\tau_{0}}{4}\), where \(\tau_{0}\) is as in Proposition 2.11. By Theorem 2.10, there exists \(\kappa_{0}:=\kappa_{0}(\tau_{0})>4\) (using the same notation as in Theorem 2.10) such that for \(\kappa\geq\kappa_{0}\) the map \(F_{\kappa}:[0,1-\frac{\tau_{0}}{4}]\to[0,1-\frac{\tau_{0}}{4}]\) admits a unique fixed point \(\bar{a}\).
By Proposition 2.11, for all \(\kappa\geq\kappa_{0}\) there holds \(F_{\kappa}\left(1-\frac{\tau_{0}}{\kappa}\right)\leq 1-\frac{\tau_{0}}{4}\). Since \(F_{\kappa}\) is a nondecreasing map by (2.10), this implies that \(F_{\kappa}(a)<a\) for all \(a\in\left(1-\frac{\tau_{0}}{4},1-\frac{\tau_{0}}{\kappa}\right]\).
Finally, again by Proposition 2.11, \(F_{\kappa}\) in \(\left[1-\frac{\tau_{0}}{\kappa},1\right]\) admits a unique fixed point which is \(a=1\). This implies that there exists a unique fixed point \(\bar{a}\in[0,1)\). Note that by Remark 2.4, \(2-\bar{a}\) is the unique fixed point in \((1,2]\).
## 3 Local dynamic stability of the self-organizing solution
Let \((\bar{u},\bar{\lambda},\bar{m})\) be the unique stationary even self-organizing solution with \(\int_{-\pi}^{\pi}\cos x\ \bar{m}(x)dx>0\), which has been obtained in the previous section, under the assumption that \(\kappa\geq\kappa_{0}\) (see Theorem 2.1). We show in this section a _local_ exponential stability property of \((\bar{u},\bar{\lambda},\bar{m})\).
We consider the dynamic solutions \((\bar{u},m)\) of (1.5). First of all we observe that if we define
\[u(x,t):=\bar{u}(x,t)-\kappa\int_{0}^{t}\int_{-\pi}^{\pi}\cos y\ m(s,y)dyds \tag{3.1}\]
then \((u,m)\) is a dynamic solution to
\[\begin{cases}-u_{t}-u_{xx}+\frac{1}{2}|u_{x}|^{2}=\kappa V(x)\left[1-\int_{- \pi}^{\pi}V(y)m(t,y)dy\right]\\ m_{t}-m_{xx}-(mu_{x})_{x}=0\\ m_{x}(t,\pi)=m_{x}(t,-\pi)=0\qquad u_{x}(t,\pi)=u_{x}(t,-\pi)=0\\ u(\cdot,t),m(\cdot,t)\text{ are even, }\int_{-\pi}^{\pi}m(x,t)dx=1,m(\cdot,t)\geq 0 \text{ for all }t.\end{cases} \tag{3.2}\]
Note that Neumann boundary conditions at the boundary of \([-\pi,\pi]\) are equivalent to requiring \((u,m)\) to be \(2\pi\)-periodic.
We are going to show that if \((u,m)\) is a solution to (3.2) such that the density and the optimal control \((m,u_{x})\) remain at all times in a suitable neighborhood of the equilibrium density and of the ergodic optimal control \((\bar{m},\bar{u}_{x})\), then \((m,u_{x})\) is going to converge exponentially fast to \((\bar{m},\bar{u}_{x})\) as \(T\) goes to infinity. In particular, this will also imply that the associated solution \((\bar{u},m)\) to (1.5), according to (3.1), satisfies the same exponential stability property.
We introduce the following constant
\[Q:=Q_{\kappa}=\int_{-\pi}^{\pi}\kappa x^{4}\bar{m}(x)dx. \tag{3.3}\]
It is easy to check, by using the upper and lower bounds (2.16) obtained in Corollary 2.8, that for \(\kappa>\kappa_{0}\), \(Q\) can be controlled above and below by some positive constants independent of \(\kappa\) (in fact, depending on \(c_{1}\), \(c_{2}\), \(c_{3}\) in (2.16)).
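For instance, the upper bound on \(Q\) follows from (2.16) and the change of variables \(y=\kappa^{1/4}x\):
\[Q=\kappa\int_{-\pi}^{\pi}x^{4}\bar{m}(x)dx\leq C\kappa^{\frac{5}{4}}\int_{-\pi}^{\pi}x^{4}e^{-c_{1}\kappa^{1/2}x^{2}}dx=C\int_{-\pi\kappa^{1/4}}^{\pi\kappa^{1/4}}y^{4}e^{-c_{1}y^{2}}dy\leq C\int_{-\infty}^{\infty}y^{4}e^{-c_{1}y^{2}}dy,\]
and the lower bound is obtained in the same way from the lower Gaussian bound in (2.16), restricting for instance the integral to \(|y|\leq 1\).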
Our main result reads as follows.
**Theorem 3.1**.: _Let \((u,m)\) be a solution of (3.2). Assume that_
\[0<m(t,x)\leq c\kappa^{\frac{1}{4}}\bar{m}(x) \tag{3.4}\]
_for all \(x\in[-\pi,\pi]\), \(t\in[0,T]\), where \(c\) is specified below (see (3.11)). Then there exist \(\kappa_{1}\geq\kappa_{0}\wedge 1\), and \(C>0\) (independent of \(\kappa,T\)), such that for all \(0\leq t\leq T\) and \(\kappa\geq\kappa_{1}\), there holds_
\[\fint_{t}^{t+C\kappa^{-1/4}}\int_{-\pi}^{\pi}|u_{x}(\tau,x)-\bar{u}_{x}(x)|^{2} \bar{m}+Q\frac{|m(\tau,x)-\bar{m}(x)|^{2}}{\bar{m}(x)}dxd\tau\leq K(e^{-\omega t }+e^{-\omega(T-t)}), \tag{3.5}\]
_where_
\[K=4\sum_{t=0,t=T}\int_{-\pi}^{\pi}|u_{x}(x,t)-\bar{u}_{x}(x)|^{2}\bar{m}(x)+Q\frac {|m(x,t)-\bar{m}(x)|^{2}}{\bar{m}(x)}dx,\qquad\omega=\frac{\log 2}{C}\kappa^{1/4}.\]
_Moreover, for every \(t\in[0,T]\),_
\[\int_{-\pi}^{\pi}\frac{|m(t,x)-\bar{m}(x)|^{2}}{\bar{m}(x)}dx\leq KC^{\prime}( \kappa^{1/4}+1)(e^{-\omega t}+e^{-\omega(T-t)}). \tag{3.6}\]
The constant \(C\) will actually depend on \(C_{P}\) (see (3.9) below) and \(Q\).
**Remark 3.2**.: The way the previous statement quantifies the convergence of \(u_{x}\) to \(\bar{u}_{x}\) is in time average (note that the length of time integration vanishes as \(\kappa\) goes to infinity). One could get an exponential convergence _pointwise_ in time, as it is done for \(m\) to \(\bar{m}\) in (3.6), by coupling (3.5) with suitable estimates on the linearized HJ equation, which are not developed here.
Another way would be to use Lemma A.2 which yields pointwise information in time right away from (3.5), though it involves a (uniform in \(T\)) Lipschitz control on \(t\mapsto\int|u_{x}(t,x)-\bar{u}_{x}(x)|^{2}\bar{m}+\frac{|m(t,x)-\bar{m}(x)|^{2}}{\bar{m}(x)}dx\). Such control can be derived, but it should be quite sensitive to the value of \(\kappa\).
To prove Theorem 3.1, we first rescale the problem as in Remark 2.2; let us consider
\[w(t,x)=u(t\kappa^{-\frac{1}{2}},x\kappa^{-\frac{1}{4}}),\qquad\mu(t,x)=\kappa ^{-\frac{1}{4}}m(t\kappa^{-\frac{1}{2}},x\kappa^{-\frac{1}{4}}),\]
that satisfy in the rescaled space-time cylinder \((0,T\kappa^{\frac{1}{2}})\times[-\pi\kappa^{\frac{1}{4}},\pi\kappa^{\frac{1}{ 4}}]\) the system
\[\begin{cases}-w_{t}-w_{xx}+\frac{1}{2}|w_{x}|^{2}=V_{\kappa}(x)\left[1-\kappa^ {-\frac{1}{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{ \kappa}(y)\mu(y)dy\right]\\ \mu_{t}-\mu_{xx}-(\mu w_{x})_{x}=0\\ \mu_{x}(t,\pi\kappa^{\frac{1}{4}})=\mu_{x}(t,-\pi\kappa^{\frac{1}{4}})=0 \qquad w_{x}(t,\pi\kappa^{\frac{1}{4}})=w_{x}(t,-\pi\kappa^{\frac{1}{4}})=0\\ w(\cdot,t),\mu(\cdot,t)\text{ are even, }\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi \kappa^{\frac{1}{4}}}\mu(x,t)dx=1,\mu(\cdot,t)\geq 0\text{ for all }t.\end{cases} \tag{3.7}\]
Let \(\kappa>\kappa_{0}\) and \((\bar{w},\bar{\mu},\bar{\lambda})\) be the rescaling, according to Remark 2.2, of the unique even self organizing solution with \(\bar{\lambda}>0\) obtained in Theorem 2.1:
\[\bar{w}(x)=\bar{u}(x\kappa^{-\frac{1}{4}}),\qquad\bar{\mu}(x)=\kappa^{-\frac{ 1}{4}}\bar{m}(x\kappa^{-\frac{1}{4}}).\]
We recall that by Proposition 2.7 and Corollary 2.8, we get that for some constants independent of \(\kappa\),
\[c_{1}|x|^{2}-c_{3}\leq\bar{w}(x)\leq c_{2}|x|^{2}+c_{3}\qquad c_{4}e^{-c_{2}x ^{2}}\leq\bar{\mu}(x)\leq c_{5}e^{-c_{1}x^{2}}\qquad x\in[-\kappa^{\frac{1}{4 }}\pi,\kappa^{\frac{1}{4}}\pi]. \tag{3.8}\]
Recall also that we fixed \(\bar{w}(0)=0\) and moreover there holds \(\bar{w}(0)=\min\bar{w}\), by symmetry of \(\bar{w}\). Due to (3.8), the following weighted Poincare inequality holds.
**Theorem 3.3** (Poincare weighted inequality).: _Let \((\bar{w},\bar{\mu},\bar{\lambda})\) as in (3.8) and \(\kappa\geq\kappa_{0}\), as in Theorem 2.1. Then there exist \(\kappa_{1}\geq\kappa_{0}\) and a constant \(C_{P}\) independent of \(\kappa\) such that for all \(\kappa\geq\kappa_{1}\), \(f\in H^{1}_{\bar{\mu}}(-\kappa^{\frac{1}{4}}\pi,\kappa^{\frac{1}{4}}\pi)\) with \(\int_{-\kappa^{\frac{1}{4}}\pi}^{\kappa^{\frac{1}{4}}\pi}f(x)\bar{\mu}(x)dx=0\) there holds_
\[\int_{-\kappa^{\frac{1}{4}}\pi}^{\kappa^{\frac{1}{4}}\pi}f^{2}(x)\bar{\mu}(x) dx\leq C_{P}\int_{-\kappa^{\frac{1}{4}}\pi}^{\kappa^{\frac{1}{4}}\pi}f_{x}^{2}(x) \bar{\mu}(x)dx. \tag{3.9}\]
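As a point of comparison (not needed in the sequel), for an exact Gaussian weight the constant in (3.9) is classical: if \(d\gamma(x)=\sqrt{c/\pi}\,e^{-cx^{2}}dx\) on \(\mathbb{R}\), the Gaussian Poincare inequality states that
\[\int_{\mathbb{R}}\Bigl(f-\int_{\mathbb{R}}f\,d\gamma\Bigr)^{2}d\gamma\leq\frac{1}{2c}\int_{\mathbb{R}}|f^{\prime}|^{2}\,d\gamma.\]
Since the two exponents in the bounds (3.8) for \(\bar{\mu}\) differ in general, a \(\kappa\)-independent constant for \(\bar{\mu}\) itself does not follow directly from the Gaussian case, which is why the Lyapunov-function argument in the appendix is used.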
The proof is reported for completeness in the appendix. Note that the constant \(Q\) introduced in (3.3) coincides with
\[Q=\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}x^{4}\bar{\mu}(x)dx. \tag{3.10}\]
We can now specify \(c\) in the statement of Theorem 3.1
\[c=\sqrt{\frac{C_{P}}{Q}} \tag{3.11}\]
so that (3.4) reads
\[0<\mu(t,x)\leq\kappa^{\frac{1}{4}}\sqrt{\frac{C_{P}}{Q}}\bar{\mu}(x) \tag{3.12}\]
for all \(x\in[-\pi\kappa^{\frac{1}{4}},\pi\kappa^{\frac{1}{4}}]\), \(t\in[0,T\kappa^{\frac{1}{2}}]\).
Let us define \(\zeta(t,x)=\mu(t,x)-\bar{\mu}(x)\) and \(v(t,x)=w(t,x)-\bar{w}(x)-\bar{\lambda}(T-t)\). They are solutions to
\[\begin{cases}-v_{t}-v_{xx}+\frac{1}{2}|v_{x}|^{2}+\bar{w}_{x}v_{x}=-\kappa^{-\frac{1}{2}}V_{\kappa}(x)\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{\kappa}(y)\zeta(t,y)dy\\ \zeta_{t}-\zeta_{xx}-(\zeta\bar{w}_{x})_{x}=(\mu v_{x})_{x}\\ \zeta(0,x)=\kappa^{-\frac{1}{4}}m_{0}(x\kappa^{-\frac{1}{4}})-\bar{\mu}(x),\ \ v(T\kappa^{\frac{1}{2}},x)=u_{T}(x\kappa^{-\frac{1}{4}})-\bar{w}(x)\end{cases} \tag{3.13}\]
with Neumann boundary conditions. Observe that \(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\zeta(t,x)dx=0\) for all \(t\).
The main result that will provide the exponential convergence is the following.
**Proposition 3.4**.: _Let_
\[\Phi(t):=\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x )|v_{x}(t,x)|^{2}dx+\frac{Q}{\kappa^{1/2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi \kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dx.\]
_Let us assume that (3.12) holds. Then there exists \(\kappa_{1}\geq\kappa_{0}\), where \(\kappa_{0}\) is as in Theorem 2.1, such that for all \(0\leq t_{1}<t_{2}\leq T\kappa^{1/2}\) and \(\kappa\geq\kappa_{1}\), there holds_
\[\int_{t_{1}}^{t_{2}}\Phi(t)dt\leq 4\left(C_{P}+\frac{1}{C_{P}}+\frac{1}{Q} \right)\kappa^{1/4}\big{(}\Phi(t_{1})+\Phi(t_{2})\big{)}. \tag{3.14}\]
For the proof of this Proposition we need some lemmata. First of all we have the following result, obtained by duality arguments.
**Lemma 3.5**.: \[\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{ \frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}dxdt -2\kappa^{-\frac{1}{2}}\int_{t_{1}}^{t_{2}}\left(\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{k}(x)\zeta(t,x)dx\right)^{2}\] (3.15) \[\leq C_{P}\kappa^{1/4}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^ {\frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}(t_{1},x)dx+\frac{1}{\kappa^{1/4}}\int_{- \pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t_{1},x)|^{2} }{\bar{\mu}(x)}dx\] \[+C_{P}\kappa^{1/4}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{ \frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}(t_{2},x)dx+\frac{1}{\kappa^{1/4}}\int_{- \pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t_{2},x)|^{2}}{ \bar{\mu}(x)}dx.\]
Proof.: By duality, since \(v,\zeta\) are solutions to (3.13), and recalling that \(\mu>0\), we get
\[\frac{d}{dt}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4} }}v(t,x)\zeta(t,x)dx=-\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}} \frac{\mu(t,x)+\bar{\mu}(x)}{2}v_{x}^{2}(t,x)dx+\kappa^{-\frac{1}{2}}\left(\int _{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{k}(x)\zeta(t,x)dx \right)^{2}\] \[\leq -\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{ \bar{\mu}(x)}{2}v_{x}^{2}(t,x)dx+\kappa^{-\frac{1}{2}}\left(\int_{-\pi\kappa^{ \frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{k}(x)\zeta(t,x)dx\right)^{2}.\]
Integrating in \((t_{1},t_{2})\subseteq(0,T\kappa^{1/2})\) we get
\[\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{ \frac{1}{4}}}\frac{\bar{\mu}(x)}{2}v_{x}^{2}(t,x)dxdt-\kappa^{-\frac{1}{2}} \int_{t_{1}}^{t_{2}}\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1} {4}}}V_{k}(x)\zeta(t,x)dx\right)^{2}\] \[\leq \int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\zeta(t _{1},x)v(t_{1},x)dx-\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}} \zeta(t_{2},x)v(t_{2},x)dx.\]
Let \(t\in[0,T\kappa^{1/2}]\) and \(c(t)=\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v(t,x)\bar{\mu} (x)dx\). Since \(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\zeta(t,x)dx=0\) for all \(t\), there holds, also using Young inequality and the Poincare inequality (3.9):
\[\left|\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}} \zeta(t,x)v(t,x)dx\right| = \left|\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}} \zeta(t,x)(v(t,x)-c(t))dx\right|\] \[\leq \frac{\kappa^{1/4}}{2}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^ {\frac{1}{4}}}\bar{\mu}(x)(v(t,x)-c(t))^{2}dx+\frac{1}{2\kappa^{1/4}}\int_{- \pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar {\mu}(x)}dx\] \[\leq \frac{\mathrm{C}_{p}\kappa^{1/4}}{2}\int_{-\pi\kappa^{\frac{1}{4 }}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}(t,x)dx+\frac{1}{2\kappa^{1 /4}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x )|^{2}}{\bar{\mu}(x)}dx.\]
Substituting this estimate in (3.16) for \(t=t_{1},t_{2}\) we conclude that (3.15) holds.
**Lemma 3.6**.: _For all \(0\leq t_{1}<t_{2}\leq T\kappa^{\frac{1}{2}}\), there holds_
\[\int_{t_{1}}^{t_{2}}\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1} {4}}}V_{k}(x)\zeta(t,x)dx\right)^{2}dt\leq\frac{Q}{4}\int_{t_{1}}^{t_{2}}\int _{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{ \bar{\mu}(x)}dx. \tag{3.17}\]
Proof.: First of all we observe, recalling (2.5), that
\[\int_{t_{1}}^{t_{2}}\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1} {4}}}V_{k}(x)\zeta(t,x)dx\right)^{2}dt\leq\frac{1}{4}\int_{t_{1}}^{t_{2}} \left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}|\zeta(t,x)|x^{2 }dx\right)^{2}dt\]
for some constant \(C\) not depending on \(\kappa\). By Holder inequality and (3.8) we get
\[\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}|\zeta(t,x)|x^{2}dx\right)^{2}\leq\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dx\right)\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}x^{4}\bar{\mu}(x)dx\right).\]
Substituting in the previous inequality we get the conclusion.
**Lemma 3.7**.: _Let us assume that (3.12) holds. For all \(0\leq t_{1}<t_{2}\leq T\kappa^{1/2}\), there holds_
\[\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{ \frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dxdt+\frac{1}{C_{P}}\int_{- \pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t_{2},x)|^{2}}{ \bar{\mu}(x)}dx\\ \leq\frac{\kappa^{\frac{1}{2}}}{Q}\int_{t_{1}}^{t_{2}}\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}^{2}(t,x)\bar{\mu}(x)dxdt+ \frac{1}{C_{P}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{ |\zeta(t_{1},x)|^{2}}{\bar{\mu}(x)}dx. \tag{3.18}\]
Proof.: Note that the equation satisfied by \(\mu\) in (3.7) can be written as
\[\mu_{t}-\mu_{xx}-(\mu\bar{w}_{x})_{x}=(\mu v_{x})_{x},\]
multiply it by \(\frac{\mu(t,x)}{\bar{\mu}(x)}-1=\frac{\zeta(t,x)}{\bar{\mu}(x)}\) and integrate in \([-\pi\kappa^{\frac{1}{4}},\pi\kappa^{\frac{1}{4}}]\)
\[\frac{1}{2}\frac{d}{dt}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa ^{\frac{1}{4}}}\bar{\mu}(x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)^{2}dx +\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}(\mu_{x}(t,x)+\bar{w }_{x}(x)\mu(t,x))\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)_{x}dx\] \[= -\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}( t,x)\mu(t,x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)_{x}dx.\]
Recalling that \(\bar{\mu}(x)=e^{-\bar{w}(x)+c}\), see (2.4), we get \(\bar{\mu}_{x}(x)=-\bar{w}_{x}(x)\bar{\mu}(x)\) and so
\[\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)_{x}=\frac{\mu_{x}(t,x)+\bar{w}_{ x}(x)\mu(t,x)}{\bar{\mu}(x)}.\]
Substituting in the previous equality and using the Young inequality for the right hand side we obtain
\[\frac{1}{2}\frac{d}{dt}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa ^{\frac{1}{4}}}\bar{\mu}(x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)^{2}dx +\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)\left[ \left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)_{x}\right]^{2}dx\] \[= -\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}( t,x)\mu(t,x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)_{x}dx\] \[\leq \frac{1}{2}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{ 4}}}v_{x}^{2}(t,x)\frac{\mu^{2}(t,x)}{\bar{\mu}(x)}dx+\frac{1}{2}\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)\left[\left(\frac{ \mu(t,x)}{\bar{\mu}(x)}-1\right)_{x}\right]^{2}dx.\]
Therefore we get, recalling that by (3.12) \(\mu^{2}(t,x)\leq\kappa^{\frac{1}{2}}\frac{C_{P}}{Q}\bar{\mu}^{2}(x)\),
\[\frac{d}{dt}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu }(x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)^{2}dx+\int_{-\pi\kappa^{\frac {1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)\left(\frac{\mu(t,x)}{\bar{\mu} (x)}-1\right)_{x}^{2}dx\leq\kappa^{\frac{1}{2}}\frac{C_{P}}{Q}\int_{-\pi\kappa^{ \frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}^{2}(t,x)\bar{\mu}(x)dx.\]
By the Poincare inequality, (3.9), since \(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)\left( \frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)dx=0\), we get
\[\frac{d}{dt}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu} (x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)^{2}dx+C_{P}\int_{-\pi\kappa^{ \frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)\left(\frac{\mu(t,x)}{\bar {\mu}(x)}-1\right)^{2}dx\leq\kappa^{\frac{1}{2}}\frac{C_{P}}{Q}\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}^{2}(t,x)\bar{\mu}(x)dx.\]
By integration on \((t_{1},t_{2})\) we get, recalling that \(\bar{\mu}(x)\left(\frac{\mu(t,x)}{\bar{\mu}(x)}-1\right)^{2}=\frac{|\zeta(t,x)|^{ 2}}{\bar{\mu}(x)}\),
\[\frac{1}{C_{P}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1 }{4}}}\frac{|\zeta(t_{2},x)|^{2}}{\bar{\mu}(x)}dx+\int_{t_{1}}^{t_{2}}\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}( x)}dx\\ \leq\frac{\kappa^{\frac{1}{2}}}{Q}\int_{t_{1}}^{t_{2}}\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}^{2}(t,x)\bar{\mu}(x)dx+ \frac{1}{C_{P}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{| \zeta(t_{1},x)|^{2}}{\bar{\mu}(x)}dx.\]
We are ready to prove Proposition 3.4.
Proof of Proposition 3.4.: We rewrite the inequality (3.15) by recalling the definition of \(\Phi(t)\):
\[\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar {\mu}(x)v_{x}^{2}(t,x)dxdt-2\kappa^{-\frac{1}{2}}\int_{t_{1}}^{t_{2}}\left(\int_ {-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{k}(x)\zeta(t,x)dx\right) ^{2}\leq\kappa^{1/4}\left(C_{P}+\frac{1}{Q}\right)\big{(}\Phi(t_{1})+\Phi(t_{2}) \big{)}. \tag{3.19}\]
By (3.17) and (3.18) we obtain
\[-2\kappa^{-\frac{1}{2}}\int_{t_{1}}^{t_{2}}\left(\int_{-\pi\kappa^{\frac{1}{4 }}}^{\pi\kappa^{\frac{1}{4}}}V_{k}(x)\zeta(t,x)dx\right)^{2}\geq-\kappa^{- \frac{1}{2}}\frac{Q}{2}\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{ \pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dx\\ \geq-\frac{1}{2}\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4 }}}^{\pi\kappa^{\frac{1}{4}}}v_{x}^{2}(t,x)\bar{\mu}(x)dxdt-\kappa^{-\frac{1}{ 2}}\frac{Q}{2C_{P}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}} \frac{|\zeta(t_{1},x)|^{2}}{\bar{\mu}(x)}dx.\]
Now, using (3.19), the previous inequality and again (3.18) we get
\[\kappa^{1/4}(C_{P}+1/Q)\big{(}\Phi(t_{1})+\Phi(t_{2})\big{)}\geq \\ \geq\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa ^{\frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}(t,x)dxdt-2\kappa^{-\frac{1}{2}}\int_{t_{1 }}^{t_{2}}\left(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{k }(x)\zeta(t,x)dx\right)^{2}\\ \geq\frac{1}{2}\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}} }^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}(t,x)dxdt-\kappa^{-\frac{1}{2} }\frac{Q}{2C_{P}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}} \frac{|\zeta(t_{1},x)|^{2}}{\bar{\mu}(x)}dx\\ \geq\frac{1}{4}\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}} }^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(x)v_{x}^{2}(t,x)dxdt+\frac{Q}{4\kappa^{1/2 }}\int_{t_{1}}^{t_{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4} }}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dx-\\ -\frac{1}{\kappa^{1/2}}\left(\frac{Q}{4C_{P}}+\frac{Q}{2C_{P}} \right)\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t _{1},x)|^{2}}{\bar{\mu}(x)}dx\]
and so, by the definition of \(\Phi\),
\[\kappa^{1/4}(C_{P}+1/Q)(\Phi(t_{1})+\Phi(t_{2}))\geq\frac{1}{4}\int_{t_{1}}^{t _{2}}\Phi(t)dt-\frac{1}{C_{P}}\Phi(t_{1}).\]
We conclude with the proof of Theorem 3.1.
Proof of Theorem 3.1.: Let \(C=16\left(C_{P}+\frac{1}{C_{P}}+\frac{1}{Q}\right)\). A combination of Proposition 3.4 and Lemma A.1 yields
\[\frac{1}{C\kappa^{1/4}}\int_{t}^{t+C\kappa^{1/4}}\Phi(s)ds\leq 4(e^{-\bar{ \omega}t}+e^{-\bar{\omega}(T\kappa^{1/2}-t)})[\Phi(0)+\Phi(T\kappa^{\frac{1}{2 }})],\qquad\text{where}\ \ \bar{\omega}=\frac{\log 2}{C\kappa^{1/4}}. \tag{3.20}\]
Since
\[\Phi(s)=\frac{1}{\kappa^{1/2}}\int_{-\pi}^{\pi}|u_{x}(x,s\kappa^{-1/2})-\bar{u} _{x}(x)|^{2}\bar{m}(x)+Q\frac{|m(x,s\kappa^{-1/2})-\bar{m}(x)|^{2}}{\bar{m}(x)}dx,\]
we get in (3.20) by performing a change of variables \(\tau=s\kappa^{-1/2}\)
\[\frac{1}{C\kappa^{1/4}}\int_{t}^{t+C\kappa^{1/4}}\frac{1}{\kappa^{1/ 2}}\int_{-\pi}^{\pi}|u_{x}(x,s\kappa^{-1/2})-\bar{u}_{x}(x)|^{2}\bar{m}(x)+Q\frac {|m(x,s\kappa^{-1/2})-\bar{m}(x)|^{2}}{\bar{m}(x)}dxds\] \[= \frac{1}{C\kappa^{1/4}}\int_{t\kappa^{-1/2}}^{t\kappa^{-1/2}+C \kappa^{-1/4}}\int_{-\pi}^{\pi}|u_{x}(x,\tau)-\bar{u}_{x}(x)|^{2}\bar{m}(x)+Q \frac{|m(x,\tau)-\bar{m}(x)|^{2}}{\bar{m}(x)}dxd\tau\] \[\leq (e^{-\bar{\omega}\kappa^{1/2}t\kappa^{-1/2}}+e^{-\bar{\omega} \kappa^{1/2}(T-t\kappa^{-1/2})})\frac{K}{\kappa^{1/2}}.\]
Replacing now \(t\kappa^{-1/2}\) by \(t\) and observing that \(\bar{\omega}\kappa^{1/2}=\omega\), we obtain the first assertion.
Applying now the Mean Value Theorem in (3.20), for every \(t\in[C\kappa^{1/4},T\kappa^{1/2}]\) there exists \(\xi=\xi(t)\in[t-C\kappa^{1/4},t]\) such that
\[\frac{Q}{\kappa^{1/2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(\xi,x)|^{2}}{\bar{\mu}(x)}dx\leq\Phi(\xi)\leq 4(e^{-\bar{\omega}(t-C\kappa^{1/4})}+e^{-\bar{\omega}(T\kappa^{1/2}-t+C\kappa^{1/4})})[\Phi(0)+\Phi(T\kappa^{\frac{1}{2}})].\]
By Lemma 3.7,
\[\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dx\leq\frac{C_{P}\kappa^{\frac{1}{2}}}{Q}\int_{\xi}^{t}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}v_{x}^{2}\bar{\mu}\,dxds+\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(\xi,x)|^{2}}{\bar{\mu}(x)}dx\leq\frac{C_{P}\kappa^{\frac{1}{2}}}{Q}\int_{t-C\kappa^{1/4}}^{t}\Phi(s)ds+\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(\xi,x)|^{2}}{\bar{\mu}(x)}dx,\]
hence using again (3.20) we get
\[\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{|\zeta(t,x)|^{2}}{\bar{\mu}(x)}dx\leq 4\frac{\kappa^{1/2}}{Q}\left(C\kappa^{1/4}+1\right)(e^{-\bar{\omega}(t-C\kappa^{1/4})}+e^{-\bar{\omega}(T\kappa^{1/2}-t+C\kappa^{1/4})})[\Phi(0)+\Phi(T\kappa^{\frac{1}{2}})].\]
Going now back to the original space-time scale we obtain the statement.
## Appendix A Some estimates and a Poincare weighted inequality
**Lemma A.1**.: _Assume that \(\Phi:[0,T]\to[0,\infty)\) satisfies_
\[\int_{t_{1}}^{t_{2}}\Phi(s)ds\leq C[\Phi(t_{1})+\Phi(t_{2})]\]
_for some \(C>0\) and all \(t_{1}<t_{2}\in[0,T]\) (assume also that \(T\geq 8C\)). Then, for all \(t\in[0,T-4C]\),_
\[\frac{1}{4C}\int_{t}^{t+4C}\Phi(s)ds\leq 4(e^{-\omega t}+e^{-\omega(T-t)})[\Phi( 0)+\Phi(T)],\qquad\text{where}\ \ \omega=\frac{\log 2}{4C}.\]
Proof.: Set first \(\Psi(t)=\Phi(t)+\Phi(T-t)\). Then, for any \(t\in[0,T/2]\),
\[\int_{t}^{T-t}\Phi(T-s)ds=\int_{t}^{T-t}\Phi(s)ds\leq C[\Phi(t)+\Phi(T-t)]=C \Psi(t),\]
and therefore
\[\int_{t}^{T-t}\Psi(s)ds\leq 2C\Psi(t).\]
Since
\[\int_{0}^{4C}\Psi(s)ds\leq\int_{0}^{T}\Psi(s)ds\leq 2C\Psi(0),\]
there exists by the Mean Value Theorem a value \(\tau_{1}\in[0,4C]\) such that
\[\Psi(\tau_{1})\leq\frac{1}{2}\Psi(0).\]
Then, since
\[\int_{4C}^{8C}\Psi(s)ds\leq\int_{\tau_{1}}^{T-\tau_{1}}\Psi(s)ds\leq 2C\Psi( \tau_{1})\leq C\Psi(0),\]
there exists by the Mean Value Theorem a value \(\tau_{2}\in[4C,8C]\) such that
\[\Psi(\tau_{2})\leq\frac{1}{4}\Psi(0).\]
We can iterate this procedure to obtain a finite sequence of \(\tau_{n}\in[4(n-1)C,4nC]\) such that \(\Psi(\tau_{n})\leq 2^{-n}\Psi(0)\), until \(4nC\leq T/2\), and
\[\int_{4(n-1)C}^{4nC}\Psi(s)ds\leq\int_{\tau_{n-1}}^{T-\tau_{n-1}}\Psi(s)ds\leq\frac{C}{2^{n-2}}\Psi(0).\]
Let now \(t\in[0,T/2]\) and let \(n\) be such that \(t\in[4(n-1)C,4nC)\). If \(4(n+1)C\leq T/2\), then
\[\int_{t}^{t+4C}\Psi(s)ds\leq\int_{4(n-1)C}^{4nC}\Psi(s)ds+\int_{4nC}^{4(n+1)C} \Psi(s)ds\leq\frac{8C}{2^{n}}\Psi(0)\leq 8Ce^{-\omega t}\Psi(0),\qquad\omega= \frac{\log 2}{4C},\] (A.1)
which yields
\[\int_{t}^{t+4C}\Phi(s)ds\leq 8Ce^{-\omega t}[\Phi(0)+\Phi(T)].\]
If \(4(n+1)C>T/2\) we use that \(\int_{t}^{t+4C}\Phi(s)ds\leq\int_{\tau_{n-1}}^{T-\tau_{n-1}}\Psi(s)ds\), and conclude as before.
For \(t\in[T/2,T-4C]\), we apply (A.1) with \(t\mapsto T-(t+4C)\) to get
\[8Ce^{-\omega(T-t-4C)}[\Phi(0)+\Phi(T)]\geq\int_{T-(t+4C)}^{T-t}\Psi(s)ds\geq\int_{T-(t+4C)}^{T-t}\Phi(T-s)ds=\int_{t}^{t+4C}\Phi(s)ds,\]
which concludes the proof.
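As a quick numerical sanity check of Lemma A.1 (not a proof, and with an arbitrarily chosen test profile), one can verify the conclusion on a function satisfying the hypothesis with \(C=1\), namely \(\Phi(t)=e^{-t}+e^{-(T-t)}\); the following short Python script does this on a grid.

```python
import numpy as np

T, C = 50.0, 1.0                     # test horizon and constant (note T >= 8C)
omega = np.log(2.0) / (4.0 * C)

def Phi(t):
    # This profile satisfies int_{t1}^{t2} Phi <= Phi(t1) + Phi(t2),
    # i.e. the hypothesis of Lemma A.1 with C = 1.
    return np.exp(-t) + np.exp(-(T - t))

def window_average(t, steps=2000):
    s = np.linspace(t, t + 4.0 * C, steps)
    return np.sum(Phi(s)) * (s[1] - s[0]) / (4.0 * C)

worst = 0.0
for t in np.linspace(0.0, T - 4.0 * C, 200):
    lhs = window_average(t)
    rhs = 4.0 * (np.exp(-omega * t) + np.exp(-omega * (T - t))) * (Phi(0.0) + Phi(T))
    worst = max(worst, lhs / rhs)
print("max LHS/RHS on the grid:", worst, "(Lemma A.1 predicts a value <= 1)")
```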
**Lemma A.2**.: _Let \(f:[t_{1},t_{2}]\to[0,\infty)\) be Lipschitz continuous on \([t_{1},t_{2}]\). Then,_
\[f^{2}(t)\leq 2\|f^{\prime}\|_{L^{\infty}(t_{1},t_{2})}(t_{2}-t_{1})\fint_{t_{1}}^{t_{2}}f(s)ds+\left(\fint_{t_{1}}^{t_{2}}f(s)ds\right)^{2}\qquad\forall t\in[t_{1},t_{2}].\]
Proof.: By the Mean Value Theorem, there exists \(\tau\in[t_{1},t_{2}]\) such that \(f(\tau)=\fint_{t_{1}}^{t_{2}}f(s)ds\). Hence,
\[f^{2}(t)=\int_{\tau}^{t}(f^{2})^{\prime}(s)ds+f^{2}(\tau)=2\int _{\tau}^{t}f(s)f^{\prime}(s)ds+\left(\fint_{t_{1}}^{t_{2}}f(s)ds\right)^{2}\\ \leq 2\|f^{\prime}\|_{L^{\infty}(t_{1},t_{2})}\int_{t_{1}}^{t_{2}} f(s)ds+\left(\fint_{t_{1}}^{t_{2}}f(s)ds\right)^{2}.\]
We now conclude with the proof of the Poincare weighted inequality stated in Theorem 3.3.
Proof of Theorem 3.3.: The proof is based on analogous arguments as in [3].
First of all we show the existence of a Lyapunov function, that is \(\phi\in C^{2}(\mathbb{R})\), with \(\phi(0)=1=\min\phi\), and \(c_{1}|x|^{2}\leq\phi(x)\leq c_{2}|x|^{2}+\tilde{c}\) for some \(c_{1},c_{2},\tilde{c}\), which satisfies for \(r>0\),
\[-\phi^{\prime\prime}+\phi^{\prime}\bar{w}_{x}\geq\beta\phi-\gamma\chi_{B(0,r)} \qquad\text{ in }[-\kappa^{\frac{1}{4}}\pi,\kappa^{\frac{1}{4}}\pi]\]
for some constants \(\beta,\gamma>0\) (depending on \(r\)). We are going to choose \(\phi=\bar{w}-\bar{w}(0)+1\).
Using (2.5) and (3.8), we get
\[\kappa^{-\frac{1}{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{\kappa}(y)\bar{\mu}(y)dy\leq\kappa^{-\frac{1}{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{y^{2}}{2}c_{5}e^{-c_{1}y^{2}}dy\leq\frac{1}{2},\]
choosing \(\kappa\geq\kappa_{1}\) with \(\kappa_{1}\) large enough. Using the fact that \(V_{\kappa}(x)\geq\frac{x^{2}}{6}\), that \(\bar{\lambda}\leq\ell\), and the previous estimate, we get
\[-\bar{w}^{\prime\prime}+|\bar{w}^{\prime}|^{2}=-\bar{\lambda}+\frac{|\bar{w}^{\prime}|^{2}}{2}+V_{\kappa}(x)\left[1-\kappa^{-\frac{1}{2}}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}V_{\kappa}(y)\bar{\mu}(y)dy\right]\geq-\ell+\frac{1}{2}V_{\kappa}(x)\geq-\ell+\frac{x^{2}}{12}.\]
Observe that for \(r<|x|<\kappa^{\frac{1}{4}}\pi\), there exists \(\beta=\beta(r)\) for which \(\beta(\bar{w}(x)-\bar{w}(0)+1)\leq\beta(c_{2}x^{2}+c_{3}+1)\leq-\ell+\frac{x^{2}}{12}\). Now for \(|x|\leq r\), it is possible to choose \(\gamma=\gamma(r)>0\) such that \(\gamma\geq\ell-\frac{x^{2}}{12}+\beta(\bar{w}(x)-\bar{w}(0)+1)\).
Now consider \(f\in H^{1}_{\bar{\mu}}(-\kappa^{\frac{1}{4}}\pi,\kappa^{\frac{1}{4}}\pi)\). Recall that \(\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\bar{\mu}(y)dy=1\) and \(\bar{\mu}>0\). First of all we observe that for all \(c\in\mathbb{R}\) there holds
\[\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\left(f(x)-\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}f(y)\bar{\mu}(y)dy\right)^{2}\bar{\mu}(x)dx\leq\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}(f(x)-c)^{2}\bar{\mu}(x)dx.\] (A.2)
Let us fix \(r>0\), \(\beta=\beta(r),\gamma=\gamma(r)\) as in the construction of the Lyapunov function. Let \(c=\int_{-r}^{r}f(y)\bar{\mu}(y)dy\). Then for such choice of \(c\), there holds that
\[\int_{-r}^{r}(f(x)-c)^{2}\bar{\mu}(x)dx\leq C_{r}\int_{-r}^{r}f_{x}^{2}(x)\bar {\mu}(x)dx\]
for the standard Poincare inequality in the ball, with measure \(\bar{\mu}(x)dx\). Using now this inequality, the Lyapunov function, and the fact that \(\bar{\mu}(x)=e^{-\varpi(x)+c}\), we get
\[\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}(f(x)-c) ^{2}\bar{\mu}(x)dx\leq\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4} }}\frac{(f(x)-c)^{2}}{\beta\phi(x)}(-\phi^{\prime\prime}(x)+\phi^{\prime}(x) \bar{w}_{x}(x)+\gamma\chi_{B(0,r)}(x))\bar{\mu}(x)dx\] \[= \int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\frac{(f (x)-c)^{2}}{\beta\phi(x)}(-\phi^{\prime\prime}(x)\bar{\mu}(x)+\phi^{\prime}(x )\bar{w}_{x}(x)\bar{\mu}(x))dx+\int_{-r}^{r}\frac{(f(x)-c)^{2}}{\beta\phi(x)} \gamma\bar{\mu}(x)dx\] \[\leq \int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\left( \frac{(f(x)-c)^{2}}{\beta\phi(x)}\right)_{x}\phi_{x}(x)\bar{\mu}(x)dx+\frac{ \gamma}{\beta}\int_{-r}^{r}(f(x)-c)^{2}\bar{\mu}(x)dx\] \[\leq \int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}\left[2 \frac{(f(x)-c)f_{x}(x)}{\beta\phi(x)}\phi_{x}(x)-\frac{(f(x)-c)^{2}}{\beta \phi^{2}(x)}\phi_{x}^{2}(x)\right]\bar{\mu}(x)dx+\frac{\gamma}{\beta}C_{r}\int_ {-r}^{r}f_{x}^{2}(x)\bar{\mu}(x)dx\] \[\leq \frac{1}{\beta}\int_{-\pi\kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1 }{4}}}f_{x}^{2}(x)\bar{\mu}(x)dx+\frac{\gamma}{\beta}C_{r}\int_{-r}^{r}f_{x}^ {2}(x)\bar{\mu}(x)dx\leq\left(\frac{1+C_{r}\gamma}{\beta}\right)\int_{-\pi \kappa^{\frac{1}{4}}}^{\pi\kappa^{\frac{1}{4}}}f_{x}^{2}(x)\bar{\mu}(x)dx\]
from which we conclude recalling (A.2).
|
2304.10304 | Activity delay patterns in project networks | Delays in activity completion drive human projects to schedule and cost
overruns. It is believed activity delays are the consequence of multiple
idiosyncrasies without specific patterns or rules. Here we show that this is not the
case. Using data for 180 construction project schedules, we demonstrate that
activity delays satisfy a universal model that we call the law of activity
delays. After we correct for delay risk factors, what remains follows a
log-normal distribution. | Alexei Vazquez, Chrysostomos Marasinou, Georgios Kalogridis, Christos Ellinas | 2023-04-20T13:34:07Z | http://arxiv.org/abs/2304.10304v3 | The law of activity delays
## Abstract
Delays in activity completion drive human projects to schedule and cost overruns. It is believed that activity delays are the consequence of multiple idiosyncrasies without specific patterns or rules. Here we show that this is not the case. Using data for 180 construction project schedules, we demonstrate that activity delays satisfy a universal model that we call the law of activity delays. After we correct for delay risk factors, what remains follows a log-normal distribution.
## Introduction
We are planners. We plan our day and our lives. We plan at home and at work. We break down plans into discrete activities and aggregate activities into projects. Projects account for between 20 and 50% of economic activity depending on the country[1, 2]. However, projects rarely progress as planned.
About 75% of construction projects are delayed, with a median delay between 20 and 40% of the project duration depending on the sector[3, 4, 5, 6]. Based on data for eight software engineering projects, 20% of activities are late, with delays ranging from one to hundreds of days[7]. NASA space projects are delayed by 20% of project duration, with maximum schedule growth near 85%[8]. Over 75% of crowdfunded projects deliver the intended product later than planned, with delays of up to ten months[9]. PhD completion takes on average ten months longer than expected, sometimes taking five years extra[10]. The duration of surgical procedures follows a log-normal distribution and often exceeds the surgeon's estimates[11, 12].
Delays in project completion have a negative impact on our society. The project product has an intended economic or social benefit. The materialisation of that benefit will have to wait if the end product is not delivered on time. There is an additional cost for the project owner as well. An increase in project duration is correlated with an increase in the project cost[13, 8, 14].
Understanding the causes of project delays can help us anticipate and mitigate their negative effects. In the 80s, Kahneman and Tversky postulated an optimism bias during the planning phase as a key factor[15, 16]. We tend to underestimate activity durations and, as a consequence, observed activity durations are larger. Kahneman and Tversky proposed looking at past activities of a similar kind as a corrective measure, a methodology known as reference class forecasting. Nowadays, the theory of Kahneman and Tversky is having a renaissance at the Flyvbjerg school of reference class forecasting[17].
The theory of optimism bias does not tell us what the actual cause of delay is. To apply reference class forecasting, we need to specify the properties binding activities into the same class. Surveys, literature mining and statistical analysis of observed data have been used to investigate delay determinants[18, 19, 20, 21, 22]. More recently, methods from machine learning and natural language processing are being used to automate this procedure[23, 24, 25, 26].
Despite this body of work, we do not know what the patterns of activity delays in human projects are. It is not even clear if there are any delay patterns at all, or whether every project is unique. Other areas suggest there are. The statistics of inter-event times between recurrent activity executions are universal[27, 28, 29, 30, 31, 32]. Whether it is letters, emails, phone calls, web access or github commits, the time between two consecutive events has a heavy-tailed distribution [27, 28, 29, 30, 31, 32]. Here we show that project delays follow a universal pattern as well, which we call the law of activity delays.
## Results
### Motivation
The longer an activity takes, the higher the chance that something goes wrong. If the planned execution rate is 1 work unit per day but the actual execution rate is \(r<1\) work units per day, then the activity completion is delayed by \((1-r)\times\)(planned duration). In other words, delays are proportional to activity durations. Other factors may be relevant as well. Because delays are proportional to duration, those other factors should contribute in a multiplicative fashion. If \(y\) denotes the activity delay (the impact), \(f_{1}\) the activity duration and \(f_{2},f_{3},\ldots,f_{n}\) the remaining factors, then \(y=f_{1}f_{2}f_{3}\cdots f_{n}\). This multiplicative equation can be transformed into an additive one by taking the logarithm: \(\log y=\sum_{i}x_{i}\), where \(x_{i}=\log f_{i}\). In practice, we are not aware of all possible delay factors. Suppose we have \(k\) known factors, including duration, while \(u\) factors remain unknown. Grouping together the contributions of known and unknown factors: \(\log y=\sum_{1\leq i\leq k}x_{i}+\Delta x\), where \(\Delta x=\sum_{k<i\leq n}x_{i}\). If the unknown factors are modelled as random variables, then the distribution of their sum can be approximated by the normal distribution \(N(\Delta x;u\mu_{0},u^{1/2}\sigma_{0})\), where \(\mu_{0}\) and \(\sigma_{0}\) are the typical mean and standard deviation of the unknown factors. Setting \(\mu=u\mu_{0}\) and \(\sigma=u^{1/2}\sigma_{0}\) we arrive at the expression \(\log y=\mu+\sum_{1\leq i\leq k}x_{i}+\sigma z\), where \(z\) is a random variable with the standardised normal distribution \(N(z;0,1)\). Here \(\mu\) and \(\sigma\) are the mean and standard deviation of the residual log-delay \(\Delta\log y=\log y-\sum_{1\leq i\leq k}x_{i}\). They parametrise the unknown.
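As a quick numerical illustration of this argument (not part of the study's data analysis), the following Python sketch draws synthetic known and unknown log-factors and checks that the residual log-delay behaves as a normal variable; all numerical values, and the Gaussian choice for the unknown factors, are arbitrary assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_activities = 100_000
k, u = 3, 2                      # numbers of known and unknown factors (illustrative)
mu0, sigma0 = 0.3, 0.6           # typical mean/std of an unknown log-factor (illustrative)

# Known log-factors x_i (e.g. log duration, log dependency counts).
x_known = rng.normal(0.5, 0.4, size=(n_activities, k))
# Unknown log-factors; their sum is approximately normal.
x_unknown = rng.normal(mu0, sigma0, size=(n_activities, u))

log_y = x_known.sum(axis=1) + x_unknown.sum(axis=1)     # log of the delay impact

# Residual log-delay after removing the known contribution.
residual = log_y - x_known.sum(axis=1)
mu, sigma = residual.mean(), residual.std()
print(f"mu = {mu:.3f} (u*mu0 = {u * mu0:.3f}), "
      f"sigma = {sigma:.3f} (sqrt(u)*sigma0 = {np.sqrt(u) * sigma0:.3f})")
```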
### The law of activity delays
We define an activity execution delay as the time difference between its actual duration and its planned duration. Following the standard structure of risk models, activity delays are characterised by a delay likelihood \(p\) and a delay impact \(y\). Based on the motivation given above, we postulate the law of activity delays
\[p(x)=1\,/\left(1+\exp\!\left(g_{0}+g_{1}\textstyle\sum_{1\leq i\leq k}x_{i}\right)\right), \tag{1}\] \[\log y=\mu+\textstyle\sum_{1\leq i\leq k}x_{i}+\sigma z. \tag{2}\]
where \(z\) is a random variable with a standardised normal distribution and the logistic function in equation (1) enforces the requirement that \(0\leq p\leq 1\).
We can solve equation (2) for \(z\) to obtain
\[z=\left(\log y-\mu-\textstyle\sum_{1\leq i\leq k}x_{i}\right)/\sigma.\]
### Unknown factors
The visual inspection of Fig. 1C indicates that the empirical distribution of residuals follows the shape of the standardised normal distribution. However, when we perform the Kolmogorov-Smirnov test of normality, we obtain a p-value close to zero, \(3\cdot 10^{-45}\). Based on that p-value we would reject the null hypothesis that the distribution of residuals is a standardised normal distribution. Therefore, either the postulated law of activity delays is incorrect or we are missing delay factors. We believe it is the latter. As we increase the number of delay factors, the distance between the empirical and expected distributions decreases and the associated p-value increases (Table 1).
Figure 1: **The law of activity delays.** (A-F) Residuals probability density function for different models (line), based on ~120,000 reported delays across 180 schedules. The dashed line is the postulated standardised normal distribution. (G,H) Residual mean and standard deviation of the log delays versus the numbers of delay factors, a point per model. The dashed line is the best linear regression. (I) Kolmogorov-Smirnov distance between the empirical and the normal distribution versus the number of factors.
In the Motivation section we estimated that \(\mu=\mu_{0}u=\mu_{0}(n-k)\), where \(n\), \(k\) and \(u\) are the numbers of total, known and unknown factors determining activity delays. While we do not know \(n\), we can plot the estimated \(\mu\) for each model as a function of the model's number of factors. Using that plot we extrapolate to \(\mu=0\) to obtain an estimate for \(n\). Using this approach we estimate that activity delays are determined by 4 to 5 factors. Given we have uncovered 3 factors (D, I, O), that means we have 1 or 2 unknown factors.
### Activity signatures
Activity names (signatures) contain information about their categories, and categories about the risk of delay. For example, outdoor work is susceptible to weather conditions[35]. From that observation we infer that the term "outdoor work" in activity signatures is indicative of similar delay statistics. We may have other examples in mind, but in general there is no obvious relationship between activity names and delay risk. Artificial intelligence (AI) has proven to be a powerful method to uncover hidden relationships between text signatures and continuous variables[24, 25]. Using AI we constructed a map between the activity signature \(S\) and a delay factor \(x_{S}\), where \(x_{S}\) is the expected log delay given the signature \(S\). The top 5 words indicative of high delay risk are {detailed, piping, cutting, excavation, construction}. The bottom 5 are {revue, redundant, pour, idc, check}, where idc stands for inter discipline check. The activity signature factor was then incorporated into the law of delays (1)-(2).
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Model & \(g_{0}\) & \(g_{1}\) & \(\mu\) & \(\sigma\) & Distance & p-value \\ \hline Null & -0.79 & 0.00 & 3.26 & 1.31 & 0.3057 & 0 \\ \hline D & -1.77 & 0.68 & 2.05 & 1.18 & 0.0337 & 2.3E-103 \\ \hline I & -1.61 & 1.64 & 2.87 & 1.3 & 0.0434 & 5.9E-171 \\ \hline O & -1.74 & 1.81 & 2.81 & 1.3 & 0.0428 & 1.1E-165 \\ \hline S & -2.02 & 0.31 & -0.02 & 1.12 & 0.0234 & 6.7E-050 \\ \hline DI & -2.15 & 0.78 & 1.66 & 1.18 & 0.0275 & 1.3E-068 \\ \hline DO & -2.19 & 0.78 & 1.6 & 1.19 & 0.0266 & 2.1E-064 \\ \hline DS & -2.58 & 0.36 & -1.22 & 1.11 & 0.0071 & 5.1E-005 \\ \hline DIO & -2.46 & 0.78 & 1.22 & 1.19 & 0.0223 & 3.3E-045 \\ \hline DIOS & -3.27 & 0.44 & -2.01 & 1.14 & 0.0044 & 0.035 \\ \hline \end{tabular}
\end{table}
Table 1: **Models fit and performance.** The models are constructed with a combination of delay factors among no factors (Null), activity duration (D), input dependencies (I) and output dependencies (O). We report the Kolmogorov-Smirnov statistic, the distance between the empirical and expected cumulative distribution function, and the associated p-value.
First we tested the delay factor \(x_{S}\) alone. The residual distribution distance to the normal is slightly smaller than when using activity duration (Fig. 1E vs 1B, Table 1), and comparable to the DIO model (Fig. 1E vs 1C). Adding duration (the DS model) decreases the distance even further (Fig. 1E vs D). The final boost comes when we combine all factors in the DIOS model. The Kolmogorov-Smirnov distance between the empirical and the postulated distributions is 5 times smaller for the DIOS model than for the DIO or S models. Furthermore, the Kolmogorov-Smirnov p-value goes up to 0.035, in the range where we cannot reject the null hypothesis that the residual follows a standardised normal distribution.
### Is your project unique?
All projects that we have analysed satisfy the law of activity delays in equations (1)-(2). To test that observation in a systematic manner, we conducted a leave-one-project-out validation using the DIOS model. Given a project, we trained the model on the complement dataset excluding the data for the given project (the training set). That includes training the mapping between activity signatures and delays (the \(x_{S}\) factor). Then we computed the residuals for the given project (the validating set) using the trained model. Figures 2A and B report the mean and standard deviation of the non-scaled residuals, relative to the values obtained when training on the whole dataset. Figure 2C reports the Kolmogorov-Smirnov distance between the empirical distribution of scaled residuals and the normal distribution, with one point for each of the 180 projects in our database, ordered by the number of actualised activities they contain. When there is enough data (>500 delays) there is convergence towards the values for the whole dataset (Fig. 2A-C, symbols spread around the dashed line). That means that the residual delays follow the same statistics in all projects analysed. From this analysis we conclude a project is unique to the extent that it has a unique distribution of delay factors (D, I, O, S) across activities.
Figure 2: **Statistics across projects. (A, B) Mean and standard deviation of the non-scaled residual delay (\(\log y_{j}-\sum_{i=\mathrm{D,I,O,S}}x_{ij}\)) for each specific project (symbols) as a function of the number of delayed activities, normalised to their values for the aggregate dataset. (C) Kolmogorov-Smirnov distance between the distribution of scaled residuals (\(z\)) and the normal distribution. One point for each of the 180 projects in our database. The DIOS model was trained on the whole dataset excluding a given project and the residuals computed on the given project data. The horizontal dashed line represents the DIOS model values when fitted to the whole dataset. The vertical line marks 500 delayed activities.**
## Discussion
In conclusion, data for 180 construction projects validates our law of activity delays (equations 1 and 2). In practical terms, that means the distribution of x-factors across activities determines the statistics of activity delays. This conclusion has a number of key implications:
* Activity delays follow a log-normal distribution with a log mean determined by the delay x-factors.
* Two projects are similar or distinct if their distributions of x-factors across activities are similar or distinct. Your project is unique if it has a unique distribution of x-factors across activities.
* At the activity level, reference class forecasting is valid to the extent it buckets together activities with similar x-factors. At the project level, to the extent it buckets together projects with similar distributions of x-factors across activities.
* Our collection of projects contains different geographical regions, construction companies and periods of time. Those idiosyncrasies are relevant to the extent they contributed to the x-factor distribution across activities.
* The DIOS model, with activity duration, input dependencies, output dependencies and activity signatures, is a validated model for delay risk analysis in the construction industry.
The validity of the law of activity delays remains to be tested in other areas. The delay factors of activity duration, input and output are universal. In contrast, the mapping of activity signatures to delay risk is domain-specific.
## Methods
### Data
Our dataset consists of 180 schedules containing a total of 397,507 finished activities (actualised) and among those 123,909 delayed activities. We define an activity delay event as an instance where the actual duration exceeded the planned duration by more than 2 days. We used the threshold of 2 days rather than 0 days to exclude annotation errors. If delayed, we define the activity delay as the time difference between its actual duration and its planned duration. For duration as a delay factor we used the planned duration. The number of input and output dependencies is extracted from the activity dependencies reported in the schedules. For the numerical calculations, we used log(1+\(D\)), log(1+\(I\)) and log(1+O) instead of log(\(D\)), log(\(I\)) and log(O) to avoid a log(0) value for activities with annotated duration zero, no input or no output dependencies.
### Activity signatures
We trained a neural network with a vocabulary composed of the \(W\)=100 most frequent words in the activity names (signatures) as a predictor of delay. The network was made of 3 layers (input, hidden, output) with \(W\) nodes each. The mapping of activity names to vectors was done with the CountVectorizer class, python package sklearn.feature_extraction.text. Training and predictions were made with the MLPRegressor class, python package sklearn.neural_network.
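A minimal sketch of this signature-to-delay mapping is given below. The activity names, delay values and training settings are illustrative assumptions rather than the study's data or tuned architecture; only the two sklearn classes named above are taken from the text.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPRegressor

# Toy stand-ins for activity signatures and their observed log delays.
names = ["detailed piping excavation", "inter discipline check of drawings",
         "pour concrete slab", "construction of foundation", "check redundant piping"]
log_delays = np.array([3.1, 0.4, 0.7, 2.5, 0.6])

W = 100                                            # vocabulary size described in the text
vectorizer = CountVectorizer(max_features=W)
X = vectorizer.fit_transform(names).toarray()

# One hidden layer of width W; the optimisation settings are illustrative only.
model = MLPRegressor(hidden_layer_sizes=(W,), max_iter=5000, random_state=0)
model.fit(X, log_delays)

# x_S: expected log delay given a new activity's signature.
x_S = model.predict(vectorizer.transform(["piping excavation works"]).toarray())
print(x_S)
```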
### Parameter estimation
(\(g_{0}\), \(g_{1}\)). We define the delay event variable \(e_{j}=1\) if activity \(j\) is delayed and 0 otherwise. We perform logistic regression of \(e_{j}\) vs \(\sum_{1\leq i\leq k}x_{ij}\), where \(x_{ij}\) is the value of x-factor \(i\) for activity \(j\). From this regression, we estimate (\(g_{0}\), \(g_{1}\)) in equation (1). To this end we use the LogisticRegression class of the python module sklearn.linear_model.
(\(\mu\), \(\sigma\)). \(\mu\) and \(\sigma\) are the mean and standard deviation of the non-scaled residuals \(\Delta\log y_{j}=\log y_{j}-\sum_{1\leq i\leq k}x_{ij}\), where the statistics over \(j\) are restricted to delayed activities.
KS test. The one-sample Kolmogorov-Smirnov test was performed with the kstest function, python module scipy.stats.
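The sketch below illustrates these three estimation steps on synthetic data. The generating parameters are arbitrary, and the fitted intercept and coefficient correspond to (\(g_{0}\), \(g_{1}\)) only up to the sign convention of equation (1), since sklearn parameterises the logistic function as \(p=1/(1+\exp(-(g_{0}+g_{1}x)))\).

```python
import numpy as np
from scipy.stats import kstest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Synthetic per-activity sum of x-factors, delay indicators e_j and log delays.
x_sum = rng.normal(1.0, 0.5, size=n)                  # sum_i x_ij for activity j
p_true = 1 / (1 + np.exp(-(-1.5 + 0.8 * x_sum)))      # an assumed delay likelihood
e = rng.random(n) < p_true                            # e_j = 1 if delayed
log_y = x_sum + rng.normal(-2.0, 1.1, size=n)         # log delays (used where e_j = 1)

# (g0, g1): logistic regression of e_j on the summed x-factors.
clf = LogisticRegression().fit(x_sum.reshape(-1, 1), e)
g0, g1 = clf.intercept_[0], clf.coef_[0, 0]           # sklearn's sign convention

# (mu, sigma): mean and std of the non-scaled residuals of delayed activities.
resid = (log_y - x_sum)[e]
mu, sigma = resid.mean(), resid.std()

# KS test of the scaled residuals z against the standardised normal distribution.
z = (resid - mu) / sigma
print(g0, g1, mu, sigma, kstest(z, "norm"))
```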
|
2308.08846 | Two-interface and thin filament approximation in Hele--Shaw channel flow | When a viscous fluid partially fills a Hele--Shaw channel, and is pushed by a
pressure difference, the fluid interface is unstable due to the Saffman--Taylor
instability. We consider the evolution of a fluid region of finite extent,
bounded between two interfaces, in the limit the interfaces are close, that is,
when the fluid region is a thin liquid filament separating two gases of
different pressure. In this limit, we derive a second-order `thin filament'
model that describes the normal velocity of the filament centreline, and
evolution of the filament thickness, as functions of the thickness, centreline
curvature and their derivatives. We show that the second-order terms in this
model, that include the effect of transverse flow along the filament, are
necessary to regularise the instability. Numerical simulation of the thin
filament model is shown to be in accordance with level-set computations of the
complete two-interface model. Solutions ultimately evolve to form a bubble of
rapidly increasing radius and decreasing thickness. | Michael C Dallaston, Michael J W Jackson, Liam C Morrow, Scott W McCue | 2023-08-17T08:10:50Z | http://arxiv.org/abs/2308.08846v3 |
###### Abstract
For a viscous fluid trapped in a Hele-Shaw channel, and pushed by a pressure difference, the fluid interface is unstable due to the Saffman-Taylor instability. We consider the evolution of a fluid region of finite extent, bounded between two interfaces, in the limit the interfaces are close, that is, when the fluid region is a thin liquid filament separating two gases of different pressure. In this limit, we derive a geometric flow rule that describes the normal velocity of the filament centreline, and evolution of the filament thickness, as functions of the thickness and centreline curvature. We show that transverse flow along the filament is necessary to regularise the instability. Numerical simulation of the thin filament flow rule is shown to closely match level-set computations of the complete two-interface model, and solutions ultimately evolve to form a bubble of increasing radius and decreasing thickness.
Hele-Shaw flows, Lubrication theory, Liquid bridges
## 1 Introduction
For a standard model of Hele-Shaw flow in a rectilinear channel, consisting of a semi-infinite inviscid fluid region and a semi-infinite viscous fluid region separated by a single interface, the interface exhibits the Saffman-Taylor instability when the inviscid fluid displaces the viscous, and is stable when the viscous fluid displaces the inviscid (Saffman & Taylor 1958). In the absence of surface tension, exact solution methods exist that exhibit either finite time cusp or finger formation (Howison 1986\(a\),\(b\)); however, the problem is ill-posed. Surface tension is needed to regularise the problem by stabilizing sufficiently large wavenumber perturbations. At long times, solutions tend to a travelling wave solution known as the Saffman-Taylor finger, with width dependent on the surface tension, tending to half the channel width in the limit that the surface tension goes to zero (McLean & Saffman 1981).
The traditional Saffman-Taylor instability is studied on the assumption that the viscous fluid region extends infinitely far along the channel, so that there is only a single interface to consider. However, in reality the fluid region will only have finite extent, so that there are in fact two interfaces: one in which the viscous fluid is being displaced by the driving inviscid fluid, and one in which the viscous fluid is the one advancing (see Fig. 1). In this case, the force driving the fluid region is the pressure difference between the two inviscid fluids on either end of the fluid region. In the absence of surface tension, some classes of exact solutions
to the two-interface problem have been found through use of special functions (Crowdy & Tanveer 2004; Feigenbaum _et al._ 2001). The exact solutions in these studies, however, do not exhibit the interfaces becoming closely separated. Farmer & Howison (2006) consider an approximate model where the two interfaces in an unbounded Hele-Shaw cell are very close, resulting in a thin filament of viscous liquid, and construct exact solutions to this approximate model (see section 3.3). All of these zero-surface-tension models are ill-posed.
The two-interface Hele-Shaw model in the limit that the interfaces are close is of great importance as even if the fluid region is not initially thin, the effect of the Saffman-Taylor instability will result in a thin fluid region or filament developing (see for example the experimental results in Ward & White (2011) and Morrow _et al._ (2023)). This formation of a thin filament precedes the fluid 'bursting', at which point the two inviscid regions meet and the pressure rapidly equalises. The breakup of a thin viscous filament (with surface tension but in the absence of a driving pressure difference) has also been examined by the use of the lubrication approximation (Almgren 1996; Almgren _et al._ 1996; Constantin _et al._ 1993; Dupont _et al._ 1993; Goldstein _et al._ 1993, 1995, 1998). The complicated, but self-similar break-up behaviour of the filament in particular (where the filament thickness goes to zero at a finite time and point in space) is detailed in Almgren _et al._ (1996).
In this article we consider two-interface Hele-Shaw flow in a channel, including the effects of both driving pressure difference and surface tension, with particular focus on the case in which the fluid interfaces become close together. In section 2 we describe the full two-interface model and its stability. In section 3 we derive an approximation (the thin filament approximation) that applies when the two interfaces are close together. This approximation represents a regularised version of the model by Farmer & Howison (2006). We show that it is important to include the effect of surface tension both on the velocity of the filament as well as the flow along the filament in the tangent direction. We also find quasi-travelling wave solutions, which may be thought of as the analogue of Saffman-Taylor fingers. In section 4 we compute numerical solutions of the thin filament model, which compare well with solutions of the original two-interface model that are found using a level set method. We show the general behaviour of our thin filament model is not to tend toward one of these quasi-travelling wave solutions, but instead develop a rapidly expanding 'bubble' of circular shape and decreasing thickness.
## 2 Formulation of the two-interface Hele-Shaw flow problem
### Hele-Shaw flow equations
We consider a Hele-Shaw channel of nondimensional width \(2\pi\) in the \(x\)-direction. The fluid region \(\Omega\) is bounded above and below by interfaces \(\partial\Omega_{U}\) and \(\partial\Omega_{L}\), respectively. A nondimensional pressure difference \(P\) acts to push the fluid region in the positive \(y\)-direction, while surface tension acts on both interfaces (see Fig. 1). The standard governing equations in nondimensional form are
\[\nabla^{2}\phi=0,\qquad\boldsymbol{x}\in\Omega, \tag{2.1a}\] \[v_{n}=\frac{\partial\phi}{\partial n},\qquad\boldsymbol{x}\in\partial\Omega_{L},\partial\Omega_{U}, \tag{2.1b}\] \[\phi=-P-\sigma\kappa_{L},\qquad\boldsymbol{x}\in\partial\Omega_{L}, \tag{2.1c}\] \[\phi=\sigma\kappa_{U},\qquad\boldsymbol{x}\in\partial\Omega_{U}, \tag{2.1d}\]
with \(\phi\) the velocity potential, \(v_{n}\) the normal velocity of each interface, \(\kappa_{U}\) and \(\kappa_{L}\) the curvatures of each interface, \(P\) the nondimensional pressure difference, and \(\sigma\) the nondimensional surface tension. The normal vector \(\boldsymbol{\hat{n}}\) of each interface is defined so that a flat interface
has normal in the positive \(y\)-direction, and the signs of curvatures \(\kappa\) are chosen such that positive \(\kappa\) implies a concave interface in the positive \(y\)-direction (for example, both interfaces in Fig. 1 have negative curvature at \(x=0\) and positive curvature at \(x=\pm\pi\)). This choice of sign convention for both interfaces will simplify the thin filament derivation in the next section. While one of the parameters \(P\) or \(\sigma\) (if nonzero) could be scaled out by choosing an appropriate time scale, retaining both parameters aids in understanding the effects of the terms that arise in our later analysis.
### Linear stability
The two-interface Hele-Shaw configuration has an exact base state where both interfaces are horizontal, and the fluid region moves upward at constant velocity \(v_{n}=P/h_{0}\), where \(h_{0}\) is the distance between the two interfaces. To examine the stability of this configuration, write the system (2.1) in Cartesian \((x,y)\) coordinates, and define the upper and lower interfaces as \(y=f_{U}(x,t)\) and \(y=f_{L}(x,t)\), respectively. In addition to Laplace's equation for \(\phi\), the boundary conditions (2.1b)-(2.1d) are
\[\phi_{y}-\phi_{x}(f_{L})_{x}=(f_{L})_{t}, \phi=-P-\sigma\frac{(f_{L})_{xx}}{(1+(f_{L})_{x}^{2})^{3/2}}, \qquad y=f_{L}(x,t),\] \[\phi_{y}-\phi_{x}(f_{U})_{x}=(f_{U})_{t}, \phi=\sigma\frac{(f_{U})_{xx}}{(1+(f_{U})_{x}^{2})^{3/2}}, \qquad y=f_{U}(x,t).\]
The base state is (up to arbitrary translation in \(y\)) represented by \(f_{L}=(P/h_{0})t-h_{0}\), \(f_{U}=(P/h_{0})t\), and \(\phi=(P/h_{0})\hat{y}\), where \(\hat{y}=(y-Pt/h_{0})\) is the coordinate in the travelling frame.
To examine linear stability then, we impose perturbations on \(f_{L}\), \(f_{U}\):
\[f_{L}(x,t) =\frac{P}{h_{0}}t-h_{0}+f_{L1}(x,t),\] \[f_{U}(x,t) =\frac{P}{h_{0}}t+f_{U1}(x,t)\] \[\phi(x,y,t) =\frac{P}{h_{0}}\hat{y}+\phi_{1}(x,\hat{y},t).\]
Figure 1: A finite fluid region in a Hele–Shaw channel. The fluid is driven in the positive \(y\) direction by a pressure difference.
On substitution into the boundary conditions,
\[f_{L1t}=\phi_{1y},\qquad\phi_{1}=-\sigma f_{L1xx}-\frac{P}{h_{0}}f_{L1},\qquad \hat{y}=-h_{0}\]
\[f_{U1t}=\phi_{1y},\qquad\phi_{1}=\sigma f_{U1xx}-\frac{P}{h_{0}}f_{U1},\qquad \hat{y}=0\]
A perturbation with wavenumber \(k\) in the \(x\)-direction takes the form
\[f_{1U}=A\cos(kx)\mathrm{e}^{\lambda t},\qquad f_{1L}=B\cos(kx)\mathrm{e}^{ \lambda t},\qquad\phi_{1}=(c_{1}\mathrm{e}^{k\hat{y}}+c_{2}\mathrm{e}^{-k\hat{y }})\cos(kx)\mathrm{e}^{\lambda t},\]
where the pressure boundary conditions give \(c_{1}\) and \(c_{2}\) in terms of the interfacial amplitudes \(A\) and \(B\), and the kinematic conditions result in an eigenvalue problem for \(\lambda\):
\[\lambda\begin{bmatrix}A\\ B\end{bmatrix}=\frac{k}{\sinh(kh_{0})}\begin{bmatrix}-\cosh(kh_{0})(\sigma k^ {2}+P/h_{0})&(-\sigma k^{2}+P/h_{0})\\ -(\sigma k^{2}+P/h_{0})&\cosh(kh_{0})(-\sigma k^{2}+P/h_{0})\end{bmatrix} \begin{bmatrix}A\\ B\end{bmatrix}. \tag{2.2}\]
The eigenvalues of this system are thus
\[\lambda=-\sigma k^{3}\coth(kh_{0})\pm k\sqrt{\left(\frac{P}{h_{0}}\right)^{2}+ \frac{\sigma^{2}k^{4}}{\sinh^{2}(kh_{0})}}. \tag{2.3}\]
The growth rate curves (\(\lambda=\lambda(k)\)) are shown in Fig. 2 (solid line) for parameter values \(\sigma=0.1,h_{0}=0.2\). One eigenvalue is negative for all wavenumbers \(k\), while the other is positive for a finite band of wavenumbers that ranges from zero up to a critical wavenumber. The presence of surface tension regularises the system by stabilizing the wavenumbers for large \(k\). We emphasise that the system is unstable no matter the direction of the pressure gradient (regardless of the sign of \(P\)), as each direction involves one of the interfaces moving in the unstable direction according to the Saffman-Taylor instability.
Figure 2: Growth rate \(\lambda\) of perturbations of given wavenumber \(k\) for the two interface Hele–Shaw model (2.3), the filament model (3.6), and the filament model in the absence of transverse flow (3.15). Each model has the parameter values \(P=1\), \(h_{0}=0.2\), and \(\sigma=0.1\). All systems have two eigenvalues for each wavenumber. For the Hele–Shaw model and the filament model, the most unstable eigenvalue is stabilised at a cut-off wavenumber \(k\), and the two models agree closely. The filament model that lacks transverse flow is unstable for all wavenumbers, with the positive eigenvalue tending to a constant as \(k\to\infty\).
Our linear stability analysis here for a finite viscous fluid evolving in a Hele-Shaw channel is analogous to that presented for a radial geometry with two interfaces (Morrow _et al._, 2023). Generalisations and alterations to include viscous fluids on either side of the interfaces have been conducted for both linear and weakly nonlinear frameworks (Gin & Daripa, 2015; Anjos & Li, 2020).
We close this section by noting some limiting behaviour of (2.3). For large \(h_{0}\), in order to keep the speed of the base state \(O(1)\), we need to keep the driving pressure difference \(P=O(h_{0})\). In that case, \(\lambda\sim-\sigma k^{3}\pm kP/h_{0}\) as \(h_{0}\to\infty\). This limit agrees with the well-studied single interface problem (an infinite body of viscous fluid), with the plus (minus) sign associated with the unstable (stable) direction of flow. Of a more particular interest here is the other limit of \(h_{0}\ll 1\). Again, supposing that \(P=O(h_{0})\) in order to keep the interface speed \(O(1)\), we have
\[\lambda\sim\left[-\sigma\left(\frac{k}{h_{0}}\right)^{2}\pm\left(\frac{k}{h_{0}}\right)\sqrt{\sigma^{2}\left(\frac{k}{h_{0}}\right)^{2}+\left(\frac{P}{h_{0}}\right)^{2}}\right]h_{0}\quad\text{as}\quad h_{0}\to 0,\quad k=O(h_{0}), \tag{2.4}\]
\[\lambda\sim\left[\frac{1}{2\sigma}\left(\frac{P}{h_{0}}\right)^{2}-\frac{\sigma k^{4}}{2}\right]h_{0},\quad-2\sigma\left(\frac{k}{h_{0}}\right)^{2}\,h_{0}\quad\text{as}\quad h_{0}\to 0,\quad k=O(1), \tag{2.5}\]
\[\lambda\sim-\frac{\sigma(\cosh(k\,h_{0})\mp 1)}{\sinh(k\,h_{0})}\frac{1}{h_{0} ^{3}}\quad\text{as}\quad h_{0}\to 0,\quad k=O(1/h_{0}), \tag{2.6}\]
where for (2.5) \(k=O(1)\) means strictly of order one, while for (2.6) \(k=O(1/h_{0})\) means strictly of order \(1/h_{0}\).
## 3 Thin filament approximation
In this section we derive and analyse an approximation of the two-interface flow by considering the thickness of the fluid region (that is, the distance between the two interfaces) to be small. We refer to this as the 'thin filament' approximation. This assumption will lead to a geometric flow rule, that is, a rule for the normal velocity of the filament, as well as for the evolution of the thickness along the filament. This approximation is valuable because, even if the fluid region is not initially thin, the instability may cause it to become thin over time; indeed, our numerical results in section 4 indicate that this is generally the case.
### Derivation
Instead of using a Cartesian coordinate system, we describe the fluid region using the position of its centreline \(\mathbf{x}_{0}(s,t)\), as a function of arclength parameter \(s\) and time \(t\), and the thickness in the normal direction, \(h(s,t)\) (see Fig. 3). As well as being a natural way to describe a filament geometrically, this choice will allow for interfaces that become multivalued functions of \(x\). At a given time define a locally orthogonal system \((s,n)\) with \(s\) the arclength parameter along the centreline, and \(n\) the normal coordinate (see Fig. 3). The unit tangent and normal to the centreline are \(\mathbf{\hat{t}}\) and \(\mathbf{\hat{n}}\), respectively. Coordinates of the lower and upper interfaces are then, respectively,
\[\mathbf{x}_{L}=\mathbf{x}_{0}-\frac{h}{2}\mathbf{\hat{n}},\qquad\mathbf{x}_{U}=\mathbf{x}_{0}+ \frac{h}{2}\mathbf{\hat{n}}.\]
On assuming small \(h\), Laplace's equation for velocity potential \(\phi\) (2.1\(a\)) implies that \(\phi\) is approximately linear in \(n\), that is,
\[\phi=v_{n}(s,t)n+c(s,t) \tag{3.1}\]
where \(v_{n}\) is the normal velocity of the centreline (as well as each interface to \(O(h)\)), and both \(v_{n}\) and \(c\) must be determined from the pressure boundary conditions on each interface (2.1c), (2.1d).
In order to apply these boundary conditions, we must find expressions for the curvatures of each interface in terms of the centreline curvature \(\kappa\) and the thickness \(h\). By differentiating the expression for the coordinate on \(\mathbf{x}_{U}\) with respect to the centreline arclength parameter \(s\), we compute a tangent vector on the upper interface:
\[\mathbf{t}_{U}=\frac{\mathrm{d}\mathbf{x}_{U}}{\mathrm{d}s}=\left(1-\frac{h}{2} \kappa\right)\mathbf{\hat{t}}+\frac{h_{s}}{2}\mathbf{\hat{n}},\]
where \(\kappa\) is the curvature of the centreline, and we have used the definition of the tangent as well as the Frenet formulas
\[\frac{\mathrm{d}\mathbf{x}_{0}}{\mathrm{d}s}=\mathbf{\hat{t}},\qquad\frac{\mathrm{d} \mathbf{\hat{t}}}{\mathrm{d}s}=\kappa\mathbf{\hat{n}},\qquad\frac{\mathrm{d}\mathbf{\hat{ n}}}{\mathrm{d}s}=-\kappa\mathbf{\hat{t}}.\]
This tangent vector \(\mathbf{t}_{U}\) is not of unit length since \(s\) is only the arclength on the centreline. Let \(s_{U}\) be the arclength coordinate on the upper interface; then neglecting terms quadratically small in \(h\) and \(h_{s}\):
\[\frac{\mathrm{d}s_{U}}{\mathrm{d}s}=\left|\mathbf{t}_{U}\right|=(1-h\kappa/2)+O(h ^{2}),\]
thus the unit tangent is
\[\mathbf{\hat{t}}_{U}=\mathbf{\hat{t}}+\frac{h_{s}}{2}\mathbf{\hat{n}}+O(h^{2}),\]
and the curvature of the upper interface is
\[\kappa_{U}=\frac{\mathrm{d}\mathbf{\hat{t}}_{U}}{\mathrm{d}s_{U}}\cdot\mathbf{\hat{n}}_{U}=\left(1+\frac{h\kappa}{2}\right)\left(\kappa+\frac{h_{ss}}{2}\right)+O(h^{2})=\kappa+\frac{h\kappa^{2}}{2}+\frac{h_{ss}}{2}+O(h^{2}).\]
Similarly, we find the expression for the lower curvature:
\[\kappa_{L}=\kappa-\frac{h\kappa^{2}}{2}-\frac{h_{ss}}{2}+O(h^{2}).\]
Figure 3: The coordinate system used to derive the thin filament approximation for the normal velocity \(v_{n}\), (3.2), and thickness along the filament \(h\) (3.4).
Applying the boundary conditions (2.1c), (2.1d) at \(n=\pm h/2\) we therefore find
\[v_{n}=\frac{1}{h}\left(P+2\sigma\kappa\right), \tag{3.2}\]
and \(c=-P/2+\sigma(h\kappa^{2}+h_{ss})/2\). The presence of the term \(c\) allows for fluid flow in the tangent direction due to a tangential pressure gradient; the tangent flow rate \(q\) is given by
\[q=\int_{-h/2}^{h/2}\phi_{s}\;\mathrm{d}n=hc_{s}=\frac{\sigma}{2}(h(h\kappa^{2})_{s}+hh_{sss}). \tag{3.3}\]
We now must find an evolution equation for \(h\), which will include both the effects of compression/dilation (arclength \(s\) is not constant over time) as well as the flow due to the tangential pressure gradient. Using a Lagrangian coordinate \(\eta\) (that moves with \(v_{n}\hat{\boldsymbol{n}}\)) and considering conservation of mass over a small step \(\delta t\), we have
\[\frac{h\delta\theta}{\kappa}=(h+\delta h)\left(\frac{1}{\kappa}+v_{n}\delta t \right)\delta\theta+q\delta t-(q+\delta q)\delta t.\]
Simplifying and taking the limit,
\[\frac{Dh}{Dt}=\kappa hv_{n}-\kappa\frac{\partial q}{\partial\theta}=\kappa hv _{n}-\frac{\partial q}{\partial s}.\]
Here \(Dh/Dt\) means the time derivative holding constant the coordinate \(\eta\) that moves with the normal velocity. Substituting in the expressions (3.2), (3.3) for \(v_{n}\) and \(q\), respectively:
\[\frac{Dh}{Dt}=\kappa\left(P+2\sigma\kappa\right)-\frac{\sigma}{2}\frac{\partial}{\partial s}\left(h(h\kappa^{2})_{s}+hh_{sss}\right). \tag{3.4}\]
Equations (3.2) and (3.4) now define the motion of the filament centreline, and the evolution of the thickness on that centreline, respectively.
Our thin filament model above generalises two previously used models in the literature. In the absence of surface tension (\(\sigma=0\)), (3.2), (3.4) reduce to the problem introduced in Farmer & Howison (2006), which we describe further below in section 3.3. On the other hand, if surface tension is present but the centreline is straight (\(\kappa=0\)), then (3.4) would simplify to the well known equation for thin film evolution in a Hele-Shaw cell
\[h_{t}+\frac{\sigma}{2}(hh_{sss})_{s}=0, \tag{3.5}\]
which has been studied extensively in the context of droplet breakup in Hele-Shaw flow (Almgren 1996; Almgren _et al._ 1996; Constantin _et al._ 1993; Dupont _et al._ 1993; Goldstein _et al._ 1993, 1995, 1998).
### Stability
As with the full two-interface model (2.1), the thin filament approximation has an exact solution comprising a straight filament of uniform thickness \(h_{0}\) moving upward with speed \(P/h_{0}\). To test the stability of this straight filament, we write the centreline in Cartesian coordinates as \(y=f(x,t)\), and perturb the straight filament:
\[h(x,t)=h_{0}+\tilde{h}\mathrm{e}^{\mathrm{i}kx+\lambda t},\qquad f(x,t)=\frac{P}{h_{0}}t+\tilde{f}\mathrm{e}^{\mathrm{i}kx+\lambda t}.\]
In the linear approximation, \(D/Dt=\partial/\partial t+O\left(f_{x}^{2}\right)\), \(s=x+O\left(f_{x}^{2}\right)\) and \(\kappa=f_{xx}+O\left(f_{x}^{2}\right)\). On substituting these expansions into the two equations (3.2) and (3.4) we obtain the eigenvalue
problem
\[\lambda\begin{bmatrix}\tilde{f}\\ \tilde{h}\end{bmatrix}=\begin{bmatrix}-2\sigma k^{2}/h_{0}&-P/h_{0}^{2}\\ -Pk^{2}&-\sigma h_{0}k^{4}/2\end{bmatrix}\begin{bmatrix}\tilde{f}\\ \tilde{h}\end{bmatrix}.\]
The eigenvalues are thus
\[\lambda=-\left(\frac{\sigma k^{2}}{h_{0}}+\frac{\sigma h_{0}k^{4}}{4}\right) \pm\sqrt{\left(\frac{\sigma k^{2}}{h_{0}}+\frac{\sigma h_{0}k^{4}}{4}\right)^ {2}+\frac{P^{2}k^{2}}{h_{0}^{2}}-\sigma^{2}k^{6}}. \tag{3.6}\]
Note (3.6) has the same limiting behaviours (2.4) and (2.5) as the full model, but differs slightly from (2.6). In the latter case, both (2.3) and (3.6) give \(\lambda=O(1/h_{0}^{3})\) as \(h_{0}\to 0\) for \(k=O(1/h_{0})\), but with a different prefactor. Thus, our thin filament approximation recovers the same leading-order linear stability behaviour as the full model for small and moderate wave numbers, which is all we can expect from such a lubrication model. For very large wave numbers (i.e., very small wavelengths of perturbation), with \(k=O(1/h_{0})\) or larger, while the scalings may differ, these modes decay very quickly and therefore any differences in the models are of no practical consequence.
In Fig. 2 we compare the eigenvalues of the thin filament approximation (3.6) against those of the full problem (2.3). For even moderately small thickness (\(h_{0}=0.2\)), the agreement is excellent.
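The two growth-rate formulas are straightforward to evaluate directly. The following Python sketch (not part of the paper's code) compares the most unstable branches of (2.3) and (3.6) for the parameter values quoted in the caption of Fig. 2; the wavenumber range is an arbitrary choice.

```python
import numpy as np

P, sigma, h0 = 1.0, 0.1, 0.2                 # parameter values of Fig. 2
k = np.linspace(0.01, 12.0, 2000)

# Most unstable branch of the full two-interface relation, eq. (2.3).
lam_full = (-sigma * k**3 / np.tanh(k * h0)
            + k * np.sqrt((P / h0)**2 + sigma**2 * k**4 / np.sinh(k * h0)**2))

# Most unstable branch of the thin filament approximation, eq. (3.6).
a = sigma * k**2 / h0 + sigma * h0 * k**4 / 4
lam_thin = -a + np.sqrt(a**2 + P**2 * k**2 / h0**2 - sigma**2 * k**6)

# Cut-off wavenumbers beyond which the perturbations are stabilised.
print("cut-off (full model):    k =", k[np.argmax(lam_full < 0)])
print("cut-off (thin filament): k =", k[np.argmax(lam_thin < 0)])
```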
### The unregularised case
In the case \(\sigma=0\), the filament model reduces to that considered by Farmer & Howison (2006). In this case the problem is ill-posed; the eigenvalues (3.6) are \(\lambda=\pm(P/h_{0})k\), so one eigenvalue is arbitrarily large as \(k\to\infty\). Solutions that exhibit this ill-posedness may be constructed by showing that the centreline is given by level curves of a harmonic function. We summarise and further examine this approach here.
Assuming the filament centreline \((x(\eta,t),y(\eta,t))\) and the thickness \(h(\eta,t)\) are parametrised by a Lagrangian coordinate \(\eta\), then
\[\frac{\partial x}{\partial t}=-\frac{Py_{\eta}}{h\sqrt{x_{\eta}^{2}+y_{\eta}^ {2}}},\qquad\frac{\partial y}{\partial t}=\frac{Px_{\eta}}{h\sqrt{x_{\eta}^{2 }+y_{\eta}^{2}}}\]
For \(\sigma=0\), the evolution of \(h\) (3.4) is equivalent to conservation of mass between a point \(\eta\) and reference point \(\eta_{0}\). That is, we may define an area function \(A(\eta)\), such that
\[A(\eta)=\int_{\eta_{0}}^{\eta}h(\bar{\eta},0)\,\mathrm{d}\bar{\eta}=\int_{ \eta_{0}}^{\eta}h(\bar{\eta},t)\sqrt{x_{\eta}^{2}+y_{\eta}^{2}}\,\mathrm{d} \bar{\eta} \tag{3.7}\]
is constant in time on a point on the centreline moving with its normal velocity. Choosing to scale time such that \(P=1\), we arrive at the following:
\[\frac{\partial x}{\partial t}=-\frac{\partial y}{\partial A},\qquad\frac{ \partial y}{\partial t}=\frac{\partial x}{\partial A}. \tag{3.8}\]
These are Cauchy-Riemann equations relating \((x,y)\) to \((A,t)\). This line of argument was used by Farmer & Howison (2006) to demonstrate that \(w=A+\mathrm{i}t\) must be an analytic function of the complex spatial variable \(z=x+\mathrm{i}y\); thus, for a given time \(t\), the centreline is the level curve of the harmonic function \(t(x,y)\).
Given the definition of \(A\) (3.7), the thickness \(h\) may be calculated by:
\[h=\frac{A_{x}x_{\eta}+A_{y}y_{\eta}}{\sqrt{x_{\eta}^{2}+y_{\eta}^{2}}}=\frac{A _{x}t_{y}-A_{y}t_{x}}{\sqrt{t_{x}^{2}+t_{y}^{2}}}=\sqrt{t_{x}^{2}+t_{y}^{2}}= \left|w^{\prime}(z)\right|. \tag{3.9}\]
This thickness will go to zero at a critical point \(z_{c}\) where \(w^{\prime}(z_{c})=0\). As this is also a point where the conformal map between \(w\) and \(z\) breaks down, we expect to see a singularity in the curvature in the centreline there. The preimage of the straight line \(w=A+\mathrm{i}t_{c}\) that passes through the critical point is the centreline in the \(z\)-plane; assuming \(w^{\prime\prime}(z_{c})\neq 0\) the centreline must therefore have a corner with an angle of \(\pi/2\) at \(z_{c}\) (and not a cusp, as suggested by Farmer & Howison (2006)). If the initial condition is such that \(w^{\prime\prime}(z_{c})\) also vanishes but the third derivative is nonzero, the corner angle is \(\pi/3\), and so on.
As an example, consider an initial condition with the centreline on \(y=0\) and initial thickness given by \(h(x,0)=\delta[1-a\cos x]\), with \(\delta>0\) and \(0<a<1\). This initial condition corresponds to an initially horizontal filament that is thinner near \(x=0\). We thus have \(A=\delta[x-a\sin x]\) at \(t=0\) (determined by choosing our reference point \(\eta_{0}\) to lie on \(x=0\)), and, analytically continuing into the complex plane:
\[A+\mathrm{i}t=\delta(z-a\sin z). \tag{3.10}\]
Taking the imaginary part, we find that the centreline location is given implicitly by
\[t=\delta(y-a\cos x\sinh y). \tag{3.11}\]
The critical point occurs for \(z_{c}=\mathrm{i}\cosh^{-1}(1/a)\) and time \(t_{c}=\delta(\cosh^{-1}(1/a)-\sqrt{1-a^{2}})\). The centreline profiles of this solution, along with the upper and lower interfaces (found by adding and subtracting half the thickness \(h\) (3.9), respectively, in the normal direction) are plotted in Fig. 4a, showing the formation of the \(\pi/2\) angle as \(t\to t_{c}\).
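The level-curve construction is simple to evaluate numerically. The sketch below (an illustration, not the authors' code) extracts one symmetric half of the centreline from (3.11) by solving for \(x\) as a function of \(y\), and evaluates the thickness from (3.9), using the same parameter values as Fig. 4; the grid resolution and the chosen times are arbitrary.

```python
import numpy as np

a, delta = 0.1, 0.2                                      # parameters used in Fig. 4
t_c = delta * (np.arccosh(1 / a) - np.sqrt(1 - a**2))    # time at which h -> 0

def centreline(t, n=4000):
    """Half of the centreline (x >= 0) as the level curve (3.11), parametrised by y."""
    y = np.linspace(1e-6, np.arccosh(1 / a), n)
    c = (y - t / delta) / (a * np.sinh(y))               # equals cos(x) on the centreline
    mask = np.abs(c) <= 1
    return np.arccos(c[mask]), y[mask]

for t in (0.2 * t_c, 0.6 * t_c, 0.999 * t_c):
    x, y = centreline(t)
    h = np.abs(delta * (1 - a * np.cos(x + 1j * y)))     # thickness |w'(z)|, eq. (3.9)
    print(f"t/t_c = {t / t_c:.3f}: minimum thickness on the centreline = {h.min():.4f}")
```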
As an example of an initial condition that leads to a non-\(\pi/2\) angle, consider
\[h(x,0)=\delta[1-a\cos x]^{2}\]
which on integrating results in
\[A+\mathrm{i}t=\delta\left[\left(1+\frac{a^{2}}{2}\right)z+\frac{a^{2}}{4}\sin 2 z-2a\sin z\right]. \tag{3.12}\]
Again, the centreline is given by taking the contours of the imaginary part of this function. In this case, since \(w^{\prime}(z_{c})=w^{\prime\prime}(z_{c})=0\), the corner angle at the critical point where \(h\to 0\) is \(\pi/3\). Profiles of this solution are shown in Fig. 4b.
Figure 4: Exact solutions to the thin filament equation in the absence of surface tension. (a) the solution (3.11) that evolves from an initial condition that results in corner formation with the generic angle of \(\pi/2\), and (b) the solution (3.12) that evolves from an initial condition that results in corner formation with a non-generic angle of \(\pi/3\). The parameter values are \(a=0.1\), \(\delta=0.2\). In both examples, the centrelines are plotted in solid blue while the black dashed lines are the upper and lower interfaces (found by adding or subtracting half the thickness \(h\) in the normal direction).
### Singularity formation in the absence of transverse flow
For very small thickness in the filament model, it is tempting to neglect the effects of transverse flow in (3.4) while maintaining the effect of the filament curvature on the normal velocity in both (3.4) and (3.2). Indeed, such a simplification is likely to be important to study the behaviour in regions where the thickness goes to zero (which we will see in section 4 do generically form). However, here we will show that this simplification does not sufficiently regularise the problem and can result in curvature singularities where the thickness becomes infinite in finite time, so should not be used to model the filament across the whole domain.
If transverse flow effects are neglected, the filament velocity and thickness evolution are given by
\[v_{n}=\frac{1}{h}(P+2\sigma\kappa),\qquad\frac{Dh}{Dt}=\kappa(P+2\sigma\kappa). \tag{3.13}\]
In this model, change in thickness is purely due to the dilation or compression of the filament. For further analysis, it is useful to write this system in Cartesian coordinates. Define the centreline to be the curve \(y=f(x,t)\) and let \(h^{(y)}(x,t)=h\sqrt{1+f_{x}^{2}}\) be the filament thickness in the \(y\) direction (rather than the normal direction). Then it can be readily shown that \(h_{t}^{(y)}(x,t)=[f_{x}hv_{n}]_{x}\), and so (3.13) is equivalent to
\[\frac{\partial f}{\partial t} =\frac{1+f_{x}^{2}}{h^{(y)}}\left(P+\frac{2\sigma f_{xx}}{(1+f_{ x}^{2})^{3/2}}\right) \tag{3.14a}\] \[\frac{\partial h^{(y)}}{\partial t} =\frac{\partial}{\partial x}\left[f_{x}\left(P+\frac{2\sigma f_{ xx}}{(1+f_{x}^{2})^{3/2}}\right)\right]. \tag{3.14b}\]
Here (3.14b) has been written in conservative form to highlight the fact that mass is indeed conserved in this system.
The stability analysis in the absence of transverse flow is the same as for the filament model (3.6) but with the loss of the highest order spatial derivative. The eigenvalues of this system are thus
\[\lambda=-\frac{\sigma k^{2}}{h_{0}}\pm\sqrt{\left(\frac{\sigma k^{2}}{h_{0}} \right)^{2}+\frac{P^{2}k^{2}}{h_{0}^{2}}}. \tag{3.15}\]
For small \(k\), (3.15) coincides with the leading-order behaviour (2.4), while for \(k=O(1)\), (3.15) is no longer a reasonable approximation for the full model. Indeed, unlike in (3.6), where both eigenvalues becomes negative for sufficiently large wavenumber \(k\), in (3.15), one eigenvalue tends to a positive, constant value as \(k\to\infty\):
\[\lambda\sim\frac{P^{2}}{2\sigma h_{0}},\qquad k\to\infty\]
(see Fig. 2). Thus the system is (significantly) unstable to perturbations at arbitrarily small spatial scales. While this is not technically ill-posed (the eigenvalues do not become arbitrarily large), this large \(k\) behaviour strongly suggests that singularities in curvature will generically form (see for example Dallaston & McCue 2014).
To establish the existence of curvature singularities we perform a self-similar analysis. Assume a singularity occurs at a time \(t=t_{c}\) at \(x=x_{c}\), and let:
\[f\sim f_{0}(t)+(t_{c}-t)^{\alpha}F(\xi),\qquad h\sim(t_{c}-t)^{\beta}H(\xi),\qquad\xi=\frac{x-x_{c}}{(t_{c}-t)^{\gamma}}, \tag{3.16}\]
where the similarity exponents \(\alpha,\beta,\gamma\) are to be determined. Assuming that \(\alpha>1\), the dominant term in the velocity of \(f\) is \(\dot{f}_{0}=\dot{f}_{0}(t_{c})\). Thus, on balancing terms we find \(\beta=-1\)
and \(\alpha=2\gamma-1\), with \(\gamma\) being undetermined (a second-kind self-similarity). Given \(\alpha>\gamma\) (which we will check for consistency after the fact), the dominant terms in (3.13) become
\[\dot{f_{0}}=2\sigma\frac{F^{\prime\prime}}{H},\qquad H+\gamma\xi H^{\prime}=2 \sigma[F^{\prime}F^{\prime\prime}]^{\prime}.\]
These equations can be further scaled to remove \(\dot{f_{0}}\) and \(\sigma\). Let \(F=(\dot{f_{0}})^{-1}\hat{F}\) and \(H=2\sigma(\dot{f_{0}})^{-2}\hat{H}\), then
\[\hat{H}=\hat{F}^{\prime\prime},\qquad\hat{H}+\gamma\xi\hat{H}^{\prime}=[\hat{F}^{\prime}\hat{F}^{\prime\prime}]^{\prime}.\]
Let \(u=\hat{F}^{\prime}\) and eliminate \(\hat{F}\), then \(u\) satisfies the second-order equation
\[u^{\prime\prime}=\frac{u^{\prime}(u^{\prime}-1)}{\gamma\xi-u}.\]
For symmetry we require \(u\) to be odd in \(\xi\); thus, when \(\xi=0\), \(u=0\), which is therefore a singular point of the ODE. By expanding near this point we find that the similarity exponent \(\gamma\) is uniquely specified by the first odd power in the expansion greater than unity:
\[u\sim\xi+C\xi^{n},\qquad\gamma=\frac{n}{n-1},\qquad n=3,5,\ldots\]
Since \(\gamma>1\), this result is consistent with the assumptions made on the similarity exponents above.
The equation for \(u\) can be solved implicitly by letting \(u\) be the independent variable, which allows us to construct parametric solutions for \(\hat{F}\) and \(\hat{H}\):
\[\xi=u+Cu^{n},\qquad\hat{H}=\frac{1}{1+nCu^{n-1}},\qquad\hat{F}=\frac{u^{2}}{2} +\frac{n}{n+1}Cu^{n+1}. \tag{3.17}\]
While \(n\) can be any odd number \(\geqslant 3\), dependent on the initial condition, the most generic case will be \(n=3\), in which case \(\gamma=3/2\) and \(\alpha=2\). The constant \(C\) is arbitrary, due to scale invariance in the equations (3.13) once curvature dominates over the driving pressure. In a given case, the scale \(C\), velocity \(\dot{f_{0}}\), and exponent \(n\) will all depend on the initial condition.
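As a consistency check (not taken from the paper), the parametric similarity solution (3.17) can be evaluated on a grid and substituted back into the ODE for \(u\); the values of \(n\) and \(C\) below are illustrative, and finite differences on the nonuniform \(\xi\) grid keep the residual small away from the singular point \(\xi=0\).

```python
import numpy as np

n, C = 3, 0.3                       # generic exponent; C is an arbitrary scale
gamma = n / (n - 1)

u = np.linspace(-2.0, 2.0, 8001)
xi = u + C * u**n                                    # similarity coordinate
H_hat = 1 / (1 + n * C * u**(n - 1))                 # scaled thickness profile
F_hat = u**2 / 2 + n * C * u**(n + 1) / (n + 1)      # scaled centreline profile

# Substitute u(xi) back into u'' = u'(u'-1)/(gamma*xi - u), away from xi = 0.
up = np.gradient(u, xi)
upp = np.gradient(up, xi)
res = upp - up * (up - 1) / (gamma * xi - u)
interior = np.abs(xi) > 0.2
print("max |ODE residual| away from xi = 0:", np.max(np.abs(res[interior])))
```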
We provide numerical evidence for the curvature singularity formation by numerically solving the system (3.14a), (3.14b). This computation is performed in MATLAB using finite difference discretisation along with MATLAB's ode15s algorithm for time-stepping. Parameters \(P=1\), \(\sigma=0.5\) and an initial condition of \(f(x,0)=-\cos(x)\) and \(h(x,0)=1\) is chosen in order to start with high curvature near \(x=0\), where the singularity will occur.
In Fig. 5 we plot the results of this numerical computation. The singularity time \(t_{c}\) is estimated by fitting a straight line through \(1/h_{\max}(t)\), where \(h_{\max}(t)=\max_{x}h(x,t)\) occurs at \(x=0\); according to the analysis above, this reciprocal should go to zero linearly. The centreline velocity \(\dot{f_{0}}(t)\) at the maximum thickness is observed to tend to a nonzero constant, from which \(\dot{f_{0}}\) is estimated. Scaling the profiles near the singularity time into similarity variables \(\xi,\hat{F},\hat{H}\), we observe collapse. The exact similarity profiles (3.17), with a suitable fitted value of the arbitrary constant \(C\), match well with the numerical profiles.
The curvature singularity exhibited by this model is weaker than a corner singularity, in that as \(\alpha>\gamma\), the first derivative goes to zero in the neighbourhood of the singularity, even while the curvature becomes unbounded. It is also interesting to note that the singularity is not dependent on (and does not require) the driving pressure \(P\); its cause is the presence of surface tension pulling regions of high positive curvature inward, concentrating the thickness at a single point. It is thus only present because the model (3.13) has a surface tension-dependent velocity, but no regularising term that penalises large thickness, as appears in (3.4).
Figure 5: Curvature singularity formation in the Hele–Shaw filament model in the absence of transverse flow terms (3.13) for initial condition \(f(x,0)=-\cos(x)\), and \(h(x,0)=1\). (a) centreline profiles \(f(x,t)\) approaching a curvature singularity and (b) filament thickness becoming unbounded at singularity time of \(t_{c}\approx 0.77\). These profiles (solid lines) collapse onto similarity profiles (c) \(\hat{F}(\xi)\) and (d) \(\hat{H}(\xi)\), which asymptotically match the exact similarity solution (3.17) (dotted lines) as \(t\to t_{c}\). The scaling factor \(C\approx 0.3\) is fit to the profiles. (e) The singularity time (large dot) is found by fitting a straight line approximation to the reciprocal of the maximum thickness near the singularity time, while (f) the speed of the centreline \(\dot{f}_{0}=f_{t}(0,t_{c})\) (large dot) at the singularity is similarly estimated. The five smaller points in (e,f) are the times at which the scaled profiles are plotted in (c,d).
### Quasi-travelling wave solutions
The system (3.14), while insufficiently regularised for simulating the full dynamics of a thin filament, is useful in that it exhibits a quasi-travelling wave solution of the form \(f=-B\log(t_{0}-t)+F(x)\), \(h^{(y)}(x,t)=(t_{0}-t)H(x)\), where \(t_{0}\) is a finite time. A solution of this form is similar to, but not exactly, a travelling wave, as the centreline has a fixed shape but moves to infinity (with speed unbounded) as \(t\to t_{0}\), while the thickness linearly decreases to zero. The parameter \(B\) is the analogue of a travelling wave speed. Solutions to (3.14) of this form will thus also be asymptotically valid solutions to the system with transverse flow (3.2), (3.4), valid in regions where thickness goes to zero.
On substitution of the above ansatz into (3.14), and scaling the variables according to
\[H=\frac{P}{B}\hat{H},\qquad F=B\hat{F},\qquad x=B\hat{x}\]
we obtain
\[-\left[\hat{F}^{\prime}\left(1+\epsilon\frac{\hat{F}^{\prime\prime}}{(1+\hat{F }^{\prime 2})^{3/2}}\right)\right]^{\prime}=(1+\hat{F}^{\prime 2})\left(1+ \epsilon\frac{\hat{F}^{\prime\prime}}{(1+\hat{F}^{\prime 2})^{3/2}}\right), \qquad\epsilon=\frac{2\sigma}{BP}. \tag{3.18}\]
This equation may be solved numerically directly, but to further simplify we cast it into the following curvature-angle formulation. Let \(\theta=-\tan^{-1}\hat{F}^{\prime}\) be the angle between the tangent to the centreline and the \(x\)-axis (counting positive for negative \(F^{\prime}\)), and curvature \(K=\hat{F}^{\prime\prime}/(1+\hat{F}^{\prime 2})^{3/2}\) (see Fig. 6a). We then obtain the first-order equation
\[K^{\prime}=-\frac{1+\epsilon K}{\epsilon K\sin\theta}\left(1+\frac{K}{\cos \theta}\right). \tag{3.19}\]
For a semi-infinite curve, the appropriate interval is \(-\pi/2<\theta<\pi/2\), where the nose at \(\theta=0\) is a singular point, at which we require \(K(0)=-1\).
We solve equation (3.19) numerically for different values of \(\epsilon\), and reconstruct the \(\hat{x},\hat{F}\) coordinates, using
\[\frac{\mathrm{d}\hat{x}}{\mathrm{d}\theta}=-\frac{\cos\theta}{K},\qquad\frac{ \mathrm{d}\hat{F}}{\mathrm{d}\theta}=\frac{\sin\theta}{K}.\]
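One possible implementation of this integration (an illustrative sketch, not the authors' code) steps slightly away from the singular point at \(\theta=0\) with \(K(0)=-1\), integrates over the half-interval \(0<\theta<\pi/2\) (the other half follows by the symmetry of the finger about its nose), and carries \(\hat{x}\) and \(\hat{F}\) along as additional state variables; the value of \(\epsilon\) and the offsets from the endpoints are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(theta, state, eps):
    K, xhat, Fhat = state
    dK = -(1 + eps * K) / (eps * K * np.sin(theta)) * (1 + K / np.cos(theta))
    return [dK, -np.cos(theta) / K, np.sin(theta) / K]

eps = 0.5                                         # arbitrary choice of 2*sigma/(B*P)
theta0, theta1 = 1e-4, np.pi / 2 - 1e-2           # step off the singular endpoints
sol = solve_ivp(rhs, [theta0, theta1], [-1.0, 0.0, 0.0],
                args=(eps,), rtol=1e-8, atol=1e-10, dense_output=True)

# The solution should lie between the eps = 0 ('Grim Reaper') and eps -> infinity limits.
theta = np.pi / 4
print("K(pi/4) =", sol.sol(theta)[0], " vs Grim Reaper value K_0 =", -np.cos(theta))
```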
The \(\epsilon=0\) and \(\epsilon\to\infty\) limits are amenable to exact solutions. If \(\epsilon=0\) then \(K=K_{0}=-\cos(\theta)\), which is the 'Grim Reaper' solution found by Farmer & Howison (2006) (also relevant for the zero-surface-tension Saffman-Taylor finger (Saffman & Taylor 1958) and for curve shortening flow (Angenent 1991)), corresponding to \(\hat{F}=\ln(\cos\hat{x})\). If \(\epsilon\to\infty\) the equation for \(K\) becomes linear, and has exact solution
\[K=K_{\infty}=-\cot\theta\log\left(\frac{\cos(\theta/2)+\sin(\theta/2)}{\cos( \theta/2)-\sin(\theta/2)}\right).\]
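These limiting profiles are straightforward to reconstruct numerically from a curvature–angle law \(K(\theta)\) using the quadrature relations above. The following Python sketch (with our own illustrative discretisation, stopping slightly short of \(\theta=\pi/2\) where the integrands blow up) recovers, for example, the 'Grim Reaper' profile \(\hat{F}=\ln(\cos\hat{x})\):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def reconstruct_profile(K_of_theta, n=4000, theta_max=1.45):
    """Recover (x_hat, F_hat) from a curvature-angle law K(theta) by quadrature,
    integrating dx/dtheta = -cos(theta)/K and dF/dtheta = sin(theta)/K
    from the nose (theta ~ 0, x = F = 0) towards theta -> pi/2."""
    theta = np.linspace(1e-6, theta_max, n)   # stop short of pi/2
    K = K_of_theta(theta)
    x_hat = cumulative_trapezoid(-np.cos(theta) / K, theta, initial=0.0)
    F_hat = cumulative_trapezoid(np.sin(theta) / K, theta, initial=0.0)
    return x_hat, F_hat

# Exact limiting curvature laws quoted in the text
K_grim = lambda th: -np.cos(th)                  # epsilon = 0 ('Grim Reaper')
K_inf = lambda th: -np.log((np.cos(th / 2) + np.sin(th / 2))
                           / (np.cos(th / 2) - np.sin(th / 2))) / np.tan(th)  # epsilon -> infinity

x0, F0 = reconstruct_profile(K_grim)
assert np.allclose(F0, np.log(np.cos(x0)), atol=1e-3)   # recovers F = ln(cos x)
x1, F1 = reconstruct_profile(K_inf)
```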
Plots of these quasi-travelling wave solutions are shown in Fig. 6b. The effect of \(\epsilon\), and thus of surface tension, is weak, as all solutions are bounded between the \(\epsilon=0\) and \(\epsilon\to\infty\) limits. The scaled curvature at the nose is required to be \(-1\) in all cases, and thus returning to the unscaled system, we expect the curvature at the nose to be \(-1/B\), that is, inversely proportional to the wave speed parameter. We will see in section 4, however, that these quasi-travelling wave solutions do not act as attractors for a generic initial value problem.
## 4 Numerical simulation and comparison
In this section we describe numerical simulations of the thin filament model (3.2), (3.4), and compare against simulations of the full two-interface problem (2.1).
Our solution method for the thin filament model (3.2), (3.4) is a front tracking method, whereby the centreline is represented by \(N\) points \(\mathbf{x}_{j}=(x_{j},y_{j}),j=1,\ldots,N\). The thickness also has a value \(h_{j}\) at each point. At a given time, the normal vector, curvature, and arclength derivatives of \(h\) and the flux \(q\) (3.3) are calculated using a central finite difference scheme. The points are then moved in time in the normal direction with velocity given by (3.2) (so that (3.4) is the correct evolution equation for \(h\)) using MATLAB's ode15s. Since moving points with normal velocity results in highly unevenly spaced points on the centreline, we
remesh onto an evenly spaced grid when the ratio between minimum and maximum node spacing drops below a threshold value. Typically we use \(10\,000\)-\(40\,000\) points and remesh when the minimum-to-maximum node spacing ratio is less than \(0.8\).
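For concreteness, the discrete geometric quantities needed at each step of the front-tracking scheme can be computed as in the following Python sketch (our implementation is in MATLAB; the function names and the index-based parametrisation here are illustrative only):

```python
import numpy as np

def centreline_geometry(x, y):
    """Central-difference tangents, unit normals and signed curvature of a curve
    given by points (x_j, y_j); sign conventions depend on the orientation of the curve."""
    xs, ys = np.gradient(x), np.gradient(y)        # first derivatives w.r.t. node index
    xss, yss = np.gradient(xs), np.gradient(ys)    # second derivatives
    speed = np.hypot(xs, ys)                       # metric term |dX/dj|
    tangent = np.stack((xs, ys), axis=1) / speed[:, None]
    normal = np.stack((-tangent[:, 1], tangent[:, 0]), axis=1)   # 90-degree rotation of the tangent
    curvature = (xs * yss - ys * xss) / speed**3
    return normal, curvature

def remesh(x, y, h, n_new):
    """Redistribute nodes uniformly in arclength; the thickness h is interpolated onto the new nodes."""
    s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))
    s_new = np.linspace(0.0, s[-1], n_new)
    return np.interp(s_new, s, x), np.interp(s_new, s, y), np.interp(s_new, s, h)
```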
### Validation against solution of the two-interface problem
To validate solutions of the thin filament model (3.2), (3.4) it is valuable to compare it to solutions of the full two-interface system (2.1), which is a more challenging numerical problem. We solve (2.1) using the numerical framework proposed by Morrow _et al._ (2021, 2023), which we briefly summarise here. The framework is based on the level set method, in which we represent each interface, \(f_{L}\) and \(f_{U}\), as the zero level set of the associated level set functions \(\psi_{L}\) and \(\psi_{U}\). Each level set function is chosen such that the viscous fluid will occupy the region where both \(\psi_{L}\) and \(\psi_{U}>0\); otherwise the region is filled with inviscid fluid. Both level set functions are updated according to
\[\frac{\partial\psi_{L}}{\partial t}+F_{L}|\nabla\psi_{L}|=0\quad\text{and} \quad\frac{\partial\psi_{U}}{\partial t}+F_{U}|\nabla\psi_{U}|=0, \tag{4.1}\]
where
\[F_{L}=\nabla\phi\cdot\mathbf{n}_{L}\quad\text{and}\quad F_{U}=\nabla\phi\cdot\mathbf{n }_{U}, \tag{4.2}\]
and \(\mathbf{n}_{L}=\nabla\psi_{L}/|\nabla\psi_{L}|\) and \(\mathbf{n}_{U}=\nabla\psi_{U}/|\nabla\psi_{U}|\) are the unit (outward) normals. We discretise the spatial derivatives in (4.1) using a second-order essentially non-oscillatory scheme and integrate in time using second-order total-variation-diminishing Runge-Kutta with time step \(\Delta t=10^{-5}\). We perform simulations on the computational domain \(-\pi\leqslant x\leqslant\pi\) and \(-0.5\leqslant y\leqslant 4\), which is discretised into equally spaced nodes with mesh size \(\Delta x=2\pi/750\). Simulations are concluded when the minimum distance between the two interfaces is less than \(3\Delta x\). Further, we periodically perform reinitialisation to maintain both \(\psi_{L}\) and \(\psi_{U}\) as signed distance functions.
Figure 6: (a) Depiction of the curvature-angle coordinate system used to solve the quasi-travelling wave solutions (3.19). (b) Quasi-travelling wave solutions to (3.19), for \(\epsilon=0\) and \(\epsilon\to\infty\) (dashed curves), and \(\epsilon\in\{0.1,0.5,1,10\}\) (solid curves). The arrows indicate direction of increasing \(\epsilon\).
We solve (2.1a) for the velocity potential \(\phi\) via a finite difference stencil. Following Chen _et al._ (1997), we modify the stencil at nodes adjacent to either interface, corresponding to where \(\psi_{L}\) or \(\psi_{U}\) changes sign, by imposing a ghost node on the interface to incorporate the appropriate dynamic boundary conditions (2.1c) and (2.1d). Here, \(\kappa_{L}=\nabla\cdot\mathbf{n}_{L}\) and \(\kappa_{U}=\nabla\cdot\mathbf{n}_{U}\). By solving for \(\phi\), we can compute \(F_{L}\) and \(F_{U}\) from (4.2). These choices of \(F_{L}\) and \(F_{U}\) satisfy the kinematic boundary conditions (2.1b) and give continuous expressions for \(F_{L}\) and \(F_{U}\) in the region occupied by the viscous fluid \(\mathbf{x}\in\Omega\). However, to solve (4.1), we require expressions for \(F_{L}\) and \(F_{U}\) over the entire computational domain. To extend our expressions for \(F_{L}\) and \(F_{U}\) into the gas regions, we follow Moroney _et al._ (2017) by solving the biharmonic equations
\[\nabla^{4}F_{L}=0\quad\text{and}\quad\nabla^{4}F_{U}=0\quad\text{in}\quad\mathbf{x}\in\mathbb{R}^{2}\backslash\Omega. \tag{4.3}\]
By doing so, we obtain smooth continuous normal velocities over the entire computational domain, allowing us to solve (4.1) for \(\psi_{L}\) and \(\psi_{U}\).
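To give a flavour of how (4.1) is discretised, the sketch below advances a level set function by one explicit step with first-order Godunov upwinding on a periodic grid; our actual computations use the second-order ENO and TVD Runge-Kutta scheme described above, so this simplified version is purely illustrative.

```python
import numpy as np

def advect_level_set(psi, F, dx, dt):
    """One explicit, first-order upwind (Godunov) step of psi_t + F |grad psi| = 0.
    np.roll implies periodic boundaries; psi and F are 2D arrays on the same grid."""
    Dmx = (psi - np.roll(psi, 1, axis=0)) / dx    # backward difference in x
    Dpx = (np.roll(psi, -1, axis=0) - psi) / dx   # forward difference in x
    Dmy = (psi - np.roll(psi, 1, axis=1)) / dx
    Dpy = (np.roll(psi, -1, axis=1) - psi) / dx
    # Godunov approximations of |grad psi| for outward (F > 0) and inward (F < 0) motion
    grad_plus = np.sqrt(np.maximum(Dmx, 0)**2 + np.minimum(Dpx, 0)**2 +
                        np.maximum(Dmy, 0)**2 + np.minimum(Dpy, 0)**2)
    grad_minus = np.sqrt(np.minimum(Dmx, 0)**2 + np.maximum(Dpx, 0)**2 +
                         np.minimum(Dmy, 0)**2 + np.maximum(Dpy, 0)**2)
    return psi - dt * (np.maximum(F, 0) * grad_plus + np.minimum(F, 0) * grad_minus)
```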
In Fig. 7 we compare the results of the filament model and full two-interface model, with initial conditions:
\[f_{U}(x,0)=1/24-0.00375\,\text{sech}^{2}\,x,\qquad f_{L}(x,0)=-1/24+0.00375\,\text{sech}^{2}\,x. \tag{4.4}\]
These initial conditions correspond to an initial centreline location of \(y=0\) and an initial thickness \(h(x,0)=1/12-0.0075\,\text{sech}^{2}\,x\), which is an almost flat filament with a small thinner region near the centre of the channel, \(x=0\). As the figure demonstrates, the agreement between the two methods is initially very good, and only breaks down quantitatively when the filament thins near the central region, leading to a large increase in velocity. This is to be expected, as the level set method becomes inaccurate when the filament is too thin to capture accurately with a level set function on a discretised mesh.
Figure 7: A comparison between the interfaces predicted from numerical solution of the filament model, and those from numerical solution of the full problem using the level set method described in section 4.1. The initial condition is given by (4.4) with pressure and surface tension parameters \(P=1\) and \(\sigma=0.1\), respectively. For clarity, only the interfaces of the filament model, found from the actual variables of centreline and thickness, are plotted.
### Numerical results for late times
In this section we use the front-tracking scheme to solve the thin filament model for later times, in the regime where the thickness becomes too small for the level set method to accurately resolve. Here we use the initial condition
\[y=0,\qquad h=0.2[1-0.1\cos(x)],\]
which is the same as the initial condition of the exact solution (3.11) depicted in Fig. 4b. In order to observe the effect of different surface tensions \(\sigma\), we run simulations for \(\sigma=0.1\) (Fig. 8), as well as \(\sigma=0.05\) and \(\sigma=0.02\) (Fig. 9).
In the \(\sigma=0.1\) simulation (Fig. 8), the filament bulges outward in the centre near \(x=0\), where the thin filament is initially thinner (and so the filament moves faster). This bulge becomes a 'bubble' that rapidly expands in radius while the thickness rapidly decreases. The majority of the fluid is pushed into the outer regions of the channel. At a finite time, the bubble intersects the channel walls at \(x=\pm\pi\). Unlike a fluid-filled Hele-Shaw cell, there is nothing to prevent the filament reaching the channel walls; this does not correspond to singular behaviour in the mathematical model, but physically represents an area of gas of lower pressure being trapped by the filament.
To further understand this behaviour, we note that (3.2), (3.4) has as a solution a perfectly circular bubble of radius \(R(t)\), that evolves according to
\[R(t)=\frac{2\sigma}{P}+\left(R(0)-\frac{2\sigma}{P}\right)\mathrm{e}^{(P/c)t},\qquad h=\frac{c}{R}. \tag{4.5}\]
Here \(c\) is a constant that depends on the initial thickness. In such a solution the radius (if initially greater than \(2\sigma/P\)) grows exponentially, but does not exhibit a finite-time singularity. It may be that this behaviour governs the late stages of evolution of the filament depicted in this section. In Fig. 8e we plot the curvature \(\kappa_{\mathrm{nose}}\) at the nose, or front, of the bubble, while in Fig. 8f we plot \(\kappa/\kappa_{\mathrm{nose}}\). The curvature initially grows in magnitude (in our convention the curvature is negative in the bubble) when the bulge initially grows, but then rapidly decreases in magnitude at the time when the bubble expands outwards. When this happens, the curvature tends to a uniform state in the bubble region (\(\kappa/\kappa_{\mathrm{nose}}\to 1\)), implying the convergence to a circular shape. While we have only plotted the interfaces up to the time at which they intersect the channel wall, mathematically the simulation continues for longer, and the bubble becomes more circular in shape. Ultimately, the thickness becomes so small and the velocity so large that the numerical method no longer converges.
Much of this behaviour is also seen for \(\sigma=0.05\) and \(0.02\), as depicted in Fig. 9. The notable effect of smaller surface tension is that the bulging region that forms into a bubble is initially smaller in width. In addition, in the simulation for \(\sigma=0.02\), the circular bubble can be seen to begin to destabilise before it reaches the channel walls. While this instability is dependent on the numerical resolution, it does suggest that the stability of the circular solution (4.5) is worth further investigation.
## 5 Discussion
In this paper we have developed a simple but highly accurate geometric flow model (3.2), (3.4) that describes two-interface Hele-Shaw flow very well in regions where the thickness of the fluid region becomes thin. Due to the instability of one of the interfaces, this limit is one that is generally reached, even if initially the thickness is not very small.
Here we note some clear differences between the instability of a thin filament, and the classical Saffman-Taylor instability in a semi-infinite fluid region that results in the Saffman-Taylor finger with (in the small surface tension limit) width half that of the channel. One
Figure 8: Evolution of a filament with initial position \(y=0\) and thickness \(h=0.2(1-0.1\cos(x))\), with \(P=1\) and surface tension \(\sigma=0.1\). (a) the centreline profiles over time (solid) and the thickness (dashed lines), and (b) the thickness against \(x\), show the initially thinner part of the filament bulge outward into a ‘bubble’, while the bulk of the fluid is driven out to the edges of the filament. Ultimately the profile intersects with the channel boundary at \(x=\pm\pi\). The (c) maximum velocity \(\dot{y}_{\rm max}(t)\) and (d) minimum thickness \(h_{\rm min}(t)\) appear to become unbounded and go to zero, respectively. (e) The curvature at the nose (\(x=0\)) over time, initially increases in magnitude, then rapidly heads toward zero as the bubble expands. (f) the curvature scaled by the curvature at the nose, showing the curvature tending to a constant over the bubble. The dots marked in (c, d, e) correspond to the times at which profiles are plotted in (a, b, f), and the arrows in (a, b, f) indicate the direction of increasing time.
difference is that the thin filament model does not feel the effects of the channel wall away from the thick neck regions, and thus the rapidly expanding bubble may intersect the channel walls at finite time with no breakdown in the mathematical model. Physically speaking, this phenomenon would correspond to trapping a part of the lower-pressure gas inside the fluid region. In addition, as the walls have no strong effect, the fact that the filament motion is directed mainly in the positive \(y\)-direction is a somewhat artificial consequence of the initial condition.
Figure 9: Evolution of a filament with initial position \(y=0\) and thickness \(h=0.2(1-0.1\cos(x))\), for (a,b) \(\sigma=0.05\), and (c,d) \(\sigma=0.02\). The formation of a circular bubble occurs in a similar fashion as happens for \(\sigma=0.1\), although for \(\sigma=0.02\) (c) the bubble can be seen to destabilise before the bubble reaches the channel walls.
For this reason it may be more natural to consider the fluid in an unbounded Hele-Shaw cell, with the length scale set by the initial thickness.
The specifics of the late stages of evolution of the filament (depicted in section 4), wherein the thickness becomes very small and the velocity correspondingly large, are not resolved. In order to understand the late-time dynamics of this system, an analysis of the stability of the quasi-travelling wave solutions, and of the stability of the expanding circular bubble (4.5) to non-radially-symmetric perturbations, would be very valuable. We observe that in our model the filament does not appear to exhibit finite-time 'bursting' behaviour, that is, the thickness does not go to zero at an isolated finite point in space and time. In the case of self-similar breakup in the manner of the unforced lubrication equation described in Almgren _et al._ (1996), the thickness goes to zero at a point where the curvature becomes infinite, while in our solutions depicted in section 4, the thickness becomes small in the expanding circular region where the curvature is also decreasing, and the curvature is largest in magnitude in the neck regions, where the thickness does not decrease. Bursting in physical systems is likely to require fully three-dimensional effects to explain (that is, effects that become important when the filament thickness is of the same order as the separation between the plates in the Hele-Shaw apparatus). The dynamics of the filament model studied in this work indicate that such a regime will generically be approached very rapidly in the form of the expanding bubble depicted in section 4.
## Acknowledgements
The authors would like to thank Colin Please for many fruitful discussions.
## Declaration of Interests
The authors report no conflict of interest.
|
2305.07551 | On-line Dose Calculation Using Deep Learning for Beams Selection in
Non-Coplanar Radiotherapy | Non-coplanar Intensity-Modulated Radiation Therapy (IMRT) goes a step further
by orienting the gantry carrying the radiation beam and the patient couch in a
non-coplanar manner to accurately target the cancer region and better avoid
organs-at-risk. The use of a non-coplanar treatment trajectory significantly
enhances the degree of freedom and flexibility but increases drastically the
complexity of the optimization. In inverse planning optimization the dose
contribution for all potential beam directions is usually pre-calculates and
pre-loads into the Treatment Planning System (TPS). The size the dose matrix
becomes more critical when moving from coplanar IMRT to non-coplanar IMRT since
the number of beams increases drastically. A solution would be to calculate
"on-the-fly" the dose contribution to each new candidate beam during
optimization. This is only possible if a dose calculation engine is fast enough
to be used online during optimization iterations, which is not the case in
standard method. Therefore, in this work we propose an IMRT optimization scheme
using deep learning based dose engine to compute the dose matrix on-line. The
proposed deep learning approach will be combined into a
simulated-annealing-based optimization method for non-coplanar IMRT. Since the
dose engine will compute the dose contribution on-line during the optimization,
the final main optimization method requires to keep in memory a very
lightweight dose matrix. The proposed method was compared with clinical data
showing a good agreement considering dosimetry of the treatment plans. The main
advantage of the proposed method was the reduction of the memory storage from
9GB to 10MB during the optimization process. | Fang Guo, Franklin Okoli, Ulrike Schick, Dimitris Visvikis, Antoine Valeri, Julien Bert | 2023-05-12T15:25:43Z | http://arxiv.org/abs/2305.07551v2 | # On-line Dose Calculation Using Deep Learning for Beams Selection in Non-Coplanar Radiotherapy
###### Abstract
Non-coplanar Intensity-Modulated Radiation Therapy (IMRT) goes a step further by orienting the gantry carrying the radiation beam and the patient couch in a non-coplanar manner to accurately target the cancer region and better avoid organs-at-risk. The use of a non-coplanar treatment trajectory significantly enhances the degrees of freedom and flexibility, but drastically increases the complexity of the optimization. In inverse planning optimization, the dose contribution of all potential beam directions is usually pre-calculated and pre-loaded into the Treatment Planning System (TPS). The size of this dose matrix becomes even more critical when moving from coplanar to non-coplanar IMRT, since the number of beams increases drastically. A solution would be to calculate the dose contribution of each new candidate beam "on the fly" during optimization. This is only possible if the dose calculation engine is fast enough to be used online within the optimization iterations, which is not the case for standard methods. Therefore, in this work we propose an IMRT optimization scheme that uses a deep learning-based dose engine to compute the dose matrix on-line. The proposed deep learning approach is combined with a simulated-annealing-based optimization method for non-coplanar IMRT. Since the dose engine computes the dose contribution on-line during the optimization, the optimization method only needs to keep a very lightweight dose matrix in memory. The proposed method was compared with clinical data, showing good agreement in terms of treatment plan dosimetry. The main advantage of the proposed method is the reduction of the memory storage required during the optimization process from 9 GB to 10 MB.
* March 2023
## 1 Introduction
Non-coplanar Intensity-Modulated Radiation Therapy (IMRT) goes a step further by orienting the gantry carrying the radiation beam and the patient couch in a non-coplanar manner to accurately target the cancer region and better avoid organs-at-risk (Bortfeld 2006). Despite the improvements achieved by non-coplanar IMRT, it is still not adopted at most treatment centers. The use of a non-coplanar
treatment trajectory significantly enhances the degree of freedom and flexibility but increases drastically the complexity of the optimization (Smyth et al. 2019).
In inverse planning optimization the dose contribution of each beam is pre-calculated and stored in a dose influence matrix of very large size. However, considering memory storage and optimization time, the matrix dimension has to remain modest for clinical use. Common methods limit the number of beams, i.e., the search space of possible beam orientations, by increasing the angular sampling interval. Similarly, the number of voxels describing the patient is also limited by resampling the CT image to a larger voxel spacing. Even fast dose algorithms, such as the collapsed cone algorithm (Hasenbalg et al. 2007) or the superposition-convolution algorithm (Jenkins et al. 2012), do not allow the dose contribution of a given beam to be computed in real time during the optimisation loop. Therefore, the dose influence matrix is usually pre-calculated and pre-loaded into the Treatment Planning System (TPS) for all potential beam directions. The size of the dose influence matrix becomes even more critical when moving from coplanar to non-coplanar IMRT, since the number of beams increases drastically. Such a strategy is usually considered time-consuming and storage-intensive (Jelen et al. 2005).
One solution would be to further reduce the number of voxels and the number of possible beam candidates to obtain a dose influence matrix of reasonable size. However, the full benefit of the non-coplanar approach could then not be demonstrated, because the quality of the treatment plan would deteriorate. Another solution is to calculate the dose contribution of each new candidate beam during optimization. This corresponds to no longer pre-calculating the entire influence matrix, but computing only the useful values "on the fly". This is only possible if a dose calculation engine is fast enough to be used online during the optimization iterations.
Recently, many researchers have introduced Artificial Intelligence (AI), especially deep learning technology, into medical applications. Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. Several medical deep-learning applications for dose prediction have proven successful.
Numerous studies have shown that dose maps can be predicted very quickly with deep learning (Nguyen et al. 2018, Nguyen et al. 2019, Fan et al. 2019). A first attempt to use an AI-based dose engine in a TPS was proposed by Liu et al. (2021). However, that work was limited to direct aperture optimization (DAO) for IMRT, meaning that beam selection was not considered; the aim was nevertheless the same, namely to reduce the dose influence matrix.
In this work we propose to go further by considering a deep learning-based dose engine that depends on the beam configuration, thereby bringing the work of Liu et al. (2021) to a higher level by considering beam angle selection as well. The proposed deep learning approach is combined with a simulated-annealing-based optimization method for non-coplanar IMRT. Since the dose engine computes the dose contribution on-line during the optimization, the main optimization method only needs to keep a very lightweight influence matrix in memory.
## 2 Materials and methods
The deep learning approach used to predict the dose is presented first in the following sections. This work is similar to previous works (Nguyen et al. 2018, Nguyen et al. 2019, Fan et al. 2019). However, for fast execution the input parameters, such as the beam angles, were injected directly into the latent space, which differs from the state-of-the-art methods. A dedicated section then describes how the optimization algorithm was combined with the proposed dose engine. Finally, an evaluation study is presented.
### Training dataset
To fully train our network, we need adequate dose maps under different beam directions together with the corresponding 3D CT images. To create such a dataset, we use an open-source program package called _matRad_ (Cisternas et al. 2015, Craft et al. 2014), which is a Matlab package for radiotherapy. We generated a dataset consisting of 40 patients with head and neck cancer. The CT images are retrieved from the Head-Neck-Radiomics-HN1 dataset (Wee & Dekker 2019) in The Cancer Imaging Archive (TCIA). We randomly split the first 30 patients' data into a training set and a validation set (24 patients for training and 6 patients for validation), and the remaining 10 patients form a testing set used to evaluate the performance of the network and of the whole algorithm. The original CT images were resized to \(64\times 64\times 64\) voxels. We use matRad to estimate the dose distribution. For a given beam angle, the dose influence matrix is calculated with matRad and the MLC aperture optimisation is solved using the DAO method, based on the planning target volume (PTV). The PTV was provided as a 3D binary image mask, and is also used as an input to the deep learning method. The final 3D dose map of the collimated beam was included in the training data set; its size is the same as that of the CT image, \(64\times 64\times 64\) voxels. The sampling angle was \(5^{\circ}\) for both the gantry and the couch angle, leading to 169 possible beam directions. Therefore, for the 40 patients we obtained 6760 sets in total, and for each of these sets we computed the corresponding dose map. All images and angles were normalized between 0 and 1. The final training data set was composed of the 3D CT image, the gantry and couch angle pair, the PTV mask, and the corresponding 3D dose map.
### Network architecture
The network for the dose prediction is based on a 3D U-Net architecture (Ronneberger et al. 2015), which is widely applied in medical applications. This 3D U-Net model, shown in Figure 1, uses the 3D CT image, the gantry and couch angle values, and the PTV mask as inputs. The output is the predicted 3D dose map with a size of \(64\times 64\times 64\) voxels, which corresponds to the dose distribution of the shaped beam according to the patient anatomy given by the CT and the PTV. The CT image and the PTV mask have the same size and are fed to the network as two input channels.
Gantry and couch angles are encoded using only 2 parameters, while the CT and PTV are encoded with almost half a million values due to the number of voxels. If we introduced the angle parameters directly as standard inputs at the beginning of the network, these two angles would be largely ignored by the network compared to the image inputs, because of the numerous downsampling and flattening operations. A solution consists in encoding the beam angles into a 3D image with the same dimensions as the CT and PTV. This is mostly achieved using a ray-tracing approach, which consists in drawing in 3D the oriented and shaped beam in an image. However, this approach increases the total time needed to recover a dose map, since a pre-processing ray-tracing step is required for each new beam orientation.
Therefore, the standard architecture was slightly modified to encode the couch and gantry angles directly in the latent space, through the bottleneck part. The normalized couch and gantry angles were concatenated with the one-dimensional array of the bottleneck and then passed to a fully connected layer, so that all neurons receive the beam angle information and it is merged into the network. After this fully connected layer, the result is reshaped and passed to the decoding (right) part of the U-Net. In every layer, the activation function is the ReLU (Rectified Linear Unit). The network architecture was developed and trained in Keras, using the TensorFlow backend.
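For illustration, the angle injection at the bottleneck can be written in Keras as in the much-reduced sketch below. The layer sizes, channel counts and names are illustrative only and are not those of the trained network; the optimizer and loss follow section 2.3.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_dose_unet(grid=64):
    ct_ptv = layers.Input(shape=(grid, grid, grid, 2), name="ct_and_ptv")      # two channels: CT + PTV mask
    angles = layers.Input(shape=(2,), name="gantry_couch_normalised")          # two scalars in [0, 1]

    # Encoder (far shallower and narrower than the real network)
    c1 = layers.Conv3D(8, 3, padding="same", activation="relu")(ct_ptv)                         # 64^3
    c2 = layers.Conv3D(16, 3, padding="same", activation="relu")(layers.MaxPooling3D(2)(c1))    # 32^3
    c3 = layers.Conv3D(16, 3, padding="same", activation="relu")(layers.MaxPooling3D(2)(c2))    # 16^3
    b = layers.Conv3D(8, 3, padding="same", activation="relu")(layers.MaxPooling3D(2)(c3))      # 8^3

    # Bottleneck: flatten, concatenate the two beam angles, fully connected layer, reshape
    flat = layers.Flatten()(b)                              # 8*8*8*8 = 4096 values
    flat = layers.Concatenate()([flat, angles])             # + gantry and couch angles
    flat = layers.Dense(8 * 8 * 8 * 8, activation="relu")(flat)
    b = layers.Reshape((8, 8, 8, 8))(flat)

    # Decoder with skip connections
    d3 = layers.Conv3D(16, 3, padding="same", activation="relu")(
        layers.Concatenate()([layers.UpSampling3D(2)(b), c3]))                                  # 16^3
    d2 = layers.Conv3D(16, 3, padding="same", activation="relu")(
        layers.Concatenate()([layers.UpSampling3D(2)(d3), c2]))                                 # 32^3
    d1 = layers.Conv3D(8, 3, padding="same", activation="relu")(
        layers.Concatenate()([layers.UpSampling3D(2)(d2), c1]))                                 # 64^3
    dose = layers.Conv3D(1, 1, activation="relu", name="dose")(d1)

    return Model(inputs=[ct_ptv, angles], outputs=dose)

model = build_dose_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
```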
### Network training
In the training phase, we use the adaptive momentum algorithm (ADAM) (Kingma & Ba 2014) to minimize the loss function, with a learning rate of \(1\times 10^{-4}\). The loss function is the mean-square error, defined as follows,
\[MSE=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2} \tag{1}\]
where \(i\) is the pixel index, \(n\) the total number of pixels, \(Y_{i}\) the ground truth image and \(\hat{Y}_{i}\) the predicted image. During the network training process, both training and
Figure 1: Architecture of the proposed network where the bottleneck part was modified to consider the gantry and couch inputs.
validation losses were monitored. The training process was stopped at the epoch where the validation loss ceased to improve, which occurred after approximately 400 epochs. To balance training efficiency and GPU capacity, the batch size was set to 8. The model was trained on an NVIDIA GP106GL [Quadro P2000] GPU with 6 GB of dedicated RAM.
### Non-coplanar IMRT optimization algorithm
Avoiding local minima can be difficult when there are a large number of optimization variables and constraints. Simulated Annealing (SA) is a probabilistic technique for approximating the global optimum of a given objective function by simulating a cooling process. This meta-heuristic algorithm allows optimization problems with large search spaces to be solved. The key point of the SA method is that when the temperature is high, the algorithm explores the solution space, even discarding candidates that improve the cost function, in order not to fall into a local minimum. The temperature is cooled down iteratively, and the algorithm slowly begins accepting solutions according to a probability computed from the amount of improvement brought by the considered solution, a random factor, and the current temperature. At the end, when the temperature is low, the method converges to a solution considered as an approximation of the global optimum.
This optimisation method has already been used successfully in numerous treatment planning optimisations (Villa et al. 2022). We propose an optimization scheme that combines the proposed dose engine with the SA method to optimize a non-coplanar IMRT treatment. This optimization scheme, named SA-DDL (Simulated Annealing Dose Deep Learning), is described in Figure 2.
The algorithm starts with a high temperature \(\mathbf{T}_{init}\) and an empty beam candidate set (formed of control points) \(\mathbf{B_{K}}\). At the beginning of each iteration, a new solution represented by the beam candidates \(\mathbf{B_{K_{new}}}\) is proposed by updating the current set \(\mathbf{B_{K}}\). This update is a random process that decides whether to add a new random control point, or to replace or remove one of the beam candidates at random. Subsequently, the proposed AI-based dose engine is used to estimate the dose distribution corresponding to the new solution \(\mathbf{B_{K_{new}}}\). Direct Aperture Optimization is not applied explicitly, since it is performed within the dose prediction based on the PTV mask: the AI prediction implicitly shapes the beam. The objective function is then computed. According to this new score, the simulated annealing method determines whether or not to accept the solution. If it is accepted, the beam candidate set \(\mathbf{B_{K_{new}}}\) replaces the current set \(\mathbf{B_{K}}\); if not, the current set \(\mathbf{B_{K}}\) is kept as it is. Finally, the temperature of the SA method is cooled down according to a cooling factor, and a new iteration starts. The optimization process stops when one of several conditions is reached, such as a maximum number of iterations, a minimal temperature, or a cost function that reaches a plateau.
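For illustration, the loop just described can be written schematically in Python as below. Here `predict_dose` stands for a call to the trained network of section 2.2 (accumulating the predicted dose of all control points in the candidate set), `objective_fn` for the cost of Equation (2) given below, and all numerical parameters are placeholders rather than the values used in this work; a standard Metropolis acceptance rule is used for simplicity.

```python
import math
import random

def sa_ddl(predict_dose, objective_fn, all_beams,
           t_init=1.0, t_min=1e-3, cooling=0.995, max_iter=5000):
    """Schematic SA-DDL loop. `all_beams` is the list of allowed (gantry, couch)
    control points; `predict_dose(beams)` returns the 3D dose of a candidate set."""
    beams = []                                        # start from an empty candidate set B_K
    current_cost = objective_fn(predict_dose(beams))  # empty plan assumed to give zero dose
    temperature = t_init
    for _ in range(max_iter):
        # Propose B_K_new: randomly add, replace or remove one control point
        candidate = list(beams)
        move = random.choice(["add", "replace", "remove"] if candidate else ["add"])
        if move == "add":
            candidate.append(random.choice(all_beams))
        elif move == "replace":
            candidate[random.randrange(len(candidate))] = random.choice(all_beams)
        else:
            candidate.pop(random.randrange(len(candidate)))

        cost = objective_fn(predict_dose(candidate))  # on-line AI dose prediction
        # Metropolis-type acceptance: always accept improvements, sometimes accept worse moves
        if cost < current_cost or random.random() < math.exp(-(cost - current_cost) / temperature):
            beams, current_cost = candidate, cost

        temperature *= cooling                        # cooling schedule
        if temperature < t_min:                       # one possible stopping criterion
            break
    return beams, current_cost
```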
The objective function is defined in Equation (2), where \(V_{PTV}\) denotes the PTV set and \(V_{OAR}\) denotes the Organ At Risk (OAR) set. A penalty factor \(p^{+}\) controls the
relative importance of the target tumor voxels and \(p^{-}\) controls the relative importance of the organ-at-risk voxels. The variable \(d^{P}\) denotes the prescribed dose and \(d\) the dose computed from the predicted dose map. The idea of the objective function is to minimize the least-squares deviation between the prescribed dose and the actual dose received by the tumor voxels.
\[f(d)=\frac{1}{N_{t}}\sum_{j\in V_{PTV}}p^{+}(d_{j}-d^{P}{}_{j})^{2}+\frac{1}{N_{ o}}\sum_{j\in V_{OAR}}p^{-}(d_{j}-d^{P}{}_{j})^{2} \tag{2}\]
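Assuming the predicted and prescribed doses are stored as 3D numpy arrays on the same \(64^{3}\) grid, with boolean masks selecting \(V_{PTV}\) and \(V_{OAR}\), Equation (2) can be transcribed directly (array and argument names are ours):

```python
import numpy as np

def objective_fn(dose, prescribed, ptv_mask, oar_mask, p_plus=1.0, p_minus=1.0):
    """Weighted least-squares objective of Eq. (2).

    dose, prescribed   : 3D arrays of the predicted and prescribed dose
    ptv_mask, oar_mask : boolean arrays selecting the voxels of V_PTV and V_OAR
    """
    ptv_term = p_plus * np.mean((dose[ptv_mask] - prescribed[ptv_mask]) ** 2)   # (1/N_t) sum over V_PTV
    oar_term = p_minus * np.mean((dose[oar_mask] - prescribed[oar_mask]) ** 2)  # (1/N_o) sum over V_OAR
    return ptv_term + oar_term
```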
## 3 Evaluation Study
### AI-based dose engine validation
We first evaluate the dose engine provided by the deep learning method. We compute the predicted dose maps for the 10 patients of the testing dataset and compare the results with the dose maps from the matRad dose engine. Since each patient is treated with different beam directions, for each of these patients we calculated the dose map for several beams. In order to estimate dose maps for realistic beam directions, a treatment plan was first calculated for each patient using matRad. Then for each patient 169
Figure 2: Scheme that summarize the proposed SA optimization method with dose engine from deep learning approach
beam directions were considered. The network performance was mainly assessed using the mean dose and the relative dose difference ratio for the PTVs and OARs. These are defined as follows:
1. **Mean Dose** Mean of the dose absorbed by all voxels in a specific region of interest of the patient. Given \(\mathbf{N}\) voxels in a region of interest \(\mathbf{V}\), with each voxel \(j\) receiving dose \(d_{j}\), it is calculated using the relation: \[Dose_{Mean}=\frac{1}{N}\sum_{j=1}^{N}d_{j},\qquad j\in\mathbf{V}\] (3)
2. **Relative Dose Difference Ratio** Relative difference of the radiation dose between two dose values \(Dose_{1}\) and \(Dose_{2}\): \[Relative\ Difference\ Ratio=\left|\frac{Dose_{1}-Dose_{2}}{Dose_{1}}\right|\times 100\%\] (4) A short numpy sketch of both metrics is given after this list.
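Both metrics are straightforward to evaluate on dose arrays; a minimal numpy version (variable names are ours) is:

```python
import numpy as np

def mean_dose(dose, roi_mask):
    """Mean dose over a region of interest (Eq. 3); roi_mask is a boolean array."""
    return dose[roi_mask].mean()

def relative_difference(dose_1, dose_2):
    """Relative dose difference ratio in percent (Eq. 4)."""
    return abs((dose_1 - dose_2) / dose_1) * 100.0
```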
### Non-coplanar Treatment Planning
The aim of this evaluation was to show that the AI-based dose engine does not change the treatment plan with respect to a standard dose engine, here the one provided by matRad. The purpose of using the AI-based dose engine is not to improve the quality of the treatment plan, but to reduce the optimisation time and storage needs.
This was evaluated using the 10 patients from the test set. For each of these patients, a non-coplanar IMRT treatment plan was optimized using SA with the matRad dose engine (SA-matRad) and using SA with the deep learning dose engine (SA-DDL). The same parameters (prescribed dose, OAR dose constraints, PTV, etc.) were used for both optimizations. In addition to the mean dose and the relative dose difference ratio, Dose Volume Histograms (DVH) were also estimated.
### Computing time and memory storage
For both optimization methods, the computing time and the memory storage were monitored. The computing time was measured for each iteration. The memory storage measures the data size needed to optimize the beam selection, mainly the dose influence matrix, but also the size of the deep learning model.
## 4 Results
### AI-based dose engine
Figure 3 shows the comparison between the matRad planned dose map and the predicted dose map. Column (a) is the dose map generated by matRad, column (b) is the predicted dose map, and column (c) is the difference between the first two columns.
The prediction accuracy for the 10 test patients, for the target volumes and organs at risk, is shown in Figure 4. The results show that the average error for most OARs and PTVs is close to \(5\pm 3\%\). For example, in patient 1, the
brain shows a mean relative error of \(1.75\pm 0.49\%\), while the spinal cord, which occupies few voxels, yields a value of \(9.85\pm 2.37\%\). This average error is higher than for the other PTVs and OARs; this is explained by the fact that the spinal cord's area in the CT images is very small compared to other tissues, so a small difference in pixel value may have a noticeable influence on the dose computation. DVHs for an arbitrarily selected patient within the test data set are plotted in Figure 5 for the dose maps obtained with the deep learning approach and with the matRad dose engine. The results show a good agreement between the DVHs. All of these results validate the suitability of the deep learning solution to estimate dose maps with sufficient accuracy for treatment planning optimisation.
### Non-coplanar Treatment Planning
Table 1 presents the dose comparison between the treatment plans obtained with the simulated annealing approach using the deep learning dose engine (SA-DDL) and those obtained with simulated annealing and the matRad dose engine (SA-matRad). These results are averaged over the 10 patients of the test data set. We can see that there is no obvious sign that one method is better than the other regarding dose, in particular for the GTV.
DVHs for the dose maps from the two treatment plans are shown in Figure 6. Doses from SA-DDL were slightly lower, except for the spinal cord and the brain. We can therefore observe that the proposed method using the deep learning dose engine performs at least at the same level of quality and can be considered equivalent. Of course
Figure 3: Dosemap comparison example **(a)** dosemaps generated in matRad **(b)** predicted dosemaps **(c)** absolute difference between **(a)** and **(b)**. Each row represents a different 2D slice of the patient head.
an extended evaluation with more clinical data should be carried out to confirm this preliminary observation.
As shown in Table 2, the main advantage of the proposed method is the small amount of memory storage, 10 MB, required for the optimization of a non-coplanar treatment plan. Standard methods that use a pre-calculated dose matrix, like SA-matRad, need 9 GB of memory storage. Since no access to a large data structure is required with the deep learning approach, and since GPU-based prediction is fast, SA-DDL was also faster, with 6 s per iteration compared to 180 s for SA-matRad.
Figure 4: Prediction accuracy for the 10 test patients. The relative error between the predicted dose (dashed lines) and the matRad dose engine (solid lines) is given for each target volume and organ at risk.
Figure 5: DVH comparison between the dose map from the deep learning prediction and that from the matRad dose engine, for an arbitrary patient within the test set, for \(gantry=180^{\circ},couch=180^{\circ}\). Panels (a) and (b) represent the different OARs and PTVs.
## 5 Discussion
In this manuscript, we developed a new optimization algorithm, SA-DDL, for non-coplanar IMRT treatment planning. This algorithm uses the simulated annealing optimization method to find the optimal beam directions; when computing the dose cost at the different control points, we use a pre-trained neural network to predict the dose distribution.
The performance of the deep learning approach has been evaluated. The average dose difference for most OARs and PTVs was around 5%, with a standard deviation close to 3%, which is acceptable for fast treatment planning optimisation. The DVHs show that the predicted dose maps are close to the matRad-generated dose maps. Therefore, we can conclude that our method is reliable and is able to predict the dose map for different beam configurations.
Treatment plans from SA-matRad and from the proposed SA-DDL method were compared. The results mainly show equivalence in terms of dosimetry, which was the main objective of the study, in order to prove that the method that reduces memory storage by using on-line dose calculation does not introduce approximations within the optimization. However, both plans are not easy to interpret, since some of the OAR doses are higher for the SA-matRad method compared to SA-DDL and some are
\begin{table}
\begin{tabular}{l l l} \hline \hline
**OARs**\(\&\) **PTVs** & **Relative Mean Dose** & **STD** \\ \hline Spinal Cord & \(-6.86\%\) & \(0.283\) \\ Neck Right & \(12.00\%\) & \(0.184\) \\ Neck Left & \(11.96\%\) & \(0.155\) \\ Submandibular Gland Right & \(2.35\%\) & \(0.089\) \\ Submandibular Gland Left & \(-0.89\%\) & \(0.261\) \\ Parotid Right & \(7.65\%\) & \(0.227\) \\ Parotid Left & \(2.92\%\) & \(0.301\) \\ Oral Cavity & \(4.07\%\) & \(0.165\) \\ Medulla Oblongata & \(-2.01\%\) & \(0.255\) \\ Brain & \(3.71\%\) & \(0.313\) \\ GTV2 & \(-3.54\%\) & \(0.079\) \\ GTV1 & \(3.04\%\) & \(0.097\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average relative dose difference between the non-coplanar treatment plans from SA-matRad and SA-DDL for the 10-patient test set.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Methods** & **Iteration time** & **Data size** \\ \hline SA-matRad (pre-calculated dose matrix) & 180s & 9GB \\ SA-DDL (on-line dose calculation) & 6s & 10 MB \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of iteration time and memory storage between SA-matRad and SA-DDL.
lower. A larger study with more clinical data should be carried out in order to confirm whether both are statistically equivalent.
In any case, the aim of this paper was a proof of concept showing that a deep learning dose engine can be used to drastically reduce the memory usage, from 9 GB to 10 MB, during treatment planning optimisation in non-coplanar beam radiotherapy.
## Acknowledgement
This work was partially supported by Brest metropole oceane.
Figure 6: DVH comparison between dose map from treatment plan calculated using SA-matRad (solid lines) and SA-DDL (dashed lines) |
2306.11663 | Not so fast, not so furious: just magnetic | WD0810-353 is a white dwarf within the 20pc volume around the Sun. Using Gaia
astrometric distance and proper motions, and a radial velocity derived from
Gaia spectroscopy, it has been predicted that this star will pass within 1pc of
the Solar System in about 30kyr. However, WD0810-353 has been also shown to
host a magnetic field with strength of the order of 30MG. Its spectrum is
therefore not like those of normal DA stars of similar effective temperature.
We have obtained and analysed new polarised spectra of the star around Halpha.
Our analysis suggests that the visible surface of the star shows two regions of
different field strength (~30 and ~45MG, respectively), and opposite polarity.
The spectra do not change over a 4 year time span, meaning that either the
stellar rotation period is no shorter than several decades, or that the field
is symmetric about the rotation axis. Taking into account magnetic shift and
splitting, we obtain an estimate of the radial velocity of the star (+83+/-
140km/s); we reject both the value an the claimed precision deduced from the
Gaia DR3 spectroscopy (-373.7+/- 8.2km/s), and we conclude that there will
probably be no close encounter between the Solar System and WD0810-353. We also
reject the suggestion that the star is a hypervelocity runaway star, a survivor
of a Type Ia Supernova explosion. It is just a stellar remnant in the Solar
neighborhood with a very strong and complex magnetic field. | Landstreet, J. D., Villaver, E., Bagnulo, S | 2023-06-20T16:35:59Z | http://arxiv.org/abs/2306.11663v1 | # Not so fast, not so furious: just magnetic
###### Abstract
WD 0810-353 is a white dwarf within the 20 pc volume around the Sun. Using _Gaia_ astrometric distance and proper motions, and a radial velocity derived from _Gaia_ spectroscopy, it has been predicted that this star will pass within 1 pc of the Solar System in about 30 kyr. However, WD 0810-353 has also been shown to host a magnetic field with strength of the order of 30 MG. Its spectrum is therefore not like those of normal DA stars of similar effective temperature. We have obtained and analysed new polarised spectra of the star around H\(\alpha\). Our analysis suggests that the visible surface of the star shows two regions of different field strength (\(\sim 30\) and \(\sim 45\) MG, respectively), and opposite polarity. The spectra do not change over a 4 year time span, meaning that either the stellar rotation period is no shorter than several decades, or that the field is symmetric about the rotation axis. Taking into account magnetic shift and splitting, we obtain an estimate of the radial velocity of the star (\(+83\pm 140\) km s\({}^{-1}\)); we reject both the value and the claimed precision deduced from the _Gaia_ DR3 spectroscopy (\(-373.7\pm 8.2\) km s\({}^{-1}\)), and we conclude that there will probably be no close encounter between the Solar System and WD 0810\(-\)353. We also reject the suggestion that the star is a hypervelocity runaway star, a survivor of a Type Ia Supernova explosion. It is just a stellar remnant in the Solar neighborhood with a very strong and complex magnetic field.
Stellar magnetic fields (1610) -- white dwarf stars (1799) -- Close encounters (255) -- Spectropolarimetry (1973) -- Solar Neighborhood (1509)
John D. Landstreet, Eva Villaver, Stefano Bagnulo
## 1 Introduction
Finding the stars that might have experienced any form of interaction with the Solar System in the past, or those that will do so in the future, is an interesting endeavour which carries important long-term overall implications. A close stellar encounter, or flyby, if it happens with a small impact parameter, has the potential to cause a major disruption in the structure of our planetary system. But even stellar encounters at larger distances, of the order of 0.5-1 pc, are expected to be very disruptive, as they can dynamically stir the collection of small bodies that populate the outer Solar System (Wysoczanska et al., 2020). Hypothetically, even those long-range encounters have the potential to affect life on Earth via the temporary enhancement of the cometary influx from the Oort cloud (see e.g. Fernandez & Ip, 1987; Dybczynski, 2006). Furthermore, close stellar encounters, along with the interaction with the tidal field of the galaxy (Heisler & Tremaine, 1986; Portegies Zwart et al., 2021), play an important role in the complex chronology of events that lead to the shaping of the Oort cloud (Oort, 1950).
Building the recent history and near future of rubs and scuffs of the Sun with other stars requires an accurate knowledge of current positions and velocities, and the identification of these encounters has proliferated (see e.g. Dybczynski & Berski, 2015; de la Fuente Marcos & de la Fuente Marcos, 2019; Bailer-Jones et al., 2018; Torres et al., 2019; Bobylev & Bajkova, 2020) as expected with the advent of the _Gaia_ satellite data (Prusti et al., 2016), a game changer in this field.
Recently, the exquisite-quality astrometric and photometric data provided by the ESA mission _Gaia_ have been complemented by its third data release (DR3; Vallenari et al., 2022). DR3 includes low-resolution (XP) spectral scans, based on the BP/RP spectrometers, for some 220 million sources (De Angeli et al., 2022; Montegriffo et al., 2022). DR3 also reports about 30 million radial velocities obtained with the Radial Velocity Spectrometer (RVS) by cross-correlating medium-resolution spectra of a short spectral window
around the Ca ii infrared triplet with an appropriate template for each star (Vallenari et al., 2022). The resulting RVs are an extremely valuable source of information for the study of the movement of stars that otherwise do not have any radial velocity (RV) measurement in the literature. This is the case for the star of this paper, the stellar remnant WD 0810-353, a white dwarf (WD) reported to have a RV of \(-373.7\pm 8.2\) km s\({}^{-1}\) that, among the millions of stars analyzed in the _Gaia_ DR3, is one of the handful of special stars that might be heading straight towards us in the near future.
WD 0810-353 was first identified as a new candidate predicted to experience a close encounter with the Solar System using the _Gaia_ RV value by Bobylev & Bajkova (2022). In only \(\approx 29\) kyr, the WD is expected to pass at a minimum distance of 0.150\(\pm\)0.003 pc from the Sun, the third closest of such identified encounters. This close encounter was discussed again by Bailer-Jones (2022) who flagged it as suspicious based on a possibly incorrect RV determination by _Gaia_. The argument not to trust the _Gaia_ RV measurement was twofold: the expected typical featureless WD IR spectrum (especially in the presence of a strong magnetic field), and the fact that the DR3 RV pipeline does not include any WD template. de la Fuente Marcos & de la Fuente Marcos (2022) studied this star in more detail, and considered various interpretations of the feature seen in the _Gaia_ spectra. Their conclusion was that the star is likely on a trajectory to approach the Sun in the near future. However, as an alternative, they suggested, on the basis of a weak, possibly blue-shifted H\(\alpha\) absorption feature in the XP spectrum, that the WD might instead have a very large radial velocity of about \(-4300\) km s\({}^{-1}\) and thus might be a hypervelocity runaway star. In any case, de la Fuente Marcos & de la Fuente Marcos (2022) called for a new, independent measurement of the radial velocity of the star.
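For context, the geometry of such a flyby can be estimated under the standard assumption of unaccelerated straight-line motion: a star at distance \(d\) with radial velocity \(v_{r}\) (negative when approaching) and transverse velocity \(v_{t}=4.74\,\mu d\) reaches a minimum distance \(d_{\rm min}=d\,v_{t}/\sqrt{v_{r}^{2}+v_{t}^{2}}\) at time \(t_{\rm min}=-d\,v_{r}/(v_{r}^{2}+v_{t}^{2})\). The short Python sketch below implements this relation; the numerical inputs are illustrative placeholders, not the measured astrometry of WD 0810\(-\)353.

```python
PC_KM = 3.0857e13   # kilometres per parsec
YR_S = 3.156e7      # seconds per year

def closest_approach(d_pc, pm_arcsec_yr, v_r_kms):
    """Straight-line flyby estimate: returns (d_min [pc], t_min [kyr]).

    d_pc         : current heliocentric distance [pc]
    pm_arcsec_yr : total proper motion [arcsec/yr]
    v_r_kms      : radial velocity [km/s], negative = approaching
    """
    v_t = 4.74 * pm_arcsec_yr * d_pc              # transverse velocity [km/s]
    v_tot2 = v_r_kms**2 + v_t**2
    d_min = d_pc * v_t / v_tot2**0.5              # impact parameter [pc]
    t_min = -d_pc * v_r_kms / v_tot2 * PC_KM / YR_S   # time of closest approach [yr]
    return d_min, t_min / 1e3

# Illustrative round numbers only (roughly reproducing the quoted encounter scale)
print(closest_approach(d_pc=11.0, pm_arcsec_yr=0.1, v_r_kms=-370.0))
```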
We recall that WD 0810\(-\)353 has been clearly identified as a magnetic WD (MWD) on the basis of distinct circular polarisation features seen in the spectrum around 4700-5000 A (Bagnulo & Landstreet, 2020). Using the blue polarised discovery spectra, it was not possible to model the field morphology but the star's atmosphere appeared to be hydrogenic, and the field strength was estimated to be of the order of 30 MG. The presence of such a strong magnetic field has a profound effect on the spectral lines of a star. In a "weak" field of, say, 1 MG or less, the H Balmer lines split into a simple triplet pattern of a central, undisplaced \(\pi\) component flanked by symmetrically displaced \(\sigma\) components. In a field of tens of MG or more, however, the splitting of hydrogen lines is far more complex. Each of the \(\pi\) and \(\sigma\) components of the normal Zeeman effect further splits and shifts, and the wavelengths of the resulting dozen or more components vary strongly with field strength in ways that have been calculated with high precision by the atomic physics community (e.g. Schimeczek & Wunner, 2014). The variations of component wavelengths with field strength are often represented graphically by a _spaghetti diagram_ like that of Fig. 1.
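For orientation, the familiar weak-field (linear Zeeman) splitting of the \(\sigma\) components is \(\Delta\lambda=e\lambda^{2}B/(4\pi m_{\rm e}c^{2})\simeq 4.7\times 10^{-13}\,\lambda^{2}[\mathrm{\AA}]\,B[\mathrm{G}]\) Å; the short snippet below evaluates it for H\(\alpha\). This back-of-the-envelope formula is only indicative: at the \(\sim\)30 MG fields discussed here it breaks down completely, and the detailed calculations of Schimeczek & Wunner (2014) used in Fig. 1 are required.

```python
def linear_zeeman_shift(lambda_angstrom, b_gauss):
    """Weak-field (linear Zeeman) shift of the sigma components in Angstrom,
    Delta_lambda = e * lambda^2 * B / (4 pi m_e c^2) ~= 4.67e-13 * lambda^2 * B.
    Valid only well below ~10 MG; not applicable at the 30-45 MG fields of WD 0810-353."""
    return 4.67e-13 * lambda_angstrom**2 * b_gauss

h_alpha = 6562.8
for b in (1e4, 1e5, 1e6):   # 10 kG, 100 kG, 1 MG
    print(f"B = {b:.0e} G : sigma shift = {linear_zeeman_shift(h_alpha, b):6.2f} A")
```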
In principle, if we can establish accurately the strength of the field affecting one or several spectral line components, and so compute "rest" wavelengths for these features, we can still measure the stellar radial velocity. This requires that we have a sufficiently good knowledge of the magnetic field structure, and are able to identify the field value affecting discrete, even sharp, features. As no-one has modelled the magnetic field of WD 0810-353, we have no reason to assume that these effects were included in the _Gaia_ RV, and we conclude that _Gaia_ pipeline RV determination of this object is likely to be incorrect.
In this paper we present new polarised spectra of WD 0810\(-\)353, and derive a qualitative magnetic model of this intrinsically very interesting magnetic WD. Next, we estimate the RV of the star using the guidelines described above, and we re-examine the question of a prob
Figure 1: The spaghetti diagram showing the variations of the wavelengths of components of the Balmer H\(\alpha\) as a function of magnetic field strength between 0 and 55 MG (the thickness of the various lines indicates schematically the relative strengths of the components). Data from Schimeczek & Wunner (2014).
able impact parameter of the near future encounter of this star with the Solar System.
## 2 The Star
WD 0810\(-\)353 (= UPM J0812-3529 = Gaia DR3 5544743925212648320) was discovered to be a nearby WD by Finch et al. (2018). It was subsequently listed as a hydrogen-rich WD in the _Gaia_ DR2-based 20 pc sample WD catalogue of Hollands et al. (2018), who estimated its \(T_{\rm eff}\) to be 6217\(\pm\)9 K and \(\log g=8.17\pm 0.01\) [dex]. Jimenez-Esteban et al. (2018) classified this object as a non-DA, based on VOSA1 WD model spectra fits to the SED using synthetic J-PAS photometry derived from the Gaia spectra. No Sloan Digital Sky Survey (SDSS) spectrum of the target is reported by Gentile Fusillo et al. (2019), but recently, O'Brien et al. (2023) have classified this star as a DC using a Goodman flux spectrum.
Footnote 1: [http://svo2.cab.inta-csic.es/theory/vosa/](http://svo2.cab.inta-csic.es/theory/vosa/)
Spectropolarimetric data of the star were published by Bagnulo and Landstreet (2020). These observations covered the range 3700-5100 A with a spectral resolving power \(R\sim 1200\), and the range 3600\(-\)6200 A at \(R\sim 600\) (see Table 1). They revealed a flux spectrum displaying only a few very shallow features, possibly due to flat-field artifacts. However, the circular polarisation spectra reveal firmly detected features of 0.5% amplitude (but no obvious continuum polarisation). Bagnulo and Landstreet (2020) associated these observed features with the position of two components of the magnetically split H\(\beta\) line, and a broad polarisation hump around 5900 A with the position of the blue \(\sigma\) components of H\(\alpha\), concluding that the WD is a DAH star with an atmosphere composed of H. They estimated a field strength of the order of 30 MG. Figure 6 of Bagnulo and Landstreet (2020), which displays the variations of the various components of H\(\beta\) as a function of field strength (Schimeczek and Winner, 2014), reveals that with such a large field, virtually none of the magnetically split components of that line are near their zero-field wavelengths.
## 3 New Observations
The observations obtained by Bagnulo and Landstreet (2020) were missing most of the region expected to contain the components of the H\(\alpha\) line, which is often the easiest line to interpret. Two new circular polarisation spectra were therefore obtained with the FORS2 instrument (Appenzeller et al., 1998) of the ESO VLT, one around H\(\alpha\) with \(R\sim 2100\), and one covering the range \(3700-9000\) A at \(R\sim 440\). One additional high-resolution spectrum (\(R\sim 60,000\)) was obtained with the ESPaDOnS instrument of the Canada-France-Hawaii Telescope (CFHT), but its S/N was not high enough to reveal the very weak spectral features, either in flux or in polarisation. All data were obtained and reduced as explained by Bagnulo and Landstreet (2018), to which we refer the reader also for the definition of the sign of \(V/I\). Spectra were aligned with sky lines, and corrected for a gravitational redshift of 47 km s\({}^{-1}\), as well as for heliocentric velocity. The observing log is given in Table 1.
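The 47 km s\({}^{-1}\) correction applied above is the usual white dwarf gravitational redshift, \(v_{\rm g}=GM/(Rc)\); the snippet below illustrates its size for representative (not fitted) values of the mass and radius of a cool DA white dwarf.

```python
from astropy import units as u
from astropy.constants import G, c, M_sun, R_sun

def gravitational_redshift(mass, radius):
    """First-order gravitational redshift v_g = G M / (R c) of light escaping
    the stellar surface, expressed as an equivalent velocity."""
    return (G * mass / (radius * c)).to(u.km / u.s)

# Illustrative values typical of a cool DA white dwarf (not a fit to WD 0810-353)
print(gravitational_redshift(0.70 * M_sun, 0.011 * R_sun))   # ~40 km/s, same order as the 47 km/s adopted in the text
```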
The new FORS2 flux and polarisation spectra are shown in Fig 2, together with those previously obtained by Bagnulo and Landstreet (2020). All unpolarised flux (\(I\)) spectra were divided by spectra of DC WDs obtained with the same setting, then normalised with a polynomial. This normalisation procedure is somewhat arbitrary, and we were not able to assess whether some shallow and broad feature are flatfield artifacts, or the effect of a strong magnetic field on the spectral energy distribution. For example, with our polynomial fitting we removed a broad but shallow flux depression between 5500 and 6500 A, which may be in fact associated with the non-zero circular polarisation observed between 5300 and 6300 A. The aim of our flux normalisation was to enhance at least the strongest absorption features, particularly from 6375 to 6525 A, and from about 6975 to at least 7100 A (in between the strong O\({}_{2}\) B band from 6865 to 6935 A and the H\({}_{2}\)O \(\alpha\) band starting at about 7160 A). The \(V/I\) spectrum shows various structures, which are consistently seen in spectra obtained with different settings (in the common regions). Thanks to the use of the beam-swapping technique (e.g. Bagnulo et al., 2009), \(V/I\) has virtually no normalisation or flat-fielding problems; furthermore, it is largely unaffected by atmospheric absorption bands, except for the decreased S/N (because the Earth atmosphere does not polarise the radiation in transmission). There is little doubt that all polarisation features seen in our spectra are real and intrinsic to the star.
The new low-resolution spectrum taken with the 300V grism allow us to assess the _Gaia_ RVS radial velocity of the star based on template matching to three lines of the Ca ii infrared triplet at 8498, 8542 and 8662 A. In our spectrum there are no features whatever anywhere near the positions of these three lines. The flux spectrum is slightly noisy (at the level of \(\sim 1\) % or less), perhaps due to very weak hydrogen Paschen lines shifted into this region by magnetic splitting. It appears that the RVS template matching procedure probably settled on weak noise features to produce the RV value, and that the reported value of \(-373.7\pm 8.2\) km s\({}^{-1}\) is definitely spurious.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline DATE & UT & Exp & S/N & Instrument & Grism & Spectral & \(R\) & Reference & Color code \\ yyyy-dd-mm & hh:mm & (s) & per Å & & & range (Å) & & & in Fig. 2 \\ \hline
2019-03-24 & 01:14 & 2400 & 470 & FORS2 & 1200B & 3700–5100 & 1400 & Bagnulo \& Landstreet (2020) & \\
2020-01-08 & 03:03 & 3680 & 520 & FORS2 & 1200B & 3700–5100 & 1400 & Bagnulo \& Landstreet (2020) & blue \\
2020-01-08 & 04:18 & 3600 & 440 & FORS2 & 600B & 3700–6200 & 800 & Bagnulo \& Landstreet (2020) & light blue \\
2019-03-21 & 06:32 & 4188 & 146 & ESPaDOnS & & 3790–9200 & 60000 & this work & \\
2023-01-14 & 04:58 & 2400 & 365 & FORS2 & 1200R & 5700–7100 & 2100 & this work & red \\
2023-01-14 & 05:31 & 900 & 290 & FORS2 & 300V & 3700–9000 & 440 & this work & green \\ \hline \end{tabular}
\end{table}
Table 1: Log of the spectropolarimetric observations of WD 0810-353.
Figure 2: The various \(I\) and \(V/I\) spectra of WD 0810–353 obtained with the FORS2 instrument with different settings, as detailed in Table 1. The intensity spectrum was normalised to the pseudo-continuum, ignoring the regions around the major telluric bands. For the sake of clarity, the spectra obtained with grism 600B is plotted only at \(\lambda\gtrsim 5000\) Å.
## 4 The Magnetic Field
The new FORS2 spectra help us to obtain a clearer view of the stellar surface magnetic field.
First, by comparing the overlap regions of the new spectra and those obtained in 2019, we see that there is no evidence for variability either in \(I\) or in \(V/I\) on a time-scale of \(\sim 4\) yrs. Therefore, the field is either symmetric about the star's rotation axis, or the stellar rotation period is at least many decades long, a situation found for some other relatively old large-field MWDs such as Grw+70\({}^{\circ}\) 8247 (e.g. Bagnulo and Landstreet, 2019).
Next, consider the weak, complex flux absorption feature between about 6375 and 6525 A, which has no corresponding polarisation feature. The previous estimate of the field strength of the order of 30 MG by Bagnulo and Landstreet (2020) suggests that this feature is produced by three strong \(\pi\) components of H\(\alpha\). The breadth of the observed flux feature hints at a fairly large spread in field strength over the visible hemisphere, running from roughly 30 to 45 MG. This interpretation rules out the possibility suggested by de la Fuente Marcos and de la Fuente Marcos (2022) that the star has a radial velocity of the order of \(-4300\) km s\({}^{-1}\), which they obtained by assuming that the broad absorption feature in the low resolution _Gaia_ XP scan of WD 0810-353, corresponding to the H\(\alpha\)\(\pi\) components, is due to extreme Doppler shifting of a weak H\(\alpha\) line.
For a field of the order of 25-50 MG, Fig. 1 predicts the presence of the blue \(\sigma\) components of H\(\alpha\) at \(\sim 5700\) A, and of the red \(\sigma\) components at \(\sim 7150\) A. Assuming a smooth magnetic morphology, like that of a centred dipolar field, these \(\sigma\) components should have significant \(V/I\) signatures of opposite sign. However, the polarisation features of each group of \(\sigma\) components show both senses of polarisation at slightly different wavelengths. More specifically, the \(V/I\) spectrum shows a broad S-wave around 5700 A, and a more compact and complex S-wave around 7150 A. _The simplest interpretation of these features is that they reveal the existence of two areas on the WD surface of opposite polarity and somewhat different field strength._ The smooth positive polarisation bump between 5730 and 5960 A is paired with three negative features between 6930 and 7140 A, suggesting the presence of a region of positive longitudinal field (emerging flux lines) and mean field modulus \(\sim 30\) MG. The broad negative \(V/I\) feature around 5600 A is paired with the strong sharp positive \(V/I\) feature at 7190 A. The greater separation of these two features suggests that they reveal a region with field lines entering the star, and a larger field strength of \(\sim 45\) MG.
In Fig. 3 we explore this preliminary modelling more quantitatively. In the top panel, we again show the positions of the components of H\(\alpha\) and H\(\beta\) as a function of field strength, just as in Fig. 1. Superposed on this spaghetti diagram are two copies of the flux spectrum, one with the continuum offset at the level of \(\sim 31\) MG, and one with the continuum at the level of \(\sim 45\) MG. It can be seen that the assumption that the WD has one large region characterised by a field of \(\sim 31\) MG, and one with a field of roughly 45 MG, predicts strong magnetic \(\pi\) (non-polarised) components over the full width of the observed absorption feature around 6450 A. The weak absorption lines sandwiched between the O\({}_{2}\) B band and the H\({}_{2}\)O \(\alpha\) band also coincide with red \(\sigma\) components produced in the two main field strength areas.
Next, the bottom panel compares the Schimeczek-Wunner spaghetti diagram to polarisation spectra. This plot shows two copies of the \(V/I\) spectrum, one with its
Figure 3: The wavelengths of the components of H\(\beta\) and H\(\alpha\) as computed by Schimeczek and Wunner (2014) compared to the spectra of WD 0810–353. _Top panel:_ the normalised \(I\) spectrum is overplotted twice, once with the continuum offset to correspond with the 31 MG level and once with the 45 MG level. This shows the coincidence between the computed wavelength of the various line components and the observed absorption features. _Bottom panel:_ same as top panel, but for the \(V/I\) spectrum. (Note that the WD polarisation spectrum is not affected by atmospheric absorption bands.)
(zero) continuum offset to the level of 31 MG, and one offset to the level of 45 MG.
The \(V/I\) spectrum offset to 31 MG shows that the theoretical components of H\(\alpha\) coincide with a positive blue bump around 5800 A, and with four negative features close to 7000 A. The lack of individual features in the blue \(\sigma\) components arises because there is a significant spread in field strength in the region of positive field, which broadens the effect of each component, as shown by the rapid change with field strength of each of the blue magnetic \(\sigma\) components of H\(\alpha\). In the red, the rate of change of wavelength of the \(\sigma\) components is much slower, and the limited range of \(|B|\) in this region allows individual components to produce sharper absorption line-like features. Notice that the red components of H\(\beta\) also coincide with line-like features of the same sign around 4800 A. The copy of the \(V/I\) spectrum placed at a level of about 45 MG shows that computed \(\sigma\) components correspond to the broad negative depression around 5600 A, and to the two strong line-like features in the region of 7200-7300 A.
These two good matches thus reveal the presence of a region of field strength around 30 to 31 MG on the visible surface from which field lines emerge, and, somewhere else on the surface, of another region of field strength around 40 to 45 MG, with inward pointing field lines.
Without detailed polarised spectral modelling, for which we currently do not have adequate tools, we cannot provide a more detailed description of the WD surface field. A further obstacle to a more accurate modelling is that either the star does not rotate, or the field is symmetric about the rotation axis. Either way, this prevents us from seeing the field morphology from different viewing points.
## 5 The Stellar Radial Velocity
We next consider with what accuracy it is possible to estimate the radial velocity of the WD. All the features that one could use for this purpose have wavelengths that depend on the exact value of local field strength, which has substantial dispersion over the visible stellar disk. In the range between 30 and 45 MG, the wavelengths of the blue \(\sigma\) components of the lines change very rapidly with field strength, leading to very broad, blended features. In comparison, the \(\pi\) and the red \(\sigma\) line components vary more slowly with the field strength. Therefore we will concentrate on these components. Specifically, the \(\pi\) absorption line in the flux spectrum appears to have clear edges, which reflect the limited range in field strength in the two magnetic regions identified on the visible surface of WD 0810-353. Similarly, there are also a few red \(\sigma\) component absorption lines and polarisation features that have reasonably well-defined edges. We can match the positions of the blue edge of the \(\pi\) absorption line and the red limit of one or more \(\sigma\) components by simultaneously varying the upper limit of the field present in the stronger-field region and the RV shift of the spectrum. The same exercise can be carried out for the weaker-field region, using the red edge of the \(\pi\) absorption line and the blue edge of one or more \(\sigma\) features to determine simultaneously the strength in the weaker-field region and again the RV shift. This procedure provides two independent measurements of the stellar RV that we found, _a posteriori_, to be consistent with each other.
We carried out the measurements described above, by superposing copies of the flux and polarisation spectra on a plot of the Schimeczek-Wunner spaghetti diagram of component wavelengths as a function of field strength, shifting these spectra vertically (to optimise field strength limits) and shifting (and stretching) them horizontally (to optimise radial velocity shift). Because the edges of the features used are noisy, this procedure has been carried out visually. Uncertainties were also estimated visually by varying the positions of the fitted features until the fit is judged unsatisfactory.
We made our measurements on the FORS 1200R spectrum. Using the red-most edge of the broad \(\pi\) component of H\(\alpha\) near 6510 A, and the blue edge of the strongest of the red \(\sigma\) components near 7550 A, we confirmed that the weakest field strength is approximately \(30\pm 0.5\) MG, while we found \(RV=183\pm 200\) km s\({}^{-1}\). Using the blue edge of the apparent \(\pi\)\(I\) absorption component at about 6375 A and the red edge of the strongest \(\sigma\)\(V\) component at 7210 A we estimated that the strongest field present is about \(45.75\pm 0.5\) MG, and the corresponding radial velocity is \(-17\pm 200\) km s\({}^{-1}\). The average of these two measurements is \(RV=+83\pm 140\) km s\({}^{-1}\).
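The arithmetic behind these numbers is straightforward to reproduce. The short Python sketch below (with the quoted values as inputs; the equal-weight average and uncorrelated-error propagation are our assumptions) applies the non-relativistic Doppler relation and combines the two independent estimates.

```python
import numpy as np

C_KM_S = 299792.458   # speed of light in km/s

def radial_velocity(delta_lambda, lam_rest):
    """Non-relativistic Doppler shift: RV = c * (lambda_obs - lambda_rest) / lambda_rest."""
    return C_KM_S * delta_lambda / lam_rest

# Two independent RV estimates from the two field-strength regions (values quoted above).
rv  = np.array([183.0, -17.0])    # km/s
err = np.array([200.0, 200.0])    # km/s

rv_mean = rv.mean()                            # equal-weight average
rv_err  = np.sqrt(np.sum(err**2)) / len(rv)    # propagation for uncorrelated errors
print(f"RV = {rv_mean:+.0f} +/- {rv_err:.0f} km/s")    # ~ +83 +/- 140 km/s

# For scale: a 2 A shift at H-alpha (6562.8 A) corresponds to roughly 91 km/s.
print(f"{radial_velocity(2.0, 6562.8):.0f} km/s")
```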
The two measurements support our earlier conclusions about the range of local field strengths on the visible stellar surface, and yield two independent measurements of the RV of the star in satisfactory agreement with one another. The mean RV determined from our measurements is, not surprisingly, inconsistent with the large negative RV of \(-373.7\pm 8.2\) km s\({}^{-1}\) reported on the basis of the _Gaia_ RVS spectrum of WD 0810-353, and our large uncertainties are consistent with the questions that have been raised about the reported precision of that measurement.
## 6 Conclusions
We have obtained and analysed the polarised spectrum of WD 0810-353, a strongly magnetic cool WD in
the solar neighborhood. This spectrum shows complex weak flux absorption and polarisation features throughout most of the optical range. The H\(\alpha\)\(\pi\) component, which is not polarised, is magnetically shifted to the blue, between 6370 and 6520 A, where it appears as a broad and very shallow feature. The H\(\alpha\)\(\sigma\) components are shifted by hundreds of A to the blue and to the red. The higher Balmer lines, significantly weaker than H\(\alpha\), are similarly split and shifted, leading to further complex weak features in \(I\) and \(V\). We have interpreted these weak features as produced by the atmosphere of a strongly magnetic DA WD, with two important regions of overall opposite field line polarity, and typical strengths of about 31 (outward field) and 45 MG (inward field).
In parallel with our magnetic modelling, we have also estimated the star's radial velocity (\(+83\pm 140\) km s\({}^{-1}\)). The large uncertainty in our determination of the radial velocity is due to the fact that the dispersion of the magnetic field strength of WD 0810-353 leads to a large and poorly defined dispersion in the wavelengths of all absorption and polarisation features. Our result is in strong disagreement with that obtained in previous works (\(RV=-373.7\pm 8.2\) km s\({}^{-1}\)), which had led to the erroneously precise conclusion that WD 0810-353 will pass within a fraction of a parsec of the Solar System within \(\sim 30\) kyr (Bobylev & Bajkova, 2022). Furthermore, we are able to rule out the possibility that the strongly blue-shifted \(\pi\) component of H\(\alpha\) indicates that this star has a huge radial velocity of the order of \(-4300\) km s\({}^{-1}\) (de la Fuente Marcos & de la Fuente Marcos, 2022).
Nevertheless, WD 0810-353 is an intrinsically very interesting star. It is one of the closest strongly magnetic WDs to the Earth. With an age of almost 3 Gyr, it is entering the phase of its cooling life during which very strong magnetic fields emerge to the surface of middle-aged WDs (Bagnulo & Landstreet, 2021, 2022). It appears to have a fairly complex distribution of local field strength over the visible surface. It will certainly be worthwhile to carry out further spectropolarimetric monitoring and more detailed modelling of this object.
## 7 Acknowledgements
Based on observations obtained with data collected at the Paranal Observatory under program ID 110.25A1.001, and with the ESPaDOnS instrument on the Canada-France-Hawaii Telescope (CFHT) (operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii). All raw data and calibrations of FORS2 and ESPaDOnS data are available at the observatory archives: ESO archive at archive.eso.org and the Canadian Astronomical Data Centre at [https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/](https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/).
EV acknowledges support from the DISCBOOLO project funded by the Spanish Ministerio de Ciencia, Innovacion y Universidades under grant PID2021-127289NB-I00. JDL acknowledges the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number 6377-2016. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. We thank the Kavli Institute for Theoretical Physics (KITP) for hosting the program "White Dwarfs as Probes of the Evolution of Planets, Stars, the Milky Way and the Expanding Universe".
|
2302.12845 | Stochastic Operator Variance: an observable to diagnose noise and
scrambling | Noise is ubiquitous in nature, so it is essential to characterize its
effects. Considering a fluctuating Hamiltonian, we introduce an observable, the
stochastic operator variance (SOV), which measures the spread of different
stochastic trajectories in the space of operators. The SOV obeys an uncertainty
relation and allows finding the initial state that minimizes the spread of
these trajectories. We show that the dynamics of the SOV is intimately linked
to that of out-of-time-order correlators (OTOCs), which define the quantum
Lyapunov exponent $\lambda$. Our findings are illustrated analytically and
numerically in a stochastic Lipkin-Meshkov-Glick (sLMG) Hamiltonian undergoing
energy dephasing. | Pablo Martinez-Azcona, Aritra Kundu, Adolfo del Campo, Aurelia Chenu | 2023-02-24T19:00:01Z | http://arxiv.org/abs/2302.12845v2 | # Unveiling out-of-time-order correlators from stochastic operator variance
###### Abstract
We consider the dynamics generated by a fluctuating Hamiltonian. We introduce the concept of stochastic variance of operators and find their equation of motion. We show that the stochastic operator variance (SOV) is related to the out-of-time-order correlator (OTOC) introduced in the theory of quantum chaos to define a quantum Lyapunov exponent \(\lambda\). Our findings are illustrated in a stochastic Lipkin-Meshkov-Glick (sLMG) Hamiltonian undergoing energy dephasing, where the action of noise changes the stability region compared to the noiseless LMG, as demonstrated from the SOV-OTOC relation.
Noise is ubiquitous in Nature. At the quantum level, it often arises from interactions of the system under consideration with degrees of freedom whose detailed description is prohibitively difficult, i.e., an environment. Current quantum technologies are limited by the action of noise, motivating the focus on Noisy-Intermediate-Scale Quantum (NISQ) devices [1; 2]. In any experimental setting, tunable parameters such as Hamiltonian coupling constants may exhibit fluctuations due to interactions with the surrounding environment [3; 4; 5]. In this context, the dynamics of an ensemble of noisy realizations can be described in terms of the noise-averaged density matrix, which evolves according to a master equation describing non-unitary evolution [3; 4; 6]. Alternatively, noise can be utilized as a resource for the quantum simulation of open systems [4]. The study of fluctuations in noisy quantum systems is also connected to free probability [7; 8; 9].
The quest for understanding noise in chaotic systems has recently led to a flurry of activities exploring the signatures of quantum chaos when the dynamics is no longer unitary [10; 11; 12; 13; 14; 15; 16; 17]. Out-of-time-order correlators (OTOCs) offer an important diagnostic tool, which was initially proposed in the theory of superconductivity [18]. Their use experienced renewed interest in defining a quantum analog of the Lyapunov exponent [19; 20], which measures the exponential sensitivity to the initial conditions in chaotic systems and is universally bounded by the system's temperature [21]. The existence of a positive Lyapunov exponent classically is a necessary but not sufficient condition for the system to be chaotic--see e.g. [22; 23]. Similarly, the exponential growth of the OTOC is not a sufficient signature for quantum chaos but rather indicates _scrambling_ [24; 25]. OTOCs have been studied experimentally, where an evolution with the negative counterpart of the Hamiltonian is needed [26; 27], and in open systems, where their evolution is changed by dissipation [28; 29; 30; 31; 20].
In this Letter, we consider the dynamics generated by a stochastic Hamiltonian and go beyond the noise-averaged density matrix by defining the variance over the noise of an observable and characterizing its evolution. This notion is directly relevant to experiments, particularly in NISQ devices subject to various noise sources. Surprisingly, we find that the evolution of the noise-averaged variance relates to an OTOC, which connects fluctuations of the system with scrambling. This is pictorially represented in Fig. 1. Importantly, this connection may allow experimental measurement of OTOCs without the need to invert the sign of the Hamiltonian. We illustrate the SOV-OTOC relation in a stochastic generalization of the Lipkin-Meshkov-Glick (sLMG) model. The LMG model [32] describes an Ising spin chain with infinite-range interactions and exhibits scrambling at an unstable fixed point [24; 33; 34]. It can be realized experimentally with trapped ions [35] and is amenable to dynamical control techniques such as shortcuts to adiabaticity [36]. We consider fluctuations in the energy scale, which generate energy dephasing dynamics at the noise-average level. We find that the fluctuations alter the stability of the unstable fixed point of the LMG, characterized by the phase diagram of the sLMG.
Figure 1: **The SOV-OTOC relation.** Illustration of the connection between the stochastic operator variance (SOV) and the out-of-time-order correlator (OTOC). An operator \(\hat{A}\) evolves through different realizations (gray) of a stochastic Hamiltonian, as illustrated by its projections over the identity \(\hat{1}\) and another operator \(\hat{X}\). The noise-averaged evolution (black) follows Lindblad dissipative dynamics. The SOV \(\Delta\hat{A}_{t}^{2}\) characterizes the deviation of different trajectories (red). Its projection over the identity (blue) describes the evolution of the OTOC.
_Stochastic operator variance (SOV) and OTOC._-- Let us consider a system evolving under a Hermitian Hamiltonian \(\hat{H}_{0}\), and subject to classical noise \(\xi_{t}\) modulating the coupling constant of a Hermitian operator \(\hat{L}\), i.e.,
\[\hat{H}_{t}=\hat{H}_{0}+\sqrt{2\gamma}\,\xi_{t}\hat{L}, \tag{1}\]
where \(\gamma\) measures the coupling strength between the system and noise--we set \(\hbar=1\). The stochastic process is taken as real Gaussian white noise, that is, \(\langle\xi_{t}\rangle=0\) and \(\langle\xi_{t}\xi_{t^{\prime}}\rangle=\delta(t-t^{\prime})\). We introduce the Wiener process \(\mathrm{d}W_{t}\equiv\xi_{t}\mathrm{d}t\), which is convenient to deal with the formal treatment of Stochastic Differential Equations (SDEs) [37], and represent the stochastic averages by \(\langle\bullet\rangle\); quantum expectation values will be written explicitly taking the trace over the state density matrix, \(\mathrm{Tr}(\bullet\rho)\). The Schrodinger equation can be solved using the Stratonovich convention to deal with the noise and gives the evolution of any wave function as \(\ket{\psi_{t+\mathrm{d}t}}=\hat{U}_{\mathrm{d}t}\ket{\psi_{t}}\), with the propagator over a short time \(\mathrm{d}t\) reading
\[\hat{U}_{\mathrm{d}t}=e^{-i\hat{H}_{0}\mathrm{d}t-i\sqrt{2\gamma}\mathrm{d}W_ {t}\hat{L}}. \tag{2}\]
The evolution of a Hermitian operator \(\hat{A}\) in the Heisenberg picture is given by \(\hat{A}_{t+\mathrm{d}t}=\hat{U}_{\mathrm{d}t}^{\dagger}\hat{A}_{t}\hat{U}_{ \mathrm{d}t}\). Its equation of motion is conveniently obtained using Ito calculus rules [37], that set \(\mathrm{d}W_{t}^{2}=\mathrm{d}t\) and vanishing higher-order terms, \(\mathrm{d}t\,\mathrm{d}W_{t}=\mathrm{d}W_{t}^{k+1}=\mathrm{d}t^{k}=0\;\forall k>1\). Expanding the propagator according to those rules and introducing the differential \(\mathrm{d}\hat{A}_{t}=\hat{A}_{t+\mathrm{d}t}-\hat{A}_{t}\) yields the SDE for the evolution of \(\hat{A}_{t}\) under the stochastic Hamiltonian (1),
\[\mathrm{d}\hat{A}_{t}=\mathcal{L}^{\dagger}[\hat{A}_{t}]\mathrm{d}t+i\sqrt{2 \gamma}[\hat{L},\hat{A}_{t}]\mathrm{d}W_{t}, \tag{3}\]
where \(\mathcal{L}^{\dagger}[\hat{A}_{t}]=i[\hat{H}_{0},\hat{A}_{t}]-\gamma[\hat{L},[\hat{L},\hat{A}_{t}]]\) is the adjoint Lindbladian. Note that, by the Ito differentiation rule \(\mathrm{d}f[W_{t},t]=\big(\frac{\partial f}{\partial t}+\frac{1}{2}\frac{\partial^{2}f}{\partial W^{2}}\big)\mathrm{d}t+\frac{\partial f}{\partial W}\mathrm{d}W_{t}\), the Lindbladian reads \(\mathcal{L}^{\dagger}[\hat{A}_{t}]=\frac{\partial\hat{A}_{t}}{\partial t}+\frac{1}{2}\frac{\partial^{2}\hat{A}_{t}}{\partial W^{2}}\). So the dissipator can be seen as the double derivative over the noise. Upon averaging (3), all the linear terms in \(\mathrm{d}W_{t}\) vanish [37] and we find that the noise-averaged operator evolves with an adjoint Lindblad equation \(\mathrm{d}_{t}\langle\hat{A}_{t}\rangle=\mathcal{L}^{\dagger}[\langle\hat{A}_{t}\rangle]\). This corresponds to the standard evolution of an observable in an open quantum system with a Hermitian jump operator \(\hat{L}\) [38]. The formalism described so far has been introduced in [3] and used in [4] to engineer long-range and many-body interactions. Here, we focus on the stochastic variance of an observable.
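As an illustration of this noise averaging, the minimal Python sketch below (a single qubit with assumed parameters \(\hat{H}_{0}=\hat{\sigma}_{z}\), \(\hat{L}=\hat{\sigma}_{x}\), \(\gamma=0.2\)) propagates the Heisenberg-picture operator along many realizations of the short-time propagator of Eq. (2) and compares the noise average with a direct integration of the adjoint Lindblad equation.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0, L, A0, gamma = sz, sx, sz, 0.2
dt, nsteps, ntraj = 0.01, 200, 500          # total time t = 2

comm = lambda X, Y: X @ Y - Y @ X

def adjoint_lindblad(A):
    """Ldag[A] = i[H0, A] - gamma [L, [L, A]] (Hermitian jump operator)."""
    return 1j * comm(H0, A) - gamma * comm(L, comm(L, A))

# Stochastic trajectories: A -> Udag A U with U = exp(-i H0 dt - i sqrt(2 gamma) dW L), Eq. (2).
A_traj = np.stack([A0.copy() for _ in range(ntraj)])
for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt), size=ntraj)
    for k in range(ntraj):
        U = expm(-1j * (H0 * dt + np.sqrt(2.0 * gamma) * dW[k] * L))
        A_traj[k] = U.conj().T @ A_traj[k] @ U
A_avg = A_traj.mean(axis=0)

# Deterministic adjoint-Lindblad evolution (RK4) for comparison.
A_lind = A0.copy()
for _ in range(nsteps):
    k1 = adjoint_lindblad(A_lind)
    k2 = adjoint_lindblad(A_lind + 0.5 * dt * k1)
    k3 = adjoint_lindblad(A_lind + 0.5 * dt * k2)
    k4 = adjoint_lindblad(A_lind + dt * k3)
    A_lind = A_lind + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Small residual, limited by the time-step bias and Monte Carlo sampling.
print("max |<A_t>_noise - e^{Ldag t}[A_0]| =", np.abs(A_avg - A_lind).max())
```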
In order to find the variance, the second stochastic moment is needed \(\langle\hat{A}_{t}^{2}\rangle\). One may consider the initial operator to be \(\hat{A}^{2}\) instead of \(\hat{A}\), and find that it also follows the Lindblad evolution, \(\mathrm{d}_{t}\langle\hat{A}_{t}^{2}\rangle=\mathcal{L}^{\dagger}[\langle\hat {A}_{t}^{2}\rangle]\). Recall that the average is over realizations of the noise and that \(\langle\hat{A}_{t}^{2}\rangle\) is still an operator acting on the Hilbert space. Subtracting \(\mathrm{d}_{t}\langle\hat{A}_{t}\rangle^{2}\)\(=\langle\hat{A}_{t}\rangle\mathcal{L}^{\dagger}[\langle\hat{A}_{t}\rangle]+ \mathcal{L}^{\dagger}[\langle\hat{A}_{t}\rangle]\langle\hat{A}_{t}\rangle= \mathcal{L}^{\dagger}[\langle\hat{A}_{t}\rangle^{2}]+2\gamma[\hat{L},\langle \hat{A}_{t}\rangle]^{2}\) from both sides, we find the evolution of the SOV, \(\Delta\hat{A}_{t}^{2}=\langle\hat{A}_{t}^{2}\rangle-\langle\hat{A}_{t}\rangle^ {2}\), given by
\[\frac{\mathrm{d}(\Delta\hat{A}_{t}^{2})}{\mathrm{d}t}=\mathcal{L}^{\dagger}[ \Delta\hat{A}_{t}^{2}]-2\gamma[\hat{L},\langle\hat{A}_{t}\rangle]^{2}. \tag{4}\]
The SOV \(\Delta\hat{A}_{t}^{2}\) is an operator that characterizes the deviation of any (stochastic) operator \(\hat{A}_{t}\) from the noise-averaged operator in a stochastic evolution governed by the Hamiltonian (1)--see Fig. 1 for a scheme. Although its equation of motion depends on out-of-time-order terms like \(\hat{L}\langle\hat{A}_{t}\rangle\hat{L}\langle\hat{A}_{t}\rangle\), it can easily be computed from the evolution of \(\langle\hat{A}_{t}\rangle\) and \(\langle\hat{A}_{t}^{2}\rangle\). Indeed,
\[\Delta\hat{A}_{t}^{2}=e^{\mathcal{L}^{\dagger}t}[\hat{A}_{0}^{2}]-(e^{ \mathcal{L}^{\dagger}t}[\hat{A}_{0}])^{2}. \tag{5}\]
This operator is particularly relevant in the quantum simulation of open systems using NISQ devices, e.g., to engineer a given dissipative evolution. The average \(\langle\hat{A}_{t}\rangle\) evolves with the desired master equation, but the individual trajectories deviate as dictated by the variance \(\Delta\hat{A}_{t}^{2}\). Since the map \(e^{\mathcal{L}^{\dagger}t}[\bullet]\) is positive and unital, \(e^{\mathcal{L}^{\dagger}t}[\mathbb{1}]=\mathbb{1}\), Kadison's inequality [39; 40] ensures that \(e^{\mathcal{L}^{\dagger}t}[\hat{A}_{0}^{2}]\geq(e^{\mathcal{L}^{\dagger}t}[\hat{A}_{0}])^{2}\). Therefore the SOV is positive semidefinite, \(\Delta\hat{A}_{t}^{2}\geq 0\).
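A minimal numerical illustration of Eq. (5), for an assumed spin-1 example (\(\hat{H}_{0}=\hat{S}_{z}\), \(\hat{L}=\hat{S}_{x}\), \(\hat{A}_{0}=\hat{S}_{z}\), \(\gamma=0.3\)): the sketch evolves both \(\hat{A}_{0}\) and \(\hat{A}_{0}^{2}\) under the adjoint Lindbladian and checks that the resulting SOV is positive semidefinite, as guaranteed by Kadison's inequality.

```python
import numpy as np

# Spin-1 operators (assumed example system).
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)

H0, L, A0, gamma = Sz, Sx, Sz, 0.3
comm = lambda X, Y: X @ Y - Y @ X
Ldag = lambda A: 1j * comm(H0, A) - gamma * comm(L, comm(L, A))   # adjoint Lindbladian

def evolve(A, t, dt=1e-3):
    """Integrate dA/dt = Ldag[A] with RK4 up to time t, i.e. compute e^{Ldag t}[A]."""
    A = A.copy()
    for _ in range(int(round(t / dt))):
        k1 = Ldag(A)
        k2 = Ldag(A + 0.5 * dt * k1)
        k3 = Ldag(A + 0.5 * dt * k2)
        k4 = Ldag(A + dt * k3)
        A = A + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return A

t = 1.5
At  = evolve(A0, t)            # e^{Ldag t}[A_0]
A2t = evolve(A0 @ A0, t)       # e^{Ldag t}[A_0^2]
sov = A2t - At @ At            # Eq. (5)

eig = np.linalg.eigvalsh(0.5 * (sov + sov.conj().T))   # symmetrise for numerical safety
print("SOV eigenvalues:", np.round(eig, 6))            # all >= 0, per Kadison's inequality
```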
Remarkably, the expectation value of (4) for the fully-mixed state, \(\hat{\rho}=\hat{\mathbb{1}}/N\), gives a dissipative version of the OTOC, namely
\[\frac{1}{N}\frac{\mathrm{d}\mathrm{Tr}(\Delta\hat{A}_{t}^{2})}{\mathrm{d}t}=- \frac{2\gamma}{N}\mathrm{Tr}([\hat{L},\langle\hat{A}_{t}\rangle]^{2}). \tag{6}\]
OTOCs are typically defined from two operators as \(C_{t}=-\mathrm{Tr}([\hat{B}_{0},\hat{A}_{t}]^{2})/N\), and measure the exponential sensitivity on initial conditions in quantum chaotic systems [19]. Indeed, in a quantum system with scrambling, one expects \(C_{t}\sim\epsilon\,e^{\lambda_{\mathrm{d}}t}\) in the time window \(t_{s}\ll t\ll t_{\mathrm{E}}\) between the saturation time of two-point functions, \(t_{s}\sim 1/\lambda_{\mathrm{q}}\), and that of the OTOC, known as the Ehrenfest time, \(t_{\mathrm{E}}\sim\ln(\hbar^{-1})/\lambda_{\mathrm{q}}\) [21]. The main difference in our setting is that the evolved operator follows a dissipative dynamics, \(e^{\mathcal{L}^{\dagger}t}[\hat{A}_{0}]\), instead of a unitary evolution, \(e^{i\hat{H}t}\hat{A}_{0}e^{-i\hat{H}t}\). The connection between this OTOC and the SOV is pictorially shown in Fig. 1. It can be used to compute the Lyapunov exponent through
\[C_{t}=\frac{1}{2\gamma N}\frac{\mathrm{d}\mathrm{Tr}(\Delta\hat{A}_{t}^{2})}{\mathrm{d}t}\sim\epsilon\,e^{\lambda_{\mathrm{d}}t}, \tag{7}\]
where the exponential behavior holds only in systems with scrambling over the appropriate period \(t_{s}\ll t\ll t_{\mathrm{E}}\). Note that the SOV \(\Delta\hat{A}_{t}^{2}\) is an operator constructed from the knowledge of the evolution under different noise realizations. It gives an alternative experimental way to measure OTOCs without the need for reversing the sign of the Hamiltonian. The classical limit of this equation is similar, the only difference being that \(\Delta A_{t}^{2}\) becomes a function of time and the trace an average over a region of phase space [41].
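The identity (6) can be checked numerically along the same lines; the sketch below uses the same assumed spin-1 example and compares the finite-difference derivative of \(\mathrm{Tr}(\Delta\hat{A}_{t}^{2})/N\) with \(2\gamma C_{t}\), where \(C_{t}=-\mathrm{Tr}([\hat{L},\langle\hat{A}_{t}\rangle]^{2})/N\).

```python
import numpy as np

Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
H0, L, A0, gamma, N = Sz, Sx, Sz, 0.3, 3

comm = lambda X, Y: X @ Y - Y @ X
Ldag = lambda A: 1j * comm(H0, A) - gamma * comm(L, comm(L, A))

def rk4_step(A, dt):
    k1 = Ldag(A)
    k2 = Ldag(A + 0.5 * dt * k1)
    k3 = Ldag(A + 0.5 * dt * k2)
    k4 = Ldag(A + dt * k3)
    return A + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 1e-3, 1500
At, A2t = A0.copy(), A0 @ A0
tr_sov, otoc = [], []
for _ in range(nsteps):
    tr_sov.append(np.trace(A2t - At @ At).real / N)               # Tr(Delta A_t^2)/N
    otoc.append(-np.trace(comm(L, At) @ comm(L, At)).real / N)     # C_t of Eq. (6)
    At, A2t = rk4_step(At, dt), rk4_step(A2t, dt)

lhs = np.diff(tr_sov) / dt               # time derivative by finite differences
rhs = 2.0 * gamma * np.array(otoc[:-1])
print("max |lhs - 2*gamma*C_t| =", np.abs(lhs - rhs).max())   # small, O(dt) finite-difference error
```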
It is also relevant to compute the short-time decay \(C_{t}\sim C_{0}e^{-t/\tau_{D}}\) of the OTOC characterized by the dissipation time \(\tau_{D}=\left(2\gamma\text{Tr}([\hat{L},[\hat{L},\hat{A}_{0}]]^{2})/(C_{0}N) \right)^{-1}\) and the initial value \(C_{0}=\text{Tr}([\hat{L},\hat{A}_{0}]^{2})/N\). Interestingly, the dissipation time is related to the Hilbert-Schmidt norm of the dissipator acting on the initial operator--see SM. We next illustrate our findings in the Lipkin-Meshkov-Glick (LMG) model subject to energy dephasing, and characterize its Lyapunov exponent using the SOV-OTOC relation.
_Stochastic Lipkin-Meshkov-Glick (sLMG) model._--The LMG model describes the collective motion of \(N\) identical two-level systems fully connected to each other with the same coupling strength [32]. Its quantum Hamiltonian reads
\[\hat{H}_{\text{\tiny{LMG}}}=\Omega\hat{S}_{z}-\frac{2}{N}\hat{S}_{x}^{2}, \tag{8}\]
where \(\Omega\) is the energy difference of the two-level systems divided by their coupling strength, and \(\hat{S}_{j}\) are the general spin operators of dimension \(2S+1\). We stay in the sector \(S=N/2\). Since the total spin \(\hat{\mathbf{S}}^{2}=\hat{S}_{x}^{2}+\hat{S}_{y}^{2}+\hat{S}_{z}^{2}\) commutes with the spin operators, \([\hat{S}_{j},\hat{\mathbf{S}}^{2}]=0\), the total angular momentum is conserved. Due to time-translational symmetry, energy is conserved, and since there is only one degree of freedom, this model is integrable. If this continuous symmetry is broken by periodic kicks in \(\hat{S}_{x}^{2}\), the model turns into the known kicked top, which is chaotic and has been extensively used as a playground to study classical and quantum chaos [10].
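For concreteness, the short sketch below shows how the collective spin operators and the LMG Hamiltonian of Eq. (8) can be constructed numerically; the values \(\Omega=1.5\) and \(S=20\) are those used later for Fig. 2 and are otherwise illustrative.

```python
import numpy as np

def spin_operators(S):
    """Collective spin matrices S_x, S_y, S_z in the |S, m> basis (m = S, ..., -S)."""
    dim = int(round(2 * S)) + 1
    m = S - np.arange(dim)
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((dim, dim), dtype=complex)
    for i in range(1, dim):                    # S_+ |S, m> = sqrt(S(S+1) - m(m+1)) |S, m+1>
        Sp[i - 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
    Sx = 0.5 * (Sp + Sp.conj().T)
    Sy = -0.5j * (Sp - Sp.conj().T)
    return Sx, Sy, Sz

S, Omega = 20, 1.5                             # values used for Fig. 2
N = 2 * S
Sx, Sy, Sz = spin_operators(S)

H_LMG = Omega * Sz - (2.0 / N) * (Sx @ Sx)     # Eq. (8)

# For the sLMG of Eq. (9), the noise couples to H_LMG itself, so the jump operator of the
# noise-averaged (energy-dephasing) dynamics is L = H_LMG.
print("Hermitian:", np.allclose(H_LMG, H_LMG.conj().T), " dim =", H_LMG.shape[0])
```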
Here, we break time-translational symmetry by adding noise in the energy scale and consider
\[\hat{H}_{t}=\hat{H}_{\text{\tiny{LMG}}}(1+\sqrt{2\gamma}\xi_{t}). \tag{9}\]
This leads to dephasing in the energy eigenbasis at the ensemble level. The evolution of a noise-averaged observable and its SOV is depicted in Fig. 2(a). It shows the non-trivial evolution of the variance over the different operators. Fig. 2(b) shows the evolution of the Lyapunov exponent, computed from the SOV using Eq. (7). We recover the behavior expected for the dissipative OTOC in a finite system--with discrete non-degenerate energies--under strong dephasing, involving two exponential-decay regimes [42], the first one being dictated by \(\tau_{D}\). This illustrates how the SOV can be used to obtain the dissipative OTOC. One advantage of our method is that it also applies in the classical limit, when the energies are no longer discrete, as we detail below.
The classical limit of the Hamiltonian (8) is obtained by taking its expectation value over the SU(2) coherent state, \(|\zeta\rangle=\frac{e^{\zeta\hat{S}_{+}}}{(1+|\zeta|^{2})^{S}}\left|S,-S\right>\), in the thermodynamic limit, \(N\rightarrow\infty\). Following [33; 34], we introduce the canonical variables \(Q\) and \(P\) as \(\zeta=\frac{Q-iP}{\sqrt{4-(Q^{2}+P^{2})}}\) that yield the classical LMG
\[\begin{split} H_{\text{\tiny{LMG}}}&=\lim_{S \rightarrow\infty}\frac{1}{S}\left\langle\zeta|\hat{H}_{\text{\tiny{LMG}}}| \zeta\right\rangle\\ &=\frac{\Omega}{2}(Q^{2}+P^{2})-\left(Q^{2}-\frac{Q^{2}P^{2}}{4}- \frac{Q^{4}}{4}\right),\end{split} \tag{10}\]
where the terms of \(\mathcal{O}(1/N)\) are neglected--see SM for details. This model is integrable and exhibits an unstable fixed point at the origin, \(Q^{*}=P^{*}=0\), for \(0<\Omega<2\). Since scrambling originates from an unstable point, it is already present in the semiclassical limit and has been characterized in [33; 34; 24].
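The unstable fixed point can be verified directly from Eq. (10); the small sketch below encodes Hamilton's equations for the classical LMG, builds the Jacobian at \((Q,P)=(0,0)\) by finite differences, and checks that its positive eigenvalue equals \(\sqrt{2\Omega-\Omega^{2}}\) for \(0<\Omega<2\).

```python
import numpy as np

def flow(z, Omega):
    """Hamilton's equations for the classical LMG of Eq. (10): dQ/dt = dH/dP, dP/dt = -dH/dQ."""
    Q, P = z
    dQ = Omega * P + 0.5 * Q**2 * P
    dP = (2.0 - Omega) * Q - 0.5 * Q * P**2 - Q**3
    return np.array([dQ, dP])

Omega, eps = 1.2, 1e-6
# Jacobian at the fixed point (0, 0) by central finite differences (columns = d/dQ, d/dP).
J = np.column_stack([(flow(np.array(e) * eps, Omega) - flow(-np.array(e) * eps, Omega)) / (2 * eps)
                     for e in [(1.0, 0.0), (0.0, 1.0)]])
print("Jacobian eigenvalues                 :", np.round(np.linalg.eigvals(J), 6))
print("expected +/- sqrt(2*Omega - Omega**2):", np.sqrt(2 * Omega - Omega**2))
```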
Here, we consider the classical equivalent of (9), namely, \(H_{t}=H_{\text{\tiny{LMG}}}(1+\sqrt{2\gamma}\xi_{t})\). A few realizations of this model are presented in Fig. 3. The evolution of the noise-averaged observable displays the classical analog of energy dephasing, namely \(\partial_{t}\langle A_{t}\rangle=-\{H_{\text{\tiny{LMG}}},\langle A_{t}\rangle\}_{p}+2\gamma\{H_{\text{\tiny{LMG}}},\{H_{\text{\tiny{LMG}}},\langle A_{t}\rangle\}_{p}\}_{p}\), where \(\{f,g\}_{p}\) denotes the Poisson bracket of \(f\) and \(g\). We characterize the Lyapunov exponent by three complementary methods: (i) First, analytically. At the origin, the LMG can be linearized into the harmonic oscillator \(H=\frac{1}{2}[\Omega P^{2}+(\Omega-2)Q^{2}]\). Hamilton's equations of motion give \(\dot{Q}\) and \(\dot{P}\), from which the evolution of the quadratic terms \(\mathbf{u}_{t}\equiv(Q_{t}^{2},P_{t}^{2},Q_{t}P_{t})^{T}\) follows as \(\dot{\mathbf{u}}_{t}=\mathbb{A}_{d}\mathbf{u}_{t}\)--the matrix \(\mathbb{A}_{d}\) is explicit in the SM. Its largest eigenvalue, \(\sqrt{2\Omega-\Omega^{2}}\), gives the known Lyapunov exponent of the LMG [33]. For the sLMG, the equations of motion of the vector \(\mathbf{u}_{t}\) are described by an SDE. Following van Kampen [43; 44], we find that \(\partial_{t}\langle\mathbf{u}_{t}\rangle=(\mathbb{A}_{d}-\gamma\mathbb{A}_{d}^{2})\langle\mathbf{u}_{t}\rangle\). This gives the average Lyapunov exponent [45] as
\[\lambda=\sqrt{2\Omega-\Omega^{2}}-\gamma(2\Omega-\Omega^{2}). \tag{11}\]
(ii) Second, numerically. The standard, classical definition
Figure 2: **Evolution of (a) the operator noise average and stochastic variance, and (b) the OTOC \(C_{t}\) for the quantum sLMG model**. The operator \(\hat{A}_{0}=(\hat{S}_{x}+\hat{S}_{y}+\hat{S}_{z})/\sqrt{3}\) evolves under the stochastic Hamiltonian (9) with \(\gamma=2\), \(\Omega=1.5\), and \(S=20\). (a) Its components over the spin \(\hat{S}_{j}\) as a function of time (color scale) are represented using the Hilbert-Schmidt inner product, \(s_{j}(t)=(\langle\hat{A}_{t}\rangle,\hat{S}_{j})\) (see SM). The error bars represent the square root of the SOV \(((\Delta\hat{A}_{t}^{2})^{1/2},\hat{S}_{j})\). (b) Dissipative OTOC obtained from the SOV-OTOC relation (6) (solid line) and short-time expansion (dashed line) across the phase transition—at \(\Omega_{c}=2\).
of the average Lyapunov exponent gives
\[\lambda=\left\langle\lim_{t\to\infty}\frac{1}{t}\ln\!\left(\frac{\sqrt{(Q_{t}\!-\!Q _{t}^{\prime})^{2}+(P_{t}\!-\!P_{t}^{\prime})^{2}}}{\sqrt{(Q_{0}\!-\!Q_{0}^{ \prime})^{2}+(P_{0}\!-\!P_{0}^{\prime})^{2}}}\right)\right\rangle, \tag{12}\]
where \((Q_{t},P_{t})\) and \((Q_{t}^{\prime},P_{t}^{\prime})\) are two initially close trajectories evolving with the same realization of the noise. We compute this average by solving the stochastic Hamilton's equations of motion for the sLMG model through a stochastic Runge-Kutta method [46]. (iii) Finally, our formalism gives the Lyapunov from Eq. (6), by taking as observable the position \(A_{t}=Q_{t}\), the jump operator being the Hamiltonian itself, \(L=H_{0}\). Namely,
\[\lambda=\lim_{t\to\infty}\frac{1}{2t}\ln\left(\frac{\mathrm{d}_{t}\Delta Q_{t }^{2}}{\epsilon}\right). \tag{13}\]
This method has the advantage of using only one averaged trajectory instead of a pair, directly giving the average Lyapunov exponent.
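A minimal sketch of method (ii) is given below: a Heun-type predictor-corrector step for the classical sLMG equations of motion, with two nearby trajectories sharing the same noise increments and a Benettin-style rescaling of their separation. The parameters, time step and initial conditions are illustrative choices; for \(\gamma=0\) the finite-time estimate approaches \(\sqrt{2\Omega-\Omega^{2}}\), while for \(\gamma>0\) the result is sensitive to the stochastic discretization, and the stochastic Runge-Kutta scheme of Ref. [46] remains the reference for a quantitative comparison with Eq. (11).

```python
import numpy as np

def flow(z, Omega):
    """Hamilton's equations for the classical LMG, Eq. (10)."""
    Q, P = z
    return np.array([Omega * P + 0.5 * Q**2 * P,
                     (2.0 - Omega) * Q - 0.5 * Q * P**2 - Q**3])

def finite_time_lyapunov(Omega, gamma, t_max=6.0, dt=1e-3, d0=1e-9, seed=0):
    """Two-trajectory estimate of Eq. (12) near the saddle point, with Benettin rescaling.
    Both trajectories share the same noise increment h = dt + sqrt(2*gamma)*dW per step."""
    rng = np.random.default_rng(seed)
    z = np.array([1e-6, 0.0])                 # reference trajectory, close to (0, 0)
    w = z + np.array([d0, 0.0])               # shadow trajectory
    log_growth = 0.0
    for _ in range(int(t_max / dt)):
        h = dt + np.sqrt(2.0 * gamma) * rng.normal(0.0, np.sqrt(dt))
        z_pred = z + flow(z, Omega) * h       # Heun predictor-corrector step
        w_pred = w + flow(w, Omega) * h
        z = z + 0.5 * (flow(z, Omega) + flow(z_pred, Omega)) * h
        w = w + 0.5 * (flow(w, Omega) + flow(w_pred, Omega)) * h
        d = np.linalg.norm(w - z)
        log_growth += np.log(d / d0)
        w = z + (w - z) * (d0 / d)            # rescale the separation back to d0
    return log_growth / t_max

Omega = 1.0
print("noiseless estimate          :", finite_time_lyapunov(Omega, 0.0))
print("sqrt(2*Omega - Omega**2)    :", np.sqrt(2.0 * Omega - Omega**2))
print("one realization, gamma=0.25 :", finite_time_lyapunov(Omega, 0.25, seed=3))
```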
Figure 4(a) shows the Lyapunov exponent \(\lambda\) obtained from the above methods as a function of \(\Omega\) for different noise strengths. We verify that the three methods are in perfect agreement. For \(\gamma=0\) (black line), we recover the LMG Hamiltonian with two distinct phases: the double well (DW) phase for \(\Omega<2\) in which \((Q^{*},P^{*})=(0,0)\) is an unstable fixed point and where the exponent is positive, and the single well (SW) phase for \(\Omega\geq 2\) with a null exponent. Introducing a weak stochastic perturbation, the Lyapunov exponent becomes smaller in the DW phase--trajectories diverge more slowly--while the SW phase acquires a positive \(\lambda\). Increasing the level of noise causes the Lyapunov exponent in the DW phase to decrease and even reach negative values, when trajectories converge exponentially, while the value of \(\lambda\) in the SW phase increases. This rich behavior is summarized in the phase diagram presented in Fig. 4(b). The latter further shows that, under the application of strong noise, the origin \((Q^{*},P^{*})=(0,0)\) is stable in the DW phase--trajectories converge exponentially to it--while it is unstable in the SW phase--trajectories diverge exponentially from it. Therefore the sLMG in the DW phase shows a noise-induced transition to stability [47].
This stabilization can be understood in terms of the energy landscape associated with the stochastic Hamiltonian, illustrated in Fig. 3. In the limit that \(\sqrt{2\gamma}\ll 1\), most realizations are as in the noiseless LMG. The behavior is then close to that of the deterministic model, with the DW phase showing an unstable point and the SW phase being stable. In the opposite limit, \(\sqrt{2\gamma}\gg 1\), almost half of the realizations flip the Hamiltonian since \((1+\sqrt{2\gamma}\xi_{t})<0\), as shown in Fig. 3(a). This brings a major difference between the DW and SW phases. In the SW phase, all points in phase space \((Q,P)\) either gain (d)\(\to\)(e) or lose (e)\(\to\)(d) energy. In the DW phase, however, when going from (b)\(\to\)(c), the points inside the wells lose energy while the points outside the wells gain it, and vice versa for (c)\(\to\)(b). This difference between the points of phase space provides a rationale for the stochastic
Figure 3: **Visualization of the stochastic LMG model**. (**a**) Histogram of the Gaussian white noise. The standard deviation \(\sqrt{2\gamma}\) is indicated by the horizontal blue line and the vertical dotted line delimits the sign flip of \((1+\sqrt{2\gamma}\xi_{t})\). LMG Hamiltonian in the double well (b;c) and single well (d;e) phases, multiplied by a negative (b;d) and positive (c;e) number corresponding to \((1+\sqrt{2\gamma}\xi_{t})\).
Figure 4: **Lyapunov exponent of the classical sLMG model at the saddle point \(Q^{*}=P^{*}=0\)** as function of \(\Omega\) (a) for different values of the noise strength \(\gamma\) and (b) over the phase diagram. (a) \(\lambda\) as computed (i) analytically using van Kampen’s method (11) (solid lines), (ii) from the standard definition (12) (circles with errorbar), and (iii) from the stochastic variance of the position (13) (triangles). The known results for the LMG correspond to \(\gamma=0\) (black). (b) **Phase diagram**. The color scale represents the Lyapunov exponent \(\lambda\) as a function of the model parameter \(\Omega\) and the noise strength \(\gamma\). A positive value of \(\lambda\) (red) implies exponential divergence of close initial conditions, while a negative value (blue) indicates exponential convergence. The dotted horizontal lines represent the values of \(\gamma\) sampled in (a). The vertical dashed gray line represents the transition between the double well (\(\Omega<2\)) and single well (\(\Omega\geq 2\)) phase.
stabilization seen for the DW phase--blue region in Fig. 4(b).
In summary, we have introduced the stochastic operator variance as a valuable tool in the study of many-body quantum systems driven by noise. Building on it, we unveil the SOV-OTOC relation, which provides an operational protocol harnessing noise as a resource to probe OTOC and extract the Lyapunov exponent in noisy quantum chaotic systems. To illustrate our results, we introduced a stochastic generalization of LMG model and characterized its behavior in the quantum and classical realms. Our results provide the means to elucidate the fate of quantum chaos in noisy systems and benchmark NISQ devices.
_Acknowledgements.--_ We thank Niklas Hornedal and Federico Roccati for insightful discussions and comments on the manuscript. This work was partially funded by the Luxembourg National Research Fund (FNR, Attract grant 15382998) and the John Templeton Foundation (Grant 62171). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
|
2305.08662 | Precision prediction at the LHC of a democratic up-family philic KSVZ
axion model | In this work, we study the $SU(2)_L$ singlet complex scalar extended KSVZ
model that, in addition to providing a natural solution to the strong-CP
problem by including a global Peccei-Quinn symmetry, also furnishes two
components of dark matter that satisfy observer relic density without
fine-tuning of model parameters. Furthermore, this model provides a rich
phenomenology by introducing a vector-like quark whose presence can be sensed
in collider experiments and dark matter production mechanisms. We explore the
possibility of democratic Yukawa interaction of the vector-like quark with all
up-type quarks and scalar dark matter candidate. We also employ next-to-leading
order NLO-QCD correction for VLQ pair production to study a unique search at
the LHC, generating a pair of boosted tops with sizeable missing transverse
momentum. Multivariate analysis with jet substructure variables has a strong
ability to explore a significant parameter space of this model at the 14 TeV
LHC. | Anupam Ghosh, Partha Konar | 2023-05-15T14:11:57Z | http://arxiv.org/abs/2305.08662v1 | # Precision prediction at the LHC of a democratic up-family philic KSVZ axion model
###### Abstract
In this work, we study the \(SU(2)_{L}\) singlet complex scalar extended KSVZ model that, in addition to providing a natural solution to the strong-CP problem by including a global Peccei-Quinn symmetry, also furnishes two components of dark matter that satisfy observer relic density without fine-tuning of model parameters. Furthermore, this model provides a rich phenomenology by introducing a vector-like quark whose presence can be sensed in collider experiments and dark matter production mechanisms. We explore the possibility of democratic Yukawa interaction of the vector-like quark with all up-type quarks and scalar dark matter candidate. We also employ next-to-leading order NLO-QCD correction for VLQ pair production to study a unique search at the LHC, generating a pair of boosted tops with sizeable missing transverse momentum. Multivariate analysis with jet substructure variables has a strong ability to explore a significant parameter space of this model at the 14 TeV LHC.
Keywords: KSVZ, QCD corrections, boosted top, jet substructure, LHC
## 1 Introduction
The Standard Model (SM) of particle physics harbours several shortcomings, despite being the most successful theory of fundamental particles and their interactions, verified with excellent agreement in many experiments from low to high energies. Among the many issues, ranging from internal inconsistencies to the inability to explain certain observations, significant problems such as the strong-CP problem [1; 2; 3], the existence of dark matter (DM) [4; 5], the baryon-antibaryon asymmetry of the universe [6; 7], and neutrino masses [8; 9; 10] have attracted much attention, motivating extensions beyond the Standard Model (BSM) and their searches at different experiments.
The \(SU(3)_{C}\) symmetry of the SM allows a term like \(\theta\frac{g_{S}^{2}}{32\pi^{2}}\tilde{G}_{\mu\nu}G^{\mu\nu}\), where \(G^{\mu\nu}\) is the gluon field strength tensor. This term contributes to the neutron electric dipole moment, and the experimental measurement [11] constrains the parameter to \(\overline{\theta}\leq 10^{-10}\). The two parameters \(\overline{\theta}\) and \(\theta\) are related through a chiral rotation of the quark fields. Since \(\overline{\theta}\to 0\) does not enhance the symmetry of the theory, one would anticipate \(\overline{\theta}\sim\mathcal{O}(1)\), which is known as the strong charge-parity (CP) problem [1; 2; 3]. Roberto Peccei and Helen Quinn proposed a classic resolution to this critical issue in 1977 by extending the SM with a global Peccei-Quinn (PQ) symmetry, expected to be broken spontaneously at a scale far larger than the Electroweak (EW) scale. The breaking of \(U(1)_{PQ}\) predicts the existence of a pseudo-Goldstone particle, also known as the QCD axion. It is even more interesting to note that although the QCD axion is not entirely stable, it can have a lifetime comparable to the age of the Universe, thanks to the sizeable breaking scale, and thus play the role of dark matter [12; 13]. Hence such models can concurrently explain the presence of DM in the Universe while also solving the strong-CP problem.
After the breaking of the \(U(1)_{PQ}\) symmetry, one may write down an axion-gluon effective Lagrangian, where \(F_{a}\) is the Peccei-Quinn breaking scale:
\[\mathcal{L}=\mathcal{L}_{\rm QCD}+\theta\frac{g_{S}^{2}}{32\pi^{2}}\tilde{G}_ {\mu\nu}G^{\mu\nu}+\frac{1}{2}\partial_{\mu}a\partial^{\mu}a-\frac{g_{S}^{2}} {32\pi^{2}F_{a}}a(x)\tilde{G}_{\mu\nu}G^{\mu\nu}. \tag{1}\]
After minimizing the axion potential, one can write \(\overline{\theta}=\theta-\frac{1}{F_{a}}<a(x)>\). As a result, the vacuum expectation value of the axion field cancels the original \(\theta\), and the strong CP issue is resolved. Among different existing models Kim-Shifman-Vainshtein-Zakharov (KSVZ) [14; 15] provides some of the exciting phenomenological implementations above and over its capability to solve two of the most outstanding problems of SM. This model includes a PQ-breaking complex scalar that is singlet under the SM gauge groups \(\eta\sim(1,1,0)\), as well as a vector-like quark (VLQ) \(\Psi=\Psi_{L}+\Psi_{R}\) that is \(SU(3)_{C}\) colour triplet but \(SU(2)_{L}\) singlet with hypercharge zero, so that, \(\Psi\sim(3,1,0)\). The interaction term between VLQ and the scalar is \(f_{\Psi}\eta\overline{\Psi}_{L}\Psi_{R}+h.c\), where \(f_{\Psi}\) is the interaction strength. The scalar field can be written as,
\[\eta=\frac{1}{\sqrt{2}}(F_{a}+\sigma_{0})\ e^{\frac{ia(x)}{F_{a}}}. \tag{2}\]
Here \(F_{a}\) is the PQ breaking scale, while \(\sigma_{0}\) and \(a(x)\) are the radial mode and the axion field, respectively. Following the symmetry breaking, the VLQ gains a mass proportional to the PQ breaking scale; it is thus decoupled from the axion field and may be safely integrated out. This leads to an axion-gluon effective Lagrangian (the final term of Equation 1), and the strong-CP problem is resolved. The axion acquires a tiny mass (far below the eV scale for the allowed breaking scales) after QCD confinement and can decay into a pair of gluons through a VLQ loop. Since its decay rate is strongly suppressed by the PQ breaking scale, the axion lifetime can exceed the age of the Universe if the breaking scale is chosen appropriately, and it then behaves as DM.
It is also worth noting that the breaking of the PQ symmetry in these models leaves behind a residual \(\mathbb{Z}_{2}\) symmetry. The axion can provide the correct DM relic density as measured by the Planck collaboration [16], but only after the corresponding breaking scale is fine-tuned. We analyze an extended KSVZ model which circumvents such fine-tuning by adding another complex scalar \(S\sim(1,1,0)\), a singlet under the SM gauge group. Under the residual \(\mathbb{Z}_{2}\) symmetry of the KSVZ model, the VLQ is odd in this setup, and \(S\) is likewise \(\mathbb{Z}_{2}\)-odd. Therefore the lightest component of \(S\) serves as the second dark matter candidate. The VLQ interacts with the Standard Model quarks and the scalar \(S\) in the present configuration. The hypercharge of the VLQ is determined by the type (up or down) of SM quarks considered.
Given that we are considering up-type quarks, the hypercharge of VLQ is \(\frac{2}{3}\). VLQ plays a critical role in dark matter phenomenology because it opens up new coannihilation and annihilation channels, such as coannihilation between scalar DM and VLQ and annihilation of VLQs into SM particles, which has a significant impact on relic density calculations. Since DM interacts with the SM quarks through VLQ, additional direct detection channels open up, such as VLQ-mediated t-channel elastic scattering between the SM-quark and scalar DM. Moreover, the VLQ and its interaction with SM quarks also affect the LHC phenomenology. VLQ decays into an SM-quark and a missing DM particle after being produced at the LHC. As a result, multijet plus missing transverse momentum may be employed as a possible probe. If the mass difference between VLQ and DM is more than the top quark mass, VLQ can be probed from its decay into top quarks, along with a sizeable missing transverse energy from dark matter in the final state.
The Yukawa interaction takes the form \(f_{i}S\overline{\Psi}_{L}u_{iR}+h.c.\), where \(u_{iR}\) denotes the right-handed up-type SM quarks with \(i=u,\ c,\ t\). We first consider the parameter spaces that yield the correct relic density and are permissible from other experimental observations such as direct detection (DD), collider data, etc., with equal (democratic) coupling strengths \(f_{u}=f_{c}=f_{t}\) for all three generations. Interestingly, one finds that the flavour constraint strongly disfavours this democratic option, although such models can be allowed by the observed relic density and all other constraints (please follow the flavour constraints part in Section 2). The flavour constraint requires that one or both of the lighter-flavour couplings (\(f_{u}\), \(f_{c}\)) be tiny. Instead of taking a tiny value for \(f_{c}\), we set \(f_{c}=0\) while keeping the other two democratic, \(f_{u}=f_{t}=f\). We found that the parameter spaces that support the correct relic density involve considerably higher values of \(f\). We would observe further that the choice of
this parameter generates a very different physics outlook for DM phenomenology and the constraints at the LHC compared to the prior work [17], which investigated the effect of a large \(f_{t}\) with negligible couplings for the other two generations.
The present study investigates the reach of this compelling parameter space at the 14 TeV LHC. In particular, we employ the next-to-leading order (NLO) correction to VLQ pair production for a precise computation. The partonic leading-order (LO) cross section scales as \(\sigma(pp\to\Psi\bar{\Psi})_{\rm LO}={\cal O}(\alpha_{S}^{2})+{\cal O}(f^{4})+{\cal O}(f^{2}\alpha_{S})\). Although the dominant contribution comes from the pure QCD sector (\({\cal O}(\alpha_{S}^{2})\)), the new physics contribution (\({\cal O}(f^{4})\)) can be sizeable since \(f\) is substantial. We also note that the mixing term (\({\cal O}(f^{2}\alpha_{S})\)) has a non-negligible contribution and interferes negatively. This interesting interplay was absent in [17], where the BSM couplings had only a minor influence on the partonic cross section and the QCD coupling played the primary role. In this work, we compute the NLO-QCD correction to the leading term (\({\cal O}(\alpha_{S}^{2})\)) while keeping both the leading and subleading new physics terms (\({\cal O}(f^{4})\), \({\cal O}(f^{2}\alpha_{S})\)) at LO for more accurate results. The integrated NLO K-factor for the pure QCD contribution can be as large as about 1.3, which is quite significant.
Another interesting point is that the scalar DM parameter spaces that provide the correct relic density while simultaneously being allowed by direct detection constraints differ dramatically. In the previous work, for example, when the mass of the scalar DM is larger than the mass of the top quark (\(m_{t}\)), DM annihilates into \(t\bar{t}\) through t-channel VLQ exchange, giving the correct relic density when \(f_{t}\sim 1\) while the other two couplings \(f_{u},f_{c}\) are tiny. A small \(f_{u}\) is also required by the direct detection constraint. In the current study, when the DM is heavier than the top quark, DM annihilation into \(t\bar{u},\ \bar{t}u\) contributes the most to the relic density, followed by the annihilation into the \(t\bar{t}\) final state. Likewise, the allowed parameter space can support neither an arbitrarily large coupling \(f\) (\(=f_{u}=f_{t}\)), which is excluded by direct detection, nor a very small one, which fails to produce the correct relic density. Therefore, their interplay remains vital for selecting the available parameter spaces. Contrary to the prior study, only a minuscule region of parameter space is left when the mass difference between the scalar DM and the VLQ (\(\Delta M_{\Psi S_{1}}\)) is smaller than \(m_{t}\): although such points can yield the correct relic density, the large \(f\) values they require are excluded by the DD constraints.
After pair production of VLQs at the LHC, each VLQ can decay into a top quark together with the scalar, since the majority of the allowed parameter space has \(\Delta M_{\Psi S_{1}}>m_{t}\). The branching ratio BR(\(\Psi\to tS)<0.5\) and depends on the coupling \(f\), in contrast to the past study, where the VLQ decayed entirely into the top quark whenever kinematically allowed. Our signal comprises two boosted top-like fatjets and missing transverse energy (MET). We consider NLO signal events matched to the parton shower (PS) and all the SM background processes that mimic the signal, and perform a multivariate analysis using the BDT technique for completeness. The available higher-order QCD cross sections are used to normalize all the background processes. The parameter spaces of this model are shown to be well within the scope of the 14 TeV LHC with 139 fb\({}^{-1}\) luminosity.
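The statement BR(\(\Psi\to tS)<0.5\) can be understood from phase space alone: assuming the standard two-body width for a chiral Yukawa coupling of this type, \(\Gamma(\Psi\to qS)\propto|f_{q}|^{2}(M_{\Psi}^{2}+m_{q}^{2}-m_{S}^{2})\,\lambda^{1/2}(M_{\Psi}^{2},m_{q}^{2},m_{S}^{2})/M_{\Psi}^{3}\), the lighter quark enjoys slightly more phase space when \(f_{u}=f_{t}\). The sketch below evaluates this with purely illustrative masses (\(M_{\Psi}=2\) TeV, \(m_{S}=1\) TeV); only the ratio of the two channels matters here, and the width formula is quoted under the stated assumption rather than taken from the paper.

```python
import numpy as np

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a**2 + b**2 + c**2 - 2.0 * (a * b + b * c + c * a)

def width(f, M_psi, m_q, m_S):
    """Assumed two-body width for Psi -> q S via the chiral Yukawa f S Psibar_L q_R:
    Gamma = |f|^2 (M^2 + m_q^2 - m_S^2) sqrt(lambda(M^2, m_q^2, m_S^2)) / (32 pi M^3)."""
    M2, mq2, mS2 = M_psi**2, m_q**2, m_S**2
    return abs(f)**2 * (M2 + mq2 - mS2) * np.sqrt(kallen(M2, mq2, mS2)) / (32.0 * np.pi * M_psi**3)

# Illustrative benchmark (GeV): heavy VLQ, TeV-scale scalar, democratic f_u = f_t = f.
f, M_psi, m_S, m_t, m_u = 1.0, 2000.0, 1000.0, 173.0, 0.0

gamma_t, gamma_u = width(f, M_psi, m_t, m_S), width(f, M_psi, m_u, m_S)
br_t = gamma_t / (gamma_t + gamma_u)
print(f"BR(Psi -> t S) = {br_t:.3f}   (slightly below 0.5, as stated in the text)")
```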
The paper is structured as follows. Section 2 introduces our model and a brief outlook on different theoretical and experimental constraints. The dark matter phenomenology of this model is discussed in Section 3. Section 4 demonstrates the impact of NLO+PS calculations, the differential k-factor, and the scale uncertainty of NLO+PS compared
to LO+PS. Section 5 displays our collider analysis technique using relevant high-level observables, including jet substructure variables, with multivariate analysis (MVA). Finally, we summarise our findings in Section 6.
## 2 The extended KSVZ model and Constraints
As expressed in the introduction, the KSVZ model contains a complex scalar \(\eta\) (Equation 1.2), which breaks the PQ-symmetry spontaneously, and a massive colour triplet state known as a vector-like quark (VLQ) \(\Psi\). The generic Lagrangian of this model can be expressed as,
\[\mathcal{L}^{\rm KSVZ}=\ \partial_{\mu}\eta^{\dagger}\partial^{\mu}\eta+\bar{ \Psi}i\gamma^{\mu}D_{\mu}\Psi-(f_{\Psi}\eta\bar{\Psi}_{L}\Psi_{R}+h.c)-\lambda _{\eta}(|\eta|^{2}-F_{a}^{2}/2)^{2}. \tag{1}\]
We extend the model by a complex singlet scalar field \(S\) with the same PQ charge as the VLQ. If this scalar does not acquire a vacuum expectation value (VEV), the residual \(\mathbb{Z}_{2}\) symmetry left after the spontaneous breaking of the PQ symmetry of the KSVZ model remains intact, and the lightest component of the scalar remains stable. Hence it plays the role of a second dark matter candidate in this theory as a weakly interacting massive particle (WIMP). The scalar can be written as,
\[S=\frac{S_{1}+iS_{2}}{\sqrt{2}} \tag{2}\]
We consider the VLQ to have non-zero hypercharge so that the scalar DM can interact with the SM up-type quarks through the VLQ mediator. Such a construction with up-type quarks instead of down-type ones has an interesting consequence. A pair of scalar DM particles can annihilate into a top pair, or into a single top associated with a light quark (\(q=u,c\)), through t-channel VLQ-mediated diagrams, depending on whether the DM mass satisfies the corresponding kinematic limit, and thereby provide the observed relic density. Therefore, aside from the Higgs portal, where DM with a mass of nearly half the Higgs boson mass annihilates into SM particles through the on-shell Higgs resonance and thus gives the correct relic density, a heavier DM candidate opens up new parameter spaces that also yield the observed relic density. Another compelling reason comes from the phenomenological viewpoint. Heavy VLQs can be produced copiously at a high-energy collider and in turn decay into a top quark and a missing particle whenever kinematically feasible. This can result in a unique topology in the LHC search, such as the possibility of a boosted top-fatjet along with a sizable amount of missing transverse momentum from dark matter. This is a likely scenario given the present constraint on the VLQ mass, which is already in the vicinity of the TeV scale; such a heavy state naturally produces decay products that are significantly boosted.
The interaction terms of VLQ are given below.
\[\mathcal{L}^{\rm VLQ}=\ -(f_{i}S\overline{\Psi}_{L}u_{iR}+f_{\Psi}\eta \overline{\Psi}_{L}\Psi_{R}+h.c.),\qquad\{i=u,\ c,\ t\} \tag{3}\]
The full scalar potential of the model can be written as,
\[\begin{split} V=&\ \lambda_{H}(|H|^{2}-v_{H}^{2}/2)^{2}+ \lambda_{\eta}(|\eta|^{2}-F_{a}^{2}/2)^{2}+\lambda_{\eta H}(|H|^{2}-v_{H}^{2}/ 2)(|\eta|^{2}-F_{a}^{2}/2)\\ &+\mu_{S}^{2}|S|^{2}+\lambda_{S}|S|^{4}+\lambda_{SH}|H|^{2}|S|^{2 }+\lambda_{S\eta}|\eta|^{2}|S|^{2}+[\epsilon_{S}\eta^{*}S^{2}+h.c].\end{split} \tag{4}\]
Here \(v_{H}\) is the VEV of the SM Higgs potential, and the second term is the Mexican-hat potential of the KSVZ model. The third term induces mixing between the SM Higgs field and the radial part of the PQ-breaking scalar \(\eta\), which leads to a non-diagonal mass matrix. After the diagonalization of the mass matrix, one obtains the physical masses, which can be expressed as follows.
\[M_{h,\sigma}^{2}=(\lambda_{H}v^{2}+\lambda_{\eta}F_{a}^{2})\pm\sqrt{(\lambda_{ H}v^{2}-\lambda_{\eta}F_{a}^{2})^{2}+F_{a}^{2}v^{2}\lambda_{\eta H}^{2}} \tag{5}\]
The physical scalar Higgs boson mass is set to \(M_{h}=125\) GeV, and \(v_{H}\) is equated with the SM electroweak VEV, \(v_{H}=v=246\) GeV. However, \(\lambda_{H}\) differs from the SM value when \(\lambda_{\eta H}\neq 0\). The masses of the different components of the scalar and the VLQ can be expressed as,
\[M_{S_{1,2}}^{2}=\ \frac{1}{2}(2\mu_{S}^{2}+v_{H}^{2}\lambda_{SH}+F_{a}^{2} \lambda_{S\eta}\mp 2\sqrt{2}\epsilon_{s}F_{a}),\qquad M_{\Psi}=\ f_{\Psi}\frac{F_{a} }{\sqrt{2}}. \tag{6}\]
We consider \(F_{a}\) and \(M_{\Psi}\) as the independent parameters, and \(f_{\Psi}\) is the dependent parameter according to the formula above. Without losing generality, we can assume that \(S_{1}\) is lighter than \(S_{2}\) (in other words, one picks \(\epsilon_{s}>0\)). This lighter component \(S_{1}\) is the scalar DM. The \(\epsilon_{s}\) term of Equation 4 is the only term that causes the mass splitting between the scalar components, and \(\epsilon_{s}\) is the dependent parameter when \(M_{S_{1}}\) and the mass difference \(\Delta M=M_{S_{2}}-M_{S_{1}}\) are assumed to be the independent parameters. Next, we briefly outline different constraints for this model.
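As a quick illustration of this parameterisation, the sketch below inverts Equation 6 (and the mass-splitting relation generated by the \(\epsilon_{s}\) term) to obtain the dependent quantities \(f_{\Psi}\), \(\epsilon_{s}\) and \(\mu_{S}^{2}\) from an assumed set of independent inputs; the benchmark numbers are purely illustrative.

```python
import numpy as np

# Assumed independent inputs (purely illustrative benchmark, masses in GeV).
F_a, M_psi = 1.0e11, 2000.0        # PQ breaking scale and VLQ mass
M_S1, Delta_M = 1000.0, 10.0       # scalar DM mass and splitting M_S2 - M_S1
lam_SH, lam_Seta, v = 0.01, 0.0, 246.0

M_S2 = M_S1 + Delta_M

# Dependent parameters implied by Equation 6 and the scalar mass matrix:
f_psi   = np.sqrt(2.0) * M_psi / F_a                          # from M_Psi = f_Psi F_a / sqrt(2)
eps_s   = (M_S2**2 - M_S1**2) / (2.0 * np.sqrt(2.0) * F_a)    # from the mass splitting
mu_S_sq = 0.5 * (M_S1**2 + M_S2**2 - v**2 * lam_SH - F_a**2 * lam_Seta)

print(f"f_psi = {f_psi:.3e},  eps_s = {eps_s:.3e} GeV,  mu_S^2 = {mu_S_sq:.3e} GeV^2")
```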
Constraints:Theoretical bounds, different experimental data, and cosmological observations severely restrict the parameter space of the extended KSVZ model. We quickly outline each of these constraints before establishing benchmark points that yield the correct relic density while accommodating all the other constraints. For more details, see Reference [17].
The scalar potential should be bounded from below, and perturbativity demands that all the \(\lambda\) couplings in Equation 4 be smaller than \(4\pi\) and that \(|f_{i}|<\sqrt{4\pi}\). Since this is a two-component DM scenario, the total relic density comprises both the scalar and the axion contributions.
\[\Omega_{\rm T}h^{2}=\ \Omega_{a}h^{2}+\Omega_{S_{1}}h^{2} \tag{7}\]
The parameter space must reproduce the DM relic abundance measured by Planck [18].
\[\Omega_{\rm DM}h^{2}=\ 0.120\pm 0.001. \tag{8}\]
Axions are produced non-thermally via the misalignment mechanism, and the axion relic density is as follows [19; 20; 21].
\[\Omega_{a}h^{2}\simeq 0.18\ \theta_{a}^{2}\left(\frac{F_{a}}{10^{12}{\rm GeV}} \right)^{1.19} \tag{9}\]
Here \(\theta_{a}\) is the axion's initial misalignment angle; for illustration purposes we set \(\theta_{a}=1.0\). The decay constant should lie in the range \(10^{10}\) GeV \(\leq F_{a}\leq 10^{12}\) GeV.
The lower bound comes from supernova cooling data [22], but the upper bound comes from axion overproduction. Equation 9 shows that axion alone may provide 100% of the DM relic density if \(F_{a}\) is properly calibrated. As previously indicated, we added the additional scalar to prevent this kind of fine-tuning. As a result, the axion maintains its underabundance, and the axion's relic density fulfils the Planck limit when combined with the scalar. Hence, for demonstrative purposes, we used \(F_{a}=10^{11}\) GeV and \(\theta_{a}=1.0\), which gives the axion relic density \(\Omega_{a}h^{2}\simeq 0.012\) (approximately 10% of the total observed relic), and the scalar DM delivers the rest.
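As a quick arithmetic check of Equation 9 for these inputs (a simple illustration, not new analysis):

```python
# Check of Eq. (9) for theta_a = 1 and F_a = 1e11 GeV
theta_a, F_a = 1.0, 1.0e11
Omega_a_h2 = 0.18 * theta_a**2 * (F_a / 1.0e12)**1.19
print(f"Omega_a h^2 = {Omega_a_h2:.4f}")   # ~0.0116, i.e. roughly 10% of 0.120
```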
Because \(F_{a}=10^{11}\) GeV is very large, Equation 5 shows that the mass of the radial excitation of the field \(\eta\) is huge; accordingly, we assign it a large mass, \(M_{\sigma}=50\) TeV. \(S_{1}\) particles can annihilate into SM particles via the mediator \(\sigma\) thanks to the mixing of \(\sigma\) with the Higgs boson. This annihilation cross section is strongly suppressed, since the sine of the mixing angle is tiny and the mediator mass is large. Nevertheless, the \(S_{1}S_{1}\sigma\) coupling is proportional to \(F_{a}\lambda_{S\eta}\), and because \(F_{a}\) is very large this cross section could still be substantial; unless \(\lambda_{S\eta}\) is tiny, it may even violate perturbative unitarity [23]. Hence, for simplicity, we keep \(\lambda_{S\eta}=0\) throughout our analysis.
Because of the mixing between the Higgs boson and \(\sigma\), the strength of the LHC diphoton channel turns out to be
\[\mu_{\gamma\gamma}=\cos^{2}\theta\frac{BR_{h\rightarrow\gamma\gamma}}{BR_{h \rightarrow\gamma\gamma}^{\text{SM}}}\, \tag{10}\]
where \(\cos\theta\) is the cosine of the mixing angle; the LHC limits this mixing angle to \(|\sin\theta|<0.36\)[24]. Although we are primarily interested in the parameter space where \(m_{S_{1}}>\frac{m_{h}}{2}\), the Higgs boson can decay into a pair of DM particles if \(m_{S_{1}}\leq\frac{m_{h}}{2}\), contributing to the invisible Higgs decay branching ratio. In our study, we assign the \(hS_{1}S_{1}\) coupling a tiny value, \(\lambda_{SH}=0.01\), so that the Higgs invisible decay branching-ratio constraint is satisfied.
The reinterpreted LEPII squark search results [25, 26] exclude VLQ masses up to 100 GeV. When the mass difference between the VLQ and the DM satisfies \(\Delta M_{\Psi S_{1}}<m_{t}\), the VLQ decays into DM plus light quarks; hence, the ATLAS search [27] for multijets plus missing transverse momentum further limits this scenario. A noteworthy feature, in contrast to our earlier work, is that only a small region of parameter space survives when \(\Delta M_{\Psi S_{1}}<m_{t}\) (see Figure 3, below the red dotted line). When \(\Delta M_{\Psi S_{1}}>m_{t}\), on the other hand, obtaining an exclusion contour by reinterpreting the current ATLAS and CMS searches [28, 29, 30, 31, 32, 33, 34] is not straightforward, because the VLQ pair-production cross section depends significantly on the BSM couplings, and the branching ratio of the VLQ decay into a top quark and a scalar is not 100% but also depends on the BSM coupling.
Flavour constraints arise because the interaction term (the first term in Equation 3) contributes to \(D^{0}-\bar{D}^{0}\) oscillation [35]. The contributing Feynman diagrams are box diagrams for \(u\bar{c}\rightarrow\bar{u}c\) with the VLQ and the scalars \((S_{1},S_{2})\) running in the loop. In the current setup, the effective operator contributing to this mixing is as follows.
\[\mathcal{L}_{\text{eff}}=\frac{\tilde{z}}{M_{\Psi}^{2}}\bar{u}_{R}^{\alpha} \gamma^{\mu}c_{R}^{\alpha}\bar{u}_{R}^{\beta}\gamma_{\mu}c_{R}^{\beta}, \tag{11}\]
where \(\tilde{z}=-\frac{f_{u}^{2}f_{c}^{2}}{96\pi^{2}}[g_{\Psi}(M_{S_{1}}^{2}/M_{\Psi}^{2})+g_{\Psi}(M_{S_{2}}^{2}/M_{\Psi}^{2})-2g_{\Psi}(M_{S_{1}}M_{S_{2}}/M_{\Psi}^{2})]\). The expression for \(g_{\Psi}(x)\) may be found in Reference [36]. The measurement of the D-meson mass splitting yields the restriction [35; 36]
\[|\tilde{z}|\lesssim 5.7\times 10^{-7}(M_{\Psi}/\text{TeV})^{2}. \tag{12}\]
Away from the Higgs resonance, and in parameter regions where \(S_{1}\) and the VLQ are not degenerate, the correct relic density is achieved through the annihilation processes \(S_{1}S_{1}\to t\bar{q},\ \bar{t}q\ (q=u,c)\) or \(S_{1}S_{1}\to t\bar{t}\). A democratic choice of equal coupling strengths, \(f_{u}=f_{c}=f_{t}\), is not favourable for generating the correct relic density and is practically forbidden by the flavour restriction above (Equation 12). In our prior work [17], we instead set \(f_{u}=f_{c}\) to a tiny value while \(f_{t}\) was free to take any large value, and showed that such a combination can yield the correct relic density while remaining allowed by this flavour constraint. Instead of making two of the three couplings negligibly small, another interesting scenario emerges if one of \(f_{u}\) or \(f_{c}\) is chosen vanishingly small (or zero) while the other remains democratically as large as \(f_{t}\). We set \(f_{c}=0\) and \(f_{u}=f_{t}\), so that all parameter regions generating the correct relic density are simultaneously allowed by the flavour constraint. Direct detection yields more limited parameter space with this arrangement: since the nucleon is made of light quarks and gluons, \(f_{u}\neq 0\) gives a tree-level DD scattering diagram, \(S_{1}u(\bar{u})\to S_{1}u(\bar{u})\), via t-channel VLQ exchange (see the direct detection diagrams in Figure 10).
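The role of these coupling choices in the flavour bound of Equation 12 can be made explicit with a short illustrative sketch; the loop-function combination from Reference [36] is replaced here by a placeholder value, since its precise form becomes irrelevant once \(f_{c}=0\).

```python
import math

def z_tilde(f_u, f_c, loop_factor):
    """Structure of the D0-D0bar coefficient: z ~ -(f_u^2 f_c^2 / 96 pi^2) x [loop combination].
    'loop_factor' stands in for g(M_S1^2/M_Psi^2) + g(M_S2^2/M_Psi^2) - 2 g(M_S1 M_S2/M_Psi^2)."""
    return -(f_u**2 * f_c**2) / (96.0 * math.pi**2) * loop_factor

M_Psi_TeV = 0.832                        # illustrative VLQ mass in TeV
bound = 5.7e-7 * M_Psi_TeV**2            # Eq. (12)

# Our choice f_c = 0: z_tilde vanishes identically, so any f_u = f_t is allowed.
print(abs(z_tilde(f_u=1.0, f_c=0.0, loop_factor=0.1)), "<=", bound)
# A fully democratic f_u = f_c = f_t = 1 generically violates the bound
# (0.1 is only a placeholder for the actual loop combination).
print(abs(z_tilde(f_u=1.0, f_c=1.0, loop_factor=0.1)), "<=", bound)
```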
## 3 Dark Matter Phenomenology
In our framework with multi-component dark matter, the axion contribution to the relic density is determined by two parameters, \(F_{a}\) and the misalignment angle \(\theta_{a}\). This gives some fraction of the total observed relic, set at 10% for our demonstration (see Equation 9 and the following discussion). This section examines the dark matter phenomenology of the scalar component. Before going into the details, let us identify the relevant free parameters. Since the axion couplings to the scalar DM and to SM particles are inversely proportional to \(F_{a}\), they are severely suppressed and play practically no role in the scalar DM phenomenology. Moreover, the radial excitation of the field \(\eta\) does not affect the DM phenomenology due to its huge mass and tiny coupling to the scalar DM. The relevant parameters are therefore \(\{M_{\Psi},M_{S_{1}},\Delta M,f\}\). As previously stated, we bypass the prohibitive flavour constraint by setting \(f_{c}=0\).
Relic density of DM: To estimate the relic density contributed by the scalar DM, we solve the Boltzmann equation using micrOMEGAs-v5 [37], having first implemented the model in FeynRules [38]. The variation of the scalar DM relic density with its mass is displayed in Figure 1, where we fix \(F_{a}=10^{11}\ \text{GeV}\), \(\lambda_{SH}=0.01\), \(f_{c}=0\), and \(\Delta M=100\) GeV. We show three solid lines for three distinct values of the democratic coupling, \(f=0.1,\ 0.5\), and \(1.0\), for a VLQ mass of 500 GeV. In these curves, the first sharp dip arises from the Higgs resonance, where a pair of DM particles annihilates into SM particles through the resonant Higgs boson when \(M_{S_{1}}\sim\frac{m_{h}}{2}\); the second dip occurs when \(M_{S_{1}}\sim M_{W}\), where a pair of \(S_{1}\) annihilates into a \(W\) boson pair through the s-channel Higgs-mediated diagram (see Figure 9).
For \(f=0.1\) (solid purple line), the relic density increases with the DM mass after the second dip, and a third dip is observed at \(M_{S_{1}}=m_{h}\). Here, pairs of \(S_{1}\) begin to annihilate into Higgs bosons via the contact interaction, the Higgs-mediated s-channel, and the \(S_{1}\)-mediated t-channel diagrams (Figure 9), producing the third dip. Eventually, when the mass difference between the VLQ and the DM becomes small, the impact of DM co-annihilation with the VLQ and of the annihilation of VLQ pairs into gluons becomes apparent, and a final decline in the DM relic density is observed. Increasing \(f\) further (0.5, solid blue line; 1.0, solid red line) shows that the relic density declines just after the second dip, owing to the significant contribution of the \(S_{1}S_{1}\to t\bar{u},\ \bar{t}u\) annihilation channels via t-channel VLQ exchange. The correct relic density is achieved for \(f=1.0\) when the DM mass is around 96 GeV.
The blue (red) dotted and dashed lines correspond to the same values of \(f\) as the solid lines, but with heavier choices of the mediator \(\Psi\). Because the annihilation cross section decreases as the propagator mass increases, and the relic density is inversely proportional to the annihilation cross section, the dotted and dashed lines shift to higher relic density than the solid lines. It clearly follows from these variations that significant parameter space for heavier dark matter masses can open up for different choices of these parameters (over and above the typical Higgs portal). Interestingly, in the case of a pure scalar singlet DM scenario, the DM does not satisfy the correct relic density for \(\lambda_{SH}=0.01\). However, the interaction of the DM with the SM top quark in the present model affords many parameter points that satisfy the Planck limit.
Figure 1: Variations of the scalar DM relic density with its mass (\(M_{S_{1}}\)) for different values of \(f\) and \(M_{\Psi}\) are shown. Here, we fix \(F_{a}=10^{11}\) GeV, \(\lambda_{SH}=0.01\), \(f_{c}=0\), and \(\Delta M=100\) GeV. The Black dashed line corresponds to \(0.120-\Omega_{a}h^{2}\).
Direct and indirect detection of DM: WIMPs may also scatter off nuclei, depositing energy that can be detected by experiments like LUX [39], PandaX-II [40; 41], and XENON1T [42]. These experiments set strong constraints on the scattering cross section as a function of the DM mass. All the direct detection channels and the squared amplitudes of the corresponding diagrams are given in Appendix A. For demonstration purposes, we present the spin-independent direct detection cross section of \(S_{1}\) as a function of its mass in the left panel of Figure 2. All solid lines correspond to \(\lambda_{SH}=0.01\) but to different values of \(f=0.1\) (solid purple), \(0.5\) (solid blue), and \(1.0\) (solid red). Because \(f\) and \(\lambda_{SH}\) are both non-zero, the Higgs-mediated and VLQ-mediated channels and their interference diagrams all contribute. It is instructive to see how the individual channels contribute. One can first set \(f=0,\ \lambda_{SH}=0.01\) (dashed green line), so that only the Higgs-mediated channel contributes. Subsequently, setting \(\lambda_{SH}=0\) for the two choices \(f=1.0\) (\(0.5\)), shown as the dashed red (blue) line, demonstrates the contribution from the pure VLQ-mediated s- and t-channels and their interference; the Higgs-mediated diagram does not contribute there.
The amplitude square of the Higgs-mediated diagram does not rely on the mass of \(S_{1}\) (Equation A.5); nevertheless, the cross section of the dashed green line decreases with the DM mass, which comes from the phase space part of the integral. We see dashed red and blue lines strongly depend on the \(M_{S_{1}}\) since the amplitude square of the VLQ mediated s and t- channels and their interference explicitly depends on \(M_{S_{1}}\) (see Equations A.1- A.4), and the cross section is minimum when \(M_{S_{1}}=\dfrac{M_{\Psi}}{2\sqrt{2}}\). When comparing dashed-green (only Higgs-mediated channel contributes), dashed-red (only VLQ-mediated channels contribute), and solid-red (total cross section) lines, one can witness a substantial negative (positive) interference between Higgs and VLQ-mediated diagrams when
Figure 2: **Left panel:** Spin-independent cross section of scattering between scalar DM and nucleon as a function of dark matter mass \(M_{S_{1}}\). We set \(f=0,\lambda_{SH}=0.01\) (green dashed line), and \(\lambda_{SH}=0\) for the blue-dashed (\(f=0.5\)) and red-dashed (\(f=1.0\)) lines to illustrate the individual contributions. **Right panel:** Effective spin-independent scattering cross section (Equation 3.1) vs dark matter mass. All of the plots on the left and right are for \(M_{\Psi}=500\) GeV. The solid colours purple, blue, and red in both panels stand for \(f=0.1,~{}0.5,\) and \(1.0,\) respectively, and \(\lambda_{SH}=0.01\) for all solid lines.
\(M_{S_{1}}<\frac{M_{\Psi}}{2\sqrt{2}}\) (\(M_{S_{1}}>\frac{M_{\Psi}}{2\sqrt{2}}\)). Finally, when the DM mass is large, we see a sharp rise because of the on-shell production of the VLQ (Figure 10a).
In a two-component DM scenario, the direct detection cross section of the scalar DM should be rescaled as
\[\sigma_{S_{1}}^{\text{SI,eff}}=\left(\frac{\Omega_{S_{1}}h^{2}}{\Omega_{\text{T}}h^{2}}\right)\,\sigma_{S_{1}}^{\text{SI}}, \tag{13}\]
where \(\Omega_{\text{T}}h^{2}\) is given in Equation 7. The spin-independent effective direct detection cross section for the three values \(f=0.1,\ 0.5,\) and \(1.0\) is presented in the right panel of Figure 2 (same parameters as the solid lines in the left panel). The black lines show the experimental upper bounds. A dip around \(M_{S_{1}}\sim\frac{m_{h}}{2}\) appears because of the rescaling in Equation 13. One notices that, for significantly large \(f\) values, the DD experiments disallow the regions where the DM mass lies close to the VLQ mass. For example, the region \(M_{S_{1}}>390\) GeV for \(M_{\Psi}=500\) GeV and \(f=1\) (solid red line) is disallowed by DD. Additionally, Figures 1 and 2 show that, for \(f=1\), the parameter points that give the correct relic density are likewise allowed by DD.
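For the relic composition adopted here (the axion supplying roughly 10% of the total), the rescaling above is only a mild suppression; a one-line illustration:

```python
# Rescaling factor of the effective DD cross section when the scalar carries ~90% of the relic
Omega_a, Omega_total = 0.012, 0.120
Omega_S1 = Omega_total - Omega_a
print(f"sigma_SI_eff / sigma_SI = {Omega_S1 / Omega_total:.2f}")   # ~0.90
```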
WIMPs may self-annihilate, emitting significant fluxes of gamma rays and cosmic rays from dense DM regions such as the galactic centre. Indirect detection experiments like PAMELA [43], Fermi-LAT [44], and MAGIC [45] can therefore constrain the model parameter space substantially. Because this is a two-component DM scenario, the indirect detection cross section of the scalar DM should also be rescaled as,
\[\sigma_{S_{1},\text{eff}}^{ID}=\left(\frac{\Omega_{S_{1}}h^{2}}{\Omega_{\text{T}}h^{2}}\right)^{2}\,\sigma_{S_{1}}^{ID}. \tag{14}\]
We note that most indirect detection limits are satisfied in our model because the presence of the axion keeps \(S_{1}\) underabundant, which relaxes the indirect detection constraints, since the effective cross section depends on the square of the fractional scalar DM relic density. For very small \(\Delta M=(M_{S_{2}}-M_{S_{1}})\), \(S_{1}\) and \(S_{2}\) are nearly degenerate, opening an additional channel in which \(S_{1}\) and \(S_{2}\) co-annihilate into a top anti-top pair via the VLQ; this contributes to the indirect detection cross section and may be disallowed by antiproton cosmic-ray data [46]. We therefore set \(\Delta M=100\) GeV throughout our analysis, ensuring that no such co-annihilation channel is open.
Parameter scan and benchmark points: To map out the parameter space that offers the correct relic abundance while being allowed by direct detection and all other constraints specified in the last section, we identify the three most important parameters: the masses \(M_{S_{1}}\) and \(\Delta M_{\Psi S_{1}}\), and the Yukawa coupling \(f\). Figure 3 displays the allowed points on the plane of \(M_{S_{1}}\) vs \(\frac{\Delta M_{\Psi S_{1}}}{M_{S_{1}}}\), with the Yukawa coupling \(f\) colour coded, ranging from \(0.1\) to \(1.5\). The red dash-dot line corresponds to \(\Delta M_{\Psi S_{1}}=m_{t}\). The region above this line may be investigated at the LHC via on-shell top-quark production, as the VLQ decays into a top quark and invisible DM, while the region below can be probed with jets + MET, as the VLQ decays into a \(u\)-quark together with DM. The points along the two vertical lines in the top-left region correspond to the part of parameter space satisfied by the Higgs resonance.
It is enlightening to note that the lower sections of the plot, which correspond to small mediator masses \(M_{\Psi}\), typically generate an increased DD cross section; those regions are therefore excluded by the DD bounds despite having the correct relic density. Only a few points survive in the lower right corner, where \(f\) is tiny. Those regions have the correct relic density because, being nearly degenerate, the VLQ and DM co-annihilate and the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \hline & \(M_{S_{1}}\) & \(\Delta M_{\Psi S_{1}}\) & \(\Delta M\) & \(f\) & \(\Omega_{S_{1}}h^{2}\) & \(\sigma_{S_{1},eff}^{SI}\) & Processes & \\ & (GeV) & (GeV) & (GeV) & & & (pb) & (percentage) & BR(\(\Psi\to tS_{1,2}\)) \\ \hline \hline BP1 & 332 & 500 & 100 & 0.83 & 0.109 & \(7.98\times 10^{-12}\) & \(S_{1}S_{1}\to t\bar{u},\bar{t}u\) (60\%) & 0.4907 \\ & & & & & & & \(S_{1}S_{1}\to t\bar{t}\) (40\%) & \\ \hline BP2 & 402 & 407 & 100 & 0.82 & 0.109 & \(7.65\times 10^{-12}\) & \(S_{1}S_{1}\to t\bar{u},\bar{t}u\) (56\%) & 0.4875 \\ & & & & & & \(S_{1}S_{1}\to t\bar{t}\) (44\%) & \\ \hline BP3 & 450 & 300 & 100 & 0.79 & 0.107 & \(1.14\times 10^{-11}\) & \(S_{1}S_{1}\to t\bar{u},\bar{t}u\) (54\%) & 0.435 \\ & & & & & & \(S_{1}S_{1}\to t\bar{t}\) (45\%) & \\ \hline \end{tabular}
\end{table}
Table 1: A few representative benchmark points (BPs) from the scan plot are presented; these BPs satisfy the correct relic density and are permissible under all constraints. \(\Omega_{S_{1}}h^{2}\) and \(\sigma_{S_{1},eff}^{SI}\) (Equation 13) are the relic density and the effective direct detection cross section of the scalar DM, \(S_{1}\), respectively. \(\Delta M_{\Psi S_{1}}=M_{\Psi}-M_{S_{1}}\) and \(\Delta M=M_{S_{2}}-M_{S_{1}}\). Other parameters are \(F_{a}=10^{11}\) GeV, \(\lambda_{SH}=0.01,\ \mathrm{and}\ f_{c}=0\). The second-to-last column shows the different processes that contribute to the relic density, with the percentage contributions in brackets. The branching fraction of the VLQ decay into the top quark and a scalar is shown in the last column.
Figure 3: On the \(\frac{\Delta M_{\Psi S_{1}}}{M_{S_{1}}}-\ M_{S_{1}}\) plane, the parameter spaces that satisfy the measured DM abundance, permitted by the direct search experiments, and comply with other restrictions as stated in the text are displayed. The color coding is done with respect to \(f\), with \(f\) varying from 0.1 to 1.5. Here, we fix \(F_{a}=10^{11}\) GeV, \(\lambda_{SH}=0.01,\ \Delta M=100\) GeV, and \(f_{c}=0\).
pair of VLQs annihilates into gluons. Note that for larger \(f\) values, those co-annihilation regions are ruled out by the DD experiments, as already seen in Figure 2 (right panel, red and blue lines). Interestingly, non-perturbative effects like Sommerfeld enhancement and bound-state formation can significantly affect the relic density in those co-annihilation regions; further study of this region is beyond the scope of the present discussion. Such points are challenging to probe at the LHC, since the DM mass is quite large and the VLQ is nearly degenerate with the DM, so the partonic cross section of VLQ production is small and the VLQ emits only a soft jet that is very difficult to detect.
A few representative benchmark points (BPs) from the scan are listed in Table 1; they are allowed by all the constraints and provide the correct relic density. The scalar DM relic density, the spin-independent DD scattering cross section of \(S_{1}\), the percentage contribution of each process to the relic density, and the branching ratio of the VLQ decay into the top quark are also given. Table 2 shows the total indirect detection (ID) cross section and the percentage contribution of the various processes to it. The theoretical ID cross section in the \(t\bar{t}\) final state and the experimental upper bound are given in the last two columns, and all of these BPs lie well below the experimental upper bound.
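As an illustrative consistency check (not part of the original computation), the fifth-column entry of Table 2 for BP1 can be approximately reproduced from the relic fractions and the quoted annihilation fractions; small differences reflect rounding of the quoted percentages.

```python
# BP1: effective ID cross section in the t-tbar final state
Omega_S1, Omega_a = 0.109, 0.012     # Table 1 and the axion estimate below Eq. (9)
sigma_ID = 2.30e-26                  # cm^3/s, total ID cross section (Table 2)
frac_tt = 0.40                       # ~40% of annihilations into t-tbar (Table 2)

sigma_eff_tt = (Omega_S1 / (Omega_S1 + Omega_a))**2 * sigma_ID * frac_tt
print(f"sigma_eff(ttbar) ~ {sigma_eff_tt:.2e} cm^3/s")   # ~7.5e-27, vs 7.59e-27 quoted
```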
## 4 Pair production of vector-like quark at NLO+PS accuracy
We implement the model Lagrangian of Equation 1, together with the interaction terms of Equations 3 and 4, in FeynRules [38] and employ the NLOCT [47] package to generate the UV and \(R_{2}\) counterterms of the virtual contribution in an NLO UFO model, which we then use within the MadGraph5_aMC@NLO [48] environment. There, the real corrections are handled by the FKS subtraction method [49; 50], whereas the OPP technique [51] takes care of the virtual contributions. Showering of the events is done using Pythia8 [52; 53]. For leading-order (LO) and next-to-leading-order (NLO) event generation, we use the NN23LO and NN23NLO PDF sets, respectively.
All the tree-level diagrams in the pair production of VLQ at the LHC are shown in Figure 4, where the top three Feynman diagrams depend only on the QCD coupling,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \hline & \(\sigma_{S_{1}}^{\rm ID}\) & \(S_{1}S_{1}\to t\bar{u},\bar{t}u\) & \(S_{1}S_{1}\to t\bar{t}\) & \(\sigma_{S_{1},\rm eff}^{ID}\) (in \(t\bar{t}\)) & \(\sigma_{exp}^{ID}\) (in \(t\bar{t}\)) \\ BP & \((cm^{3}/s)\) & (in \%) & (in \%) & \((cm^{3}/s)\) & \((cm^{3}/s)\) \\ \hline \hline BP1 & \(2.30\times 10^{-26}\) & 59.8 & 40.0 & \(7.59\times 10^{-27}\) & \(2.37\times 10^{-26}\) \\ \hline BP2 & \(2.36\times 10^{-26}\) & 56.0 & 43.7 & \(8.51\times 10^{-27}\) & \(2.54\times 10^{-26}\) \\ \hline BP3 & \(2.43\times 10^{-26}\) & 54.4 & 45.4 & \(8.77\times 10^{-27}\) & \(2.60\times 10^{-26}\) \\ \hline \end{tabular}
\end{table}
Table 2: The total indirect detection cross section, \(\sigma_{S_{1}}^{\rm ID}\), and the percentages of the different processes that contribute to indirect detection are presented in the second to fourth columns. The effective indirect detection cross section in the \(t\bar{t}\) final state for the different benchmark points is shown in the fifth column; it is defined as (% contribution in the \(t\bar{t}\) final state) \(\times\sigma_{S_{1},\rm eff}^{ID}\), where \(\sigma_{S_{1},\rm eff}^{ID}\) is given in Equation 14. The last column is the experimental upper bound in the \(t\bar{t}\) final state [46].
whereas the bottom two channels depend only on the BSM Yukawa coupling, \(f\). The LO cross section has the order \(\sigma_{\rm LO}=\mathcal{O}(\alpha_{S}^{2})+\mathcal{O}(f^{4})+\mathcal{O}(f^{2 }\alpha_{S})\), where the interference between the top and bottom channels provides the \(\mathcal{O}(f^{2}\alpha_{S})\) term.
At LO, we keep all contributions to VLQ pair production at the LHC: the pure QCD piece (\(\mathcal{O}(\alpha_{S}^{2})\)), which dominates, the pure BSM piece, and their interference. We perform the one-loop QCD correction for the processes that have only strong couplings at tree level; the NLO corrections are therefore of order \(\mathcal{O}(\alpha_{S}^{3})\). A few representative Feynman diagrams at NLO-QCD are shown in Figure 5. The total LO cross section is given in the left panel of Table 3. The leading (tree-level QCD) contribution to VLQ pair production and its next-to-leading-order cross section, along with the integrated K-factor, are given in the right panel of Table 3. The K-factor is defined as the ratio of the NLO to the LO cross section. We find a significant enhancement of about 30% in the NLO-QCD cross section over LO. Table 3 also shows that the interference term gives a non-negligible negative contribution to the total LO cross section.
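The integrated K-factor and the sign of the combined BSM-plus-interference contribution can be read off directly from the BP1 entries of Table 3:

```python
# BP1 cross sections from Table 3 (central scale, in fb)
sigma_LO_total = 96.39     # O(as^2) + O(f^4) + O(f^2 as)
sigma_LO_QCD   = 105.8     # O(as^2) only
sigma_NLO_QCD  = 138.5     # NLO correction to the QCD piece, O(as^3)

print(f"K-factor = {sigma_NLO_QCD / sigma_LO_QCD:.2f}")                  # ~1.31
# The difference below is O(f^4) + O(f^2 as); its negative sign shows that the
# interference is negative and outweighs the (positive) pure-BSM term.
print(f"BSM + interference = {sigma_LO_total - sigma_LO_QCD:+.1f} fb")   # ~ -9.4 fb
```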
Figure 4: The Feynman diagrams for the pair production of VLQ at Leading order. The three diagrams at the top only contain QCD coupling, whereas the two diagrams at the bottom only have pure BSM couplings (\(f\)), and their interference terms are also present. The LO cross section, \(\sigma_{\rm LO}=\mathcal{O}(\alpha_{S}^{2})+\mathcal{O}(f^{4})+\mathcal{O}(f^ {2}\alpha_{S})\).
Figure 5: Representative Feynman diagrams for the pair production of VLQ at NLO-QCD for the processes where the tree-level diagrams only have QCD coupling (Figure 3(a)). \(\sigma_{\rm NLO}^{a}\propto\mathcal{M}_{V}^{\dagger}\mathcal{M}_{\rm LO}^{a}= \mathcal{O}(\alpha_{S}^{3})\).
We designate the partonic centre-of-mass energy of the event as the central choice for both the factorization and renormalization scales. To estimate the scale uncertainty, we vary the factorization and renormalization scales independently between half and twice this central scale, resulting in nine different data sets. The superscripts and subscripts in the tables indicate the envelope of these nine data sets, although all of the cross sections quoted in Table 3 correspond to the central scale.
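Concretely, the nine-point variation amounts to evaluating the cross section on the following grid of scale choices and quoting the envelope (a schematic illustration):

```python
from itertools import product

mu0 = 1.0   # central scale (the partonic centre-of-mass energy, kept symbolic here)
scale_grid = [(kr * mu0, kf * mu0) for kr, kf in product([0.5, 1.0, 2.0], repeat=2)]
print(len(scale_grid), "choices of (mu_R, mu_F):", scale_grid)
# The quoted uncertainty band is the envelope (max/min) of the nine resulting cross sections.
```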
LO+PS and NLO+PS distributions of \(\log_{10}[P_{T}(\Psi\bar{\Psi})/GeV]\) (upper panel) and the differential K-factor (lower panel) are given in Figure 6(a). \(P_{T}(\Psi\bar{\Psi})\) is the transverse momentum of the VLQ pair. For \(\log_{10}[P_{T}(\Psi\bar{\Psi})/GeV]<2.6\), the left plot shows that the K-factor is greater
\begin{table}
\begin{tabular}{|c|c||c|c|c|} \hline & \multicolumn{2}{|c||}{\(\sigma(pp\to\Psi\bar{\Psi})\) (fb)} & \multicolumn{2}{|c|}{\(\sigma(pp\to\Psi\bar{\Psi})\) (fb) for} \\ \cline{2-5} BP & \multicolumn{2}{|c||}{LO} & \multicolumn{2}{|c|}{leading production processes at LO and NLO} \\ \cline{2-5} & \(\sigma_{\rm LO}={\cal O}(\alpha_{S}^{2})+{\cal O}(f^{4})+{\cal O}(f^{2}\alpha_ {S})\) & LO, \(\ {\cal O}(\alpha_{S}^{2})\) & NLO, \(\ {\cal O}(\alpha_{S}^{3})\) & K-fac \\ \hline \hline BP1 & \(96.39^{+31.5\%}_{-22.5\%}\) & \(105.8^{+31.3\%}_{-22.2\%}\) & \(138.5^{+9.6\%}_{-11.3\%}\) & 1.31 \\ \hline BP2 & \(114.0^{+31.9\%}_{-22.5\%}\) & \(125.7^{+31.4\%}_{-22.4\%}\) & \(162.1^{+10.1\%}_{-11.5\%}\) & 1.29 \\ \hline BP3 & \(181.5^{+32.1\%}_{-22.7\%}\) & \(201.6^{+31.3\%}_{-22.3\%}\) & \(257.3^{+9.8\%}_{-11.4\%}\) & 1.28 \\ \hline \end{tabular}
\end{table}
Table 3: Total leading-order cross section, including QCD and BSM coupling, and their interference in the pair production of VLQ at 14 TeV LHC before their decay is given in the left panel. Right panel: Leading contribution of the tree-level VLQ pair production process (\({\cal O}(\alpha_{S}^{2})\)) and its next-to-leading order cross section, along with the integrated K-factor, are given. The superscript and subscript denote the scale uncertainties (in percentage) of the total cross section. Five massless quark flavors are used for computation.
Figure 6: (a) Distribution of \(\log_{10}[P_{T}(\Psi\bar{\Psi})/GeV]\) at LO+PS and NLO+PS, and the differential K-factor for VLQ pair production at the LHC. (b) The distribution of invariant mass of the VLQ pair is shown in the upper panel, and the differential K-factor and the scale uncertainties are shown in the middle and bottom panels, respectively. The plots correspond to BP1, and LO consists only of QCD coupling.
than 1, but for \(\log_{10}[P_{T}(\Psi\bar{\Psi})/GeV]>2.6\), the K-factor is less than 1, indicating that the NLO cross section is less than the LO cross section. The differential K-factor is not flat everywhere. It is almost flat at the lower values and then starts to go down, so scaling the LO events by a constant K-factor would not give accurate results.
The invariant mass distribution of the VLQ pair is shown in the top panel of Figure 6(b) for BP1. The differential K-factor is shown in the middle panel. The invariant mass distribution peaks around 1800 GeV, and the differential K-factor is almost flat around the peak. The bottom panel shows the envelope of the factorization and renormalization scale uncertainties: the red solid and dashed lines show the width of the scale uncertainty for NLO+PS, while the blue solid and dashed lines show the LO+PS scale uncertainties. Both the LO+PS and NLO+PS results are stable, but the NLO+PS result has a much-reduced scale uncertainty. Although these findings are illustrative, since the \(\Psi\) decay is not taken into account, they demonstrate the need to include the \({\cal O}(\alpha_{s})\) corrections to the pair-production channels in order to predict the production rate with a reduced scale uncertainty.
## 5 Multivariate Analysis (MVA)
For completeness, we further carry out the collider analysis of this model using the \(t\bar{t}+{\rm MET}\) final state, as in our earlier study [17]; that reference details all the backgrounds and kinematic cuts. Here, however, we use NLO events to generate the signal. Moreover, the previous analysis assumed BR(\(\Psi\to tS_{1,2}\)) \(=1.0\), based on a top-philic coupling, which is no longer valid for a democratic coupling to the other generations. Since the calculated BR(\(\Psi\to tS_{1,2}\)) is below 0.5 for all the BPs (Table 1), the signal cross section in which both \(\Psi\)'s decay into top quarks is reduced by a factor of at least four relative to the previous analysis. On the other hand, unlike in the prior assessment, the QCD correction significantly increases the overall cross section, and the Yukawa coupling considerably impacts the partonic cross section.
The signal topology is given by\({}^{2}\)
Footnote 2: Since we consider two top-like fatjets (\(J_{t}\)) without measuring the jet charge, \(uu\to\Psi\Psi\) (through t-channel scalar exchange), followed by the decay of each VLQ into a top, can also contribute to the same signature. Interestingly, since the scalars and \(\Psi\) carry the same PQ charge, this type of t-channel exchange becomes possible only after PQ symmetry is spontaneously broken, and it ends up contributing negligibly. To give some perspective, for benchmark point BP1 we find \(\sigma_{\rm LO}(uu\to\Psi\Psi)=0.3\) fb and \(\sigma_{\rm LO}(\bar{u}\bar{u}\to\bar{\Psi}\bar{\Psi})=0\), so we safely ignore these processes.
\[pp\to\Psi\bar{\Psi}\;[{\rm QCD}]\to(t\;S_{1,2})(\bar{t}\;S_{1,2})\;j\Rightarrow 2 J_{t}+\not{E}_{T}+X \tag{15}\]
The expected signal (BP1) and background yields (quoted in fb; the expected numbers of events are obtained by multiplying by the luminosity) after each set of event selection criteria are listed as a cut flow, along with the cut efficiencies, in Table 4. In the preselection cut (C1) we demand at least two fatjets of radius \(R=1.5\), each with transverse momentum \(P_{T}(J_{0}),P_{T}(J_{1})>200\) GeV, missing transverse momentum \(\not{E}_{T}>100\) GeV, a lepton veto, and \(|\Delta\Phi(J_{0,1},\not{E}_{T})|>0.2\) (to minimize the contribution of jet mismeasurement to \(\not{E}_{T}\)). The other cuts are: (C2) \(\not{E}_{T}>150\) GeV, (C3) a b-tag within the leading or subleading fatjet, and (C4) pruned masses of the two leading fatjets \(M_{J_{0}},M_{J_{1}}>120\) GeV.
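The bracketed percentages in Table 4 are cumulative efficiencies relative to the preselection C1; a short sketch of how they follow from the quoted yields:

```python
# Signal (BP1) and total-background yields in fb after each cut (Table 4)
signal = {"C1": 5.99, "C2": 5.49, "C3": 4.58, "C4": 2.23}
background = {"C1": 5081.74, "C2": 2940.56, "C3": 725.32, "C4": 138.46}

for cut in ("C2", "C3", "C4"):
    eff_s = 100.0 * signal[cut] / signal["C1"]
    eff_b = 100.0 * background[cut] / background["C1"]
    print(f"{cut}: signal eff = {eff_s:5.2f}%, background eff = {eff_b:5.2f}%")
# After C4, ~37% of the signal survives while only ~2.7% of the total background remains.
```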
After applying the preselection cut (C1), we find \(V\)+jets (\(V=Z,W\)) are the principal background while \(t\bar{t}\)+jets is the subdominant background. However, after a b-tag within \(J_{0}\) or \(J_{1}\) and demanding large fatjet masses, we found \(t\bar{t}\)+jets becomes the primary background, while \(V\)+jets are the subdominant. Applying all those cuts, we still retain a substantial number of signal events while the background reduces significantly. All the signal and background processes are passed through all these event selection criteria up to C4 before passing events to MVA. We create two separate signal and background classes. The combined background is the weighted combination of all the different background processes. Each signal and background class is randomly divided into 50% for training and the rest 50% for testing. We use boosted decision tree (BDT) algorithm and choose a set of kinematic variables from a wider collection of variables for MVA. The variables with high relative importance distinguishing the signal class from the background class are preferable. Table 5 lists the relative importance of the various kinematic variables involved in the MVA. The left (signal) and right (background) tables of Figure 7 show the linear correlation coefficients among the variables employed in MVA for BP1.
Reference [17] provides the normalized distributions for all background processes after performing all event selections up to C4. We do not show these distributions here, since their shapes are qualitatively similar and add little to the physics understanding. The normalized distribution of the BDT response for the test and training samples of both the signal (BP1) and background classes is plotted on the left side of Figure 8. We find that signal and background are well separated. The signal and background efficiencies, as well as the statistical significance (\(\frac{N_{S}}{\sqrt{N_{S}+N_{B}}}\)) for 139 fb\({}^{-1}\) of data, as functions of the cut applied to the BDT output, are presented in the right plot of
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline & \multicolumn{1}{|c|}{**Signal**} & \multicolumn{1}{|c|}{\(Z\)+**jets**} & \multicolumn{1}{|c|}{\(W\)+**jets**} & \multicolumn{1}{|c|}{\(t\bar{t}\)+**jets**} & \multicolumn{1}{|c|}{\(tW\)+**jets**} & \multicolumn{1}{|c|}{\(WZ\)+**j**} & \multicolumn{1}{|c|}{\(WW\)+**j**} & \multicolumn{1}{|c|}{\(ZZ\)+**j**} & \multicolumn{1}{|c|}{\(t\bar{t}\)\(V\)} & \multicolumn{1}{|c|}{**tot BG**} \\ & \multicolumn{1}{|c|}{**(BP1)**} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} & \multicolumn{1}{|c|}{(fb)} \\ \hline \hline C1 & 5.99 & 2517.99 & 1366.91 & 690.65 & 366.91 & 93.53 & 25.90 & 11.51 & 8.34 & 5081.74 \\ & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) & \([100\%]\) \\ \hline C2 & 5.49 & 1640.29 & 762.59 & 302.16 & 152.52 & 58.35 & 11.51 & 6.973 & 6.17 & 2940.56 \\ & \([91.65\%]\) & \([65.14\%]\) & \([55.79\%]\) & \([43.75\%]\) & \([41.57\%]\) & \([62.39\%]\) & \([44.44\%]\) & \([60.58\%]\) & \([73.98\%]\) & \([57.87\%]\) \\ \hline C3 & 4.58 & 241.73 & 117.99 & 230.94 & 114.39 & 10.79 & 2.45 & 1.92 & 5.11 & 725.32 \\ & \([76.46\%]\) & \([9.60\%]\) & \([8.63\%]\) & \([33.44\%]\) & \([31.18\%]\) & \([11.54\%]\) & \([9.46\%]\) & \([16.69\%]\) & \([61.27\%]\) & \([14.27\%]\) \\ \hline C4 & 2.23 & 25.38 & 17.33 & 64.23 & 27.45 & 1.24 & 0.33 & 0.2 & 2.30 & 138.46 \\ & \([37.23\%]\) & \([1.01\%]\) & \([1.27\%]\) & \([9.30\%]\) & \([7.48\%]\) & \([1.33\%]\) & \([1.27\%]\) & \([1.74\%]\) & \([28.13\%]\) & \([2.72\%]\) \\ \hline \end{tabular}
\end{table}
Table 4: After applying various kinematic event selection cuts, signal and background events (in fb) indicate the efficiency for each set of cuts to reduce the backgrounds. The kinematic cuts (C1-C4) are described in the text. After applying the C4 cut, the remaining events are passed for the multivariate analysis.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline & \(\not{E}_{T}\) & \(\Delta\phi(J_{1},\not{E}_{T})\) & \(\Delta R(J_{0},J_{1})\) & \(\tau_{32}(J_{1})\) & \(\tau_{32}(J_{0})\) & \(\Delta\phi(J_{0},\not{E}_{T})\) & M\({}_{\rm eff}\) & \(M(J_{1})\) & \(M(J_{0})\) & \(\tau_{33}(J_{0})\) & \(\tau_{31}(J_{1})\) \\ \hline \hline BP1 & 31.29 & 20.19 & 17.82 & 8.61 & 8.49 & 8.38 & 3.29 & 2.10 & 1.48 & 1.10 & 0.9 \\ \hline BP2 & 19.39 & 16.74 & 17.39 & 6.75 & 6.99 & 8.04 & 2.26 & 0.67 & 0.72 & 1.11 & 0.73 \\ \hline BP3 & 9.25 & 11.30 & 12.11 & 6.52 & 5.74 & 6.55 & 1.29 & 0.95 & 0.38 & 0.52 & 0.72 \\ \hline \end{tabular}
\end{table}
Table 5: Method unspecific relative separation power of different kinematic variables in separating the signal and background classes.
Figure 8. Before applying any cuts to the BDT output, Table 6 shows the number of signals (\(N_{S}^{bc}\)) and background (\(N_{SM}\)) events for various BPs. It also shows the expected number of signal events (\(N_{S}\)) and background events (\(N_{B}\)) that remain after applying an optimal cut (BD\(T_{opt}\)) to the BDT output. The last two columns show the statistical significance of the signal at 139 \(\mathrm{fb}^{-1}\) luminosity and the signal-to-background ratio. We optimize each of the three BPs separately.
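The last two columns of Table 6 follow directly from the post-BDT yields and the integrated luminosity; for example, for BP1:

```python
import math

lumi = 139.0                         # fb^-1
N_S_fb, N_B_fb = 0.8012, 1.0783      # BP1 yields after the optimal BDT cut (Table 6), in fb

N_S, N_B = N_S_fb * lumi, N_B_fb * lumi
significance = N_S / math.sqrt(N_S + N_B)
print(f"significance = {significance:.2f}, S/B = {N_S_fb / N_B_fb:.3f}")
# ~6.89 and ~0.743, matching the BP1 row of Table 6
```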
Table 6 shows that the statistical significance of BP3 is lower than that of the other two benchmark points, even though it has the largest partonic cross section for VLQ pair
Figure 8: The left panel shows the normalized distribution of the BDT output for training and testing samples of both signal and background. The statistical significance of the signal with the cut applied to the BDT output is shown in the right panel, along with signal and background efficiency.
Figure 7: Coefficients of linear correlation (in percentage) between the various kinematic variables for the signal (BP1, left panel) and background (right panel) are presented. Missing entries have an insignificant correlation of less than one percent. Positive and negative coefficients indicate correlated and anti-correlated variables, respectively.
production, since the VLQ mass is smallest for BP3. This is attributed to the smaller mass difference between the VLQ and DM compared to the other two BPs, which results in a less boosted top quark and a smaller signal efficiency. Table 6 also demonstrates that a significant parameter space of this model can be explored with more than \(5\sigma\) significance using 139 fb\({}^{-1}\) of data at the 14 TeV LHC.
## 6 Conclusions
We explore a complex-scalar-extended KSVZ axion framework, where the scalar is a singlet under the SM gauge groups and carries only a Peccei-Quinn charge. This model can address two outstanding problems of the SM: it solves the strong-CP problem and provides a natural dark matter candidate in the form of the QCD axion, whose lifetime is comparable to the age of the Universe. The axion alone can supply the dark matter relic density measured by the Planck collaboration, but only at the expense of fine-tuning the corresponding breaking scale. The residual \(\mathbb{Z}_{2}\) symmetry of the model ensures that the lightest component of the scalar extension is stable and thus plays the role of a second dark matter component, removing the need for any such fine-tuning.
The KSVZ axion framework also provides rich phenomenology by introducing a vector-like quark, which can be explored at a hadron collider such as the LHC. In the extended scenario, the VLQ interacts with the scalar DM candidate and with SM quarks (up- or down-type, depending on its hypercharge). Hence the VLQ now plays a critical role in the dark matter phenomenology, because it opens up new annihilation and co-annihilation channels.
Here, we explore the possibility of a democratic Yukawa interaction of the vector-like quark with all up-type quarks and the scalar dark matter candidate. One must find the allowed parameter space that provides the correct relic density and agrees with other experimental observations such as direct detection (DD) and collider data. We find that the flavour constraint strongly disfavours the fully democratic option and requires one or both of the lighter-flavour couplings (\(f_{u},f_{c}\)) to be tiny. For simplicity, we set \(f_{c}=0\) while keeping the other two couplings democratic. Interestingly, direct detection does not allow an arbitrarily large coupling \(f(=f_{u}=f_{t})\), while too small a value fails to produce the correct relic density; the interplay between the two therefore remains vital in selecting the viable parameter space.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(N_{S}^{bc}\) (fb) & BDT\({}_{opt}\) & \(N_{S}\) (fb) & \(N_{B}\) (fb) & \(\frac{N_{S}}{\sqrt{N_{S}+N_{B}}}\) for 139 fb\({}^{-1}\) & \(\frac{N_{S}}{N_{B}}\) \\ \hline \hline BP1 & 2.23 & 0.3883 & 0.8012 & 1.0783 & 6.89 & 0.743 \\ \hline BP2 & 2.53 & 0.2582 & 1.2207 & 6.1302 & 5.31 & 0.199 \\ \hline BP3 & 1.67 & 0.2961 & 0.4252 & 2.3529 & 3.0 & 0.180 \\ \hline \(N_{SM}\) & 138.46 & & & & & \\ \hline \end{tabular}
\end{table}
Table 6: The table shows the efficacy of the current search in terms of statistical significance for various benchmarks. \(N_{S}^{bc}\) and \(N_{SM}\) are the total numbers of signal and background events before performing MVA (see Table 4), while \(N_{S}\) and \(N_{B}\), respectively, provide those following BDT analysis. BDT\({}_{opt}\) is the optimal BDT cut. The second-to-last column provides the statistical significance of the signal for 139 fb\({}^{-1}\) luminosity.
We employ the next-to-leading-order (NLO) QCD correction for VLQ pair production to study a distinctive search strategy at the LHC, in which a pair of boosted tops is produced together with sizeable missing transverse momentum. Boosted top-like fatjets arising from the hadronic decay of the top quark still carry its characteristic features, which are captured through a dedicated jet analysis and various substructure variables. A multivariate analysis using these variables and other attributes of the event topology is demonstrated to have a strong ability to explore a significant portion of the parameter space of this model at the 14 TeV LHC.
We thank Dr. Satyajit Seth for the fruitful discussions. This work is supported by the Physical Research Laboratory (PRL), Department of Space, Government of India. Computational work was performed using the HPC resources (Vikram-100 HPC) and TDP project at PRL.
Direct detection channels: Three different channels (Figure 10) are possible at tree level for the scattering process \(S_{1}(p_{1})\ u(p_{2})\to S_{1}(p_{4})\ u(p_{3})\): the VLQ-mediated s-channel, the VLQ-mediated t-channel, and the Higgs-mediated t-channel diagrams. The total cross section comprises the squared amplitudes of the individual channels and the interferences between the different diagrams; both are provided below.
Amplitude square of VLQ-mediated s-channel diagram:
\[{\cal M}_{1}^{\dagger}{\cal M}_{1}=\frac{f^{4}}{4}\ \frac{{\cal N}+m_{u}^{2}\ (p_{1}.p_{2}+p_{1}.p_{3})}{[(p_{1}+p_{2})^{2}-M_{\Psi}^{2}\ ]^{2}} \tag{A.1}\]
Amplitude square of VLQ-mediated t-channel diagram:
\[{\cal M}_{2}^{\dagger}{\cal M}_{2}=\frac{f^{4}}{4}\ \frac{{\cal N}-m_{u}^{2}\ (p_{1}.p_{2}+p_{1}.p_{3})}{[(p_{3}-p_{1})^{2}-M_{\Psi}^{2}\ ]^{2}} \tag{A.2}\]
Interference between VLQ-mediated s and t-channel diagrams:
\[2{\cal M}_{1}^{\dagger}{\cal M}_{2}=-2\times\frac{f^{4}}{4}\ \frac{{\cal N}+m_{u}^{2}\ (-p_{1}.p_{2}+p_{1}.p_{3})}{[(p_{1}+p_{2})^{2}-M_{\Psi}^{2}\ ][(p_{3}-p_{1})^{2}-M_{\Psi}^{2}\ ]} \tag{A.3}\]
Where \(p_{2}^{2}=p_{3}^{2}=m_{u}^{2}\), \(p_{1}^{2}=p_{4}^{2}=M_{S_{1}}^{2}\), and \({\cal N}\) is given below.
\[{\cal N}=2(p_{1}.p_{3})(p_{1}.p_{2})+M_{S_{1}}^{2}\ (p_{1}.p_{3}-p_{1}.p_{2}-m_{u}^{2})+m_{u}^{4} \tag{A.4}\]
Amplitude square of Higgs-mediated t-channel diagram:
\[{\cal M}_{3}^{\dagger}{\cal M}_{3}=2m_{q}^{2}\lambda_{SH}^{2}\cos^{2}\theta\ \frac{p_{1}.p_{3}-p_{1}.p_{2}-2m_{u}^{2}}{[(p_{4}-p_{1})^{2}-M_{h}^{2}\ ]^{2}} \tag{A.5}\]
Interference between VLQ-mediated s-channel and Higgs-mediated t-channel diagrams:
\[2{\cal M}_{1}^{\dagger}{\cal M}_{3}=2m_{u}^{2}\lambda_{SH}\cos\theta f^{2}\ \frac{p_{1}.p_{2}+m_{u}^{2}}{[(p_{1}+p_{2})^{2}-M_{\Psi}^{2}\ ][(p_{4}-p_{1})^{2}-M_{h}^{2}\ ]} \tag{A.6}\]
Interference between VLQ-mediated t-channel and Higgs-mediated t-channel diagrams:
\[2{\cal M}_{2}^{\dagger}{\cal M}_{3}=-\ 2m_{u}^{2}\lambda_{SH}\cos\theta f^{2}\ \frac{p_{1}.p_{3}-m_{u}^{2}}{[(p_{3}-p_{1})^{2}-M_{\Psi}^{2}\ ][(p_{4}-p_{1})^{2}-M_{h}^{2}\ ]} \tag{A.7}\]
|
2302.03790 | GraphGUIDE: interpretable and controllable conditional graph generation
with discrete Bernoulli diffusion | Diffusion models achieve state-of-the-art performance in generating realistic
objects and have been successfully applied to images, text, and videos. Recent
work has shown that diffusion can also be defined on graphs, including graph
representations of drug-like molecules. Unfortunately, it remains difficult to
perform conditional generation on graphs in a way which is interpretable and
controllable. In this work, we propose GraphGUIDE, a novel framework for graph
generation using diffusion models, where edges in the graph are flipped or set
at each discrete time step. We demonstrate GraphGUIDE on several graph
datasets, and show that it enables full control over the conditional generation
of arbitrary structural properties without relying on predefined labels. Our
framework for graph diffusion can have a large impact on the interpretable
conditional generation of graphs, including the generation of drug-like
molecules with desired properties in a way which is informed by experimental
evidence. | Alex M. Tseng, Nathaniel Diamant, Tommaso Biancalani, Gabriele Scalia | 2023-02-07T22:58:29Z | http://arxiv.org/abs/2302.03790v1 | GraphGUIDE: interpretable and controllable conditional graph generation with discrete Bernoulli diffusion
###### Abstract
Diffusion models achieve state-of-the-art performance in generating realistic objects and have been successfully applied to images, text, and videos. Recent work has shown that diffusion can also be defined on graphs, including graph representations of drug-like molecules. Unfortunately, it remains difficult to perform conditional generation on graphs in a way which is interpretable and controllable. In this work, we propose GraphGUIDE, a novel framework for graph generation using diffusion models, where edges in the graph are flipped or set at each discrete time step. We demonstrate GraphGUIDE on several graph datasets, and show that it enables full control over the conditional generation of arbitrary structural properties without relying on predefined labels. Our framework for graph diffusion can have a large impact on the interpretable conditional generation of graphs, including the generation of drug-like molecules with desired properties in a way which is informed by experimental evidence.
## 1 Introduction
Diffusion models have rapidly become the state-of-the-art method for generating many kinds of data (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021), including images and videos (Dhariwal and Nichol, 2021; Ho et al., 2022), text (Li et al., 2022), and tabular data (Kotelnikov et al., 2022). In order to generate data objects from some distribution \(q_{0}(x)\), a diffusion model defines a _forward-diffusion_ process which iteratively adds noise (typically continuous Gaussian noise) to an object \(x_{0}\sim q_{0}(x)\). The model then learns a _reverse-diffusion_ process to iteratively denoise a diffused object back to the original \(x_{0}\). This allows the diffusion model to effectively sample new objects from \(q_{0}(x)\). The most useful and impactful developments in diffusion models, however, arguably stem from the ability to _conditionally_ generate objects--that is, generating objects \(x_{0}\) which satisfy some desired property. There are a few methods for conditional generation today, which effectively supply the model with a label \(y\) (out of a predefined set) to steer the reverse-diffusion process toward generating an object \(x_{0}\) which satisfies the property defined by \(y\) (Section 2). Using these methods, many of the most prominent recent works in diffusion models have centered on conditional generation; these include image generation by class (Song et al., 2021; Dhariwal and Nichol, 2021) and image/video synthesis from text (Rombach et al., 2021; Ho et al., 2022).
Although diffusion models have been successful in generating and conditionally generating many different data types, applying these models to generating _graphs_ poses several somewhat unique challenges, particularly for conditional generation. These challenges are major obstacles for many real-world problems, such as drug discovery, where it is critically important to be able to generate molecular graphs which have certain desired physiological or chemical properties. Although methods exist today for generating graphs using diffusion models (Niu et al., 2020; Jo et al., 2022; Vignac et al., 2022), they are limited in several ways.
One such limitation is that many current graph-diffusion methods are _continuous_ and generate reverse-diffusion intermediates which are not well-defined discrete graphs; instead, their intermediates include fractional or negative-valued edges. This severely hinders the interpretability and controllability of these methods; after all, it is not easy to decipher what \(-0.2\) in an adjacency matrix means, or how to control generation where the probability of an edge is 0.6.
A bigger set of issues lies in the currently available methods for _conditional generation_, which are rather limiting for graph diffusion (including for diffusion frameworks which are discrete). Firstly, current conditional-generation methods supply a label \(y\) to influence reverse diffusion as a soft constraint. This renders conditional generation an inscrutable process which prevents humans from interpreting or controlling the generated graph sample (this is even worse for continuous-diffusion frameworks). For many real-world applications such as drug discovery, being able to very precisely
control the generated outputs for specific features (e.g. functional groups) is an important feature for a generative algorithm. Secondly, current conditional-generation methods rely on a predefined set of possible labels to condition on, and the addition of new properties necessitates the retraining of all or part of the model. For many graph applications, this is particularly limiting, as there are a huge number of specific structural properties that may be conditioned on. For example, in the fields of drug discovery and molecule design, it is not uncommon to desire a molecular graph which might contain a benzene, or toluene, or aniline, etc. Thus, for many real-world graph-generation problems such as drug design, both these limitations in current conditional-generation methods can pose a serious obstacle that prevents more widespread adoption of graph diffusion.
In this work, we present GraphGUIDE (**Graph** Generation **U**sing **I**nterpretable **D**iffusion on **E**dges), an alternative graph-diffusion framework that addresses these limitations. GraphGUIDE relies on a diffusion process which is fully discrete, defined by flipping edges in and out of existence based on a Bernoulli random process. We define three different diffusion kernels, which add, delete, or flip edges randomly. Thus, at each point in both forward and reverse diffusion, the intermediate is an interpretable, well-defined graph. More importantly, this allows for _full, interpretable control_ of conditional generation using the appropriate kernel _without any reliance on a predefined set of labels_.
To summarize, our main contributions are the following:
* We present a novel framework, GraphGUIDE, for interpretable and controllable graph generation based on discrete diffusion of graph edges.
* We derive three discrete diffusion kernels based on the Bernoulli distribution, including a more efficient and stable parameterization of a kernel based on symmetrically flipping bits, and two novel kernels based on asymmetrically setting bits.
* We compare our method to other recent state-of-the-art graph-generation methods on benchmark datasets, showing that it achieves comparable generation quality.
* We demonstrate that GraphGUIDE enables full control of arbitrary structural properties--which do not need to be predefined at training time--during generation, thus allowing the injection of custom priors such as the presence or absence of specific graph motifs.
## 2 Related work
### Diffusion models
Sohl-Dickstein et al. (2015) first described diffusion models as a method of generating data from some distribution. Given data samples \(x_{0}\) drawn from some complex (and potentially high-dimensional) data distribution \(q_{0}(x)\), the challenge of generative modeling is to find a way to model and generate novel data samples from the underlying distribution \(q_{0}(x)\), even though it is intractable to describe and only a limited number of examples are available. To address this challenge, a diffusion model learns a _bidirectional_ mapping between the original data distribution \(q_{0}(x)\) and some _prior_ distribution \(\pi(x)\), where the prior distribution is tractable and can be sampled from easily. In a diffusion model, progressive amounts of random noise are added to some data sample \(x_{0}\sim q_{0}(x)\) in the forward direction, thereby obtaining a noisy sample \(x_{t}\sim q_{t}(x)\). As \(t\) approaches the time horizon \(T\) through many time steps, the distribution of noisy objects \(q_{t}(x)\) approaches the tractable prior \(q_{T}(x)=\pi(x)\). The core complexity of the diffusion model is to _learn_ the reverse-diffusion process by fitting a model \(p_{\theta}(x_{t},t)\) (typically a neural network) to effectively denoise any \(x_{t}\) into a slightly less noisy \(x_{t-1}\). After training the neural network on available data samples and many time points \(t\), novel samples can be generated from \(q_{0}(x)\) by first sampling from \(\pi(x)\), and iteratively applying the neural network predictions to obtain progressively less and less noisy samples until reaching a final \(x_{0}\sim q_{0}(x)\).
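As a schematic illustration of the two phases described above (a framework-agnostic sketch, not the training objective of any specific diffusion variant), the training and sampling loops can be written as follows; p_theta, add_noise, reverse_step, and loss_fn are placeholder callables.

```python
import torch

def train_step(p_theta, x0, T, add_noise, loss_fn, optimizer):
    """One schematic training step: noise clean samples to a random time t,
    then train the network to predict the appropriate denoising target."""
    t = torch.randint(1, T + 1, (x0.shape[0],))
    x_t, target = add_noise(x0, t)           # forward diffusion q(x_t | x_0)
    loss = loss_fn(p_theta(x_t, t), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def sample(p_theta, prior_sample, T, reverse_step):
    """Schematic generation: start from the tractable prior and denoise step by step."""
    x = prior_sample()                       # x_T ~ pi(x)
    for t in range(T, 0, -1):
        x = reverse_step(p_theta, x, t)      # draw x_{t-1} given x_t
    return x                                 # approximate sample from q_0(x)
```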
### Conditional generation
Beyond the generation of samples from \(q_{0}(x)\), a major goal of generative modeling is to perform _conditional generation_, where we wish to draw a sample from \(q_{0}(x)\) which satisfies a specific label or property. Within the diffusion-model literature, there are effectively two methods for conditional generation.
Classifier-guided conditional generation was proposed in Song et al. (2021), in which an external classifier \(f(x_{t})\) is trained on \(x_{t}\) to predict some label \(y\). Input gradients from this classifier are then used during the generative process to bias the generation of an object toward one which has the label \(y\). While elegant in its mathematical justification (a simple invocation of Bayes' Rule), it relies on an external classifier which is trained on noisy inputs \(x_{t}\) from across the diffusion timeline and a predefined set of labels \(y\). This method is also only readily applied to diffusion models trained in a continuous-time and continuous-noise setting, due to its reliance on gradients.
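In score-based notation, the guidance rule amounts to the Bayes'-rule decomposition of the conditional score,

\[\nabla_{x_{t}}\log p(x_{t}\mid y)=\nabla_{x_{t}}\log p(x_{t})+\nabla_{x_{t}}\log p(y\mid x_{t}),\]

where the second term is supplied by the gradients of the external classifier (in practice often multiplied by a guidance weight).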
In contrast to classifier-guided conditional generation, Ho et al. (2021) proposed an alternative method: classifier-free conditional generation. Instead of relying on an external classifier, the neural network \(p_{\theta}\) (which defines the reverse-diffusion/generative process) is trained with labels as an input: \(p_{\theta}(x_{t},t,y)\). This method for conditional generation has been exceedingly popular, and has been shown to generate state-of-the-art samples (Rombach et al., 2021; Ho
et al., 2022). Unlike classifier-guided conditional generation, this method enjoys the freedom of not relying on any external classifier, and it can be applied to discrete-time and/or discrete-noise diffusion settings.
Unfortunately, both methods for conditional generation suffer from some limitations. Firstly, both methods merely supply a reverse-diffusion-influencing signal to the generative process (i.e. through biasing gradients or through an auxiliary input). This acts as a _soft constraint_ which not only is uninterpretable, but also cannot be controlled or modified manually by a human during reverse diffusion. Secondly, both methods require a predefined set of labels \(y\) (to train either an external classifier or the diffusion model itself). The addition of new labels or properties to conditionally generate would necessitate the retraining of an entire model; in the more popular case of classifier-free conditional generation, the entire diffusion model would need to be retrained.
### Discrete diffusion kernels
The vast majority of diffusion models are trained with Gaussian noise, as this is the most well-developed kernel. Methods for discrete diffusion, however, have been proposed and utilized.
In the paper which first demonstrated diffusion models, Sohl-Dickstein et al. (2015) briefly proposed a discrete diffusion kernel based on the Bernoulli distribution--termed the "binomial" kernel--which flipped black-and-white pixels back and forth according to some probability. Unfortunately, the binomial kernel was parameterized in a way which made it inefficient for forward diffusion and unstable for reverse diffusion. Because of the large focus on Gaussian kernels in the literature, this discrete kernel has remained underdeveloped and unused until this point, and until now has not been reparameterized to behave efficiently and stably.
Later on, Hoogeboom et al. (2021) proposed an alternative way to diffuse over discrete states based on the multinomial distribution, where the forward-diffusion process slowly transforms a one-hot-encoded category vector into a uniform multinomial distribution. Austin et al. (2021) then demonstrated a similar framework where the forward-diffusion process is defined as Markov transitions over discrete states; the authors showed that multinomial diffusion is a special case of this framework. Although these methods for discrete diffusion have been successfully applied to unconditional generation, there is very limited usage of these kernels for conditional generation (in graphs or other data types).
### Graph generation
For the problem of graph generation, there is already a large body of literature which spans many techniques, from autoregressive to one-shot. This includes GraphRNN (You et al., 2018), GRAN (Liao et al., 2019), MolGAN (Cao and Kipf, 2018), and SPECTRE (Martinkus et al., 2022), among others.
For graph generation using diffusion models specifically, a few methods have been proposed, each of which solves the problem of graph discreteness differently. Niu et al. (2020) demonstrated continuous diffusion on the adjacency matrix using Gaussian noise--a relaxation of the discreteness of edges. This generated graph-diffusion intermediates which had fractional and negative edges. Lee et al. (2022) then adapted this framework and performed conditional generation on molecular graphs by incorporating gradients from an external property-prediction network--an application of classifier guidance, made possible by the continuous relaxation (Song et al., 2021). Methods such as these, however--which diffuse on adjacency matrices--are severely hindered in their ability to conditionally generate structural properties. There are many equivalent adjacency-matrix orderings which satisfy the same structural property, which makes it difficult to inject this form of inductive bias into the generative process. Of course, these methods also suffer from the aforementioned limitations inherent to classifier-guided conditional generation.
Later on, Vignac et al. (2022) applied the Markovian diffusion framework proposed by Austin et al. (2021) on discrete graph adjacency matrices and node features. The authors showed that their method--DiGress--achieved state-of-the-art generative performance compared to other graph-generation methods, including those listed above. The authors also attempted to apply classifier-guided conditional generation using a discretized approximation of gradients, although this was limited by the conflict between continuous gradients and discrete diffusion, as well as the limitations inherent to current conditional-generation methods (i.e. they are uninterpretable soft constraints which require predefining a set of labels at training time).
## 3 GraphGUIDE for conditional graph generation
Previous work in graph diffusion either created uninterpretable diffusion intermediates or lacked a component which allowed for successful control over conditional generation. Here, we propose GraphGUIDE as an alternative framework for graph generation which allows for more interpretable and controllable generation (Figure 1).
In this section, we present discrete diffusion kernels for graph generation (Section 3.1), demonstrate that they are capable of achieving generative performance comparable to other state-of-the-art methods (Section 3.2), and showcase the ease with which graph generation can be controlled for arbitrary structural properties using our GraphGUIDE framework (Section 3.3).
### Bernoulli diffusion on edges
In order for GraphGUIDE to diffuse on graphs in a discrete and controllable manner, we define three discrete diffusion kernels based on the Bernoulli distribution (Figure 2). Consider a binary vector \(x_{t}\) to be diffused. The diffusion kernels will alter the bits randomly until a prior distribution is reached. Importantly, instead of performing diffusion on some continuous states, we define the Bernoulli diffusion kernels to _directly_ flip or set bits, thereby ensuring that every \(x_{t}\) in the forward-diffusion process is a fully well-defined binary vector. We define three diffusion kernels for this process:
1. Bit-flip kernel: at time \(t\), flip each bit with probability \(\beta_{t}\).
2. Bit-one kernel: at time \(t\), set each bit to 1 with probability \(\beta_{t}\) (if the bit is already 1, do nothing).
3. Bit-zero kernel: at time \(t\), set each bit to 0 with probability \(\beta_{t}\) (if the bit is already 0, do nothing).
For all three kernels, we assume there is a fixed noise schedule \(\beta_{t}\) with \(t\in\{1,...,T\}\). A typical noise schedule monotonically increases from \(\beta_{1}=0\) to \(\beta_{T}=\frac{1}{2}\). Note that the forward-diffusion process defines an independent Bernoulli distribution for each entry in \(x_{t}\) at every time \(t\).
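A minimal sketch of a single forward-diffusion step for the three kernels, assuming the state is a NumPy vector of 0/1 integers (names are illustrative):

```python
import numpy as np

def bernoulli_forward_step(x_prev, beta_t, kernel, rng=None):
    """One forward-diffusion step q(x_t | x_{t-1}) on a 0/1 vector.
    kernel is 'flip', 'one', or 'zero'; beta_t is the noise level at step t."""
    rng = rng or np.random.default_rng()
    x = x_prev.copy()
    hit = rng.random(x.shape) < beta_t   # bits the kernel acts on at this step
    if kernel == "flip":
        x[hit] = 1 - x[hit]              # bit-flip: toggle the selected bits
    elif kernel == "one":
        x[hit] = 1                       # bit-one: set the selected bits to 1
    elif kernel == "zero":
        x[hit] = 0                       # bit-zero: set the selected bits to 0
    else:
        raise ValueError(kernel)
    return x
```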
For each of these three kernels, we derive an analytical formula for the marginal forward distribution \(q(x_{t}|x_{0})\) and for the marginal posterior distribution \(q(x_{t-1}|x_{t},x_{0})\). Importantly, we derive parameterizations for these kernels which satisfy two properties: 1) \(q(x_{t}|x_{0})\) is also a Bernoulli distribution for any \(t\), and it is very tractable to compute its parameters; and 2) the reverse-diffusion posterior distribution \(q(x_{t-1}|x_{t},x_{0})\) is also a tractable Bernoulli distribution, so the reverse-diffusion process can be modeled by learning the posterior's parameters directly, leading to more stable behavior. These two properties are critical for the efficient training of diffusion models which stably generate high-quality samples (Ho et al., 2020).
We present the formulae for our three Bernoulli kernels, including the forward distribution, the posterior distribution, and the prior distribution (Table 1). See Appendix A for derivations.
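For illustration, the forward marginals in Table 1 can be evaluated directly from the noise schedule. The sketch below assumes `betas[i]` holds \(\beta_{i}\) for \(i=1,\dots,T\) (entry 0 unused) and uses the algebraically equivalent form \(2^{t-1}\prod_{i=1}^{t}(\frac{1}{2}-\beta_{i})=\frac{1}{2}\prod_{i=1}^{t}(1-2\beta_{i})\) for the bit-flip kernel to avoid large intermediate powers:

```python
import numpy as np

def forward_prob_one(x0, betas, t, kernel):
    """P(x_t = 1 | x_0) for the three Bernoulli kernels (cf. Table 1)."""
    b = np.asarray(betas, dtype=float)[1 : t + 1]
    if kernel == "flip":
        stay = 0.5 + 0.5 * np.prod(1.0 - 2.0 * b)   # P(x_t = x_0)
        return x0 * stay + (1 - x0) * (1.0 - stay)
    if kernel == "one":
        return x0 + (1 - x0) * (1.0 - np.prod(1.0 - b))
    if kernel == "zero":
        return x0 * np.prod(1.0 - b)
    raise ValueError(kernel)
```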
These three Bernoulli kernels are defined on general bits. In order to generate undirected graphs, we define \(x_{t}\) as a binary vector denoting which edges exist in the graph (1 if the edge exists, and 0 otherwise). This vector has size \(\binom{n}{2}\), where \(n\) is the number of nodes in the graph (we do not allow self-edges or multi-edges, but our work can be extended easily to accommodate both cases; see Section 4). We diffuse on an undirected graph by applying a Bernoulli kernel to this
Figure 1: Interpretable and controllable graph diffusion. In order to generate discrete graphs using a diffusion model, we define the forward-diffusion noising process to flip edges in and out of existence to reach a prior distribution. All diffusion intermediates are interpretable well-defined graphs. In reverse diffusion, intermediates are also well formed, and therefore are fully manually controllable. In the GraphGUIDE framework, if a certain set of edges or graph motifs are desired (or undesired), appropriate edges may be manually added (or pruned, respectively) to generate a graph sample with the desired property.
Figure 2: Summary of Bernoulli diffusion kernels applied to graph edges. The Bernoulli diffusion kernels operate on binary states. At each stage of forward diffusion, bits are flipped or set according to some probability. When applied to binary edge states in a graph (i.e. whether or not an edge exists), the Bernoulli kernels diffuse graphs by flipping (or setting) edges in or out of existence. This generates intermediates which are all well-defined graphs in both forward and reverse diffusion. We propose three Bernoulli diffusion kernels which flip edges on or off randomly (top), add edges randomly (middle), or delete edges randomly (bottom).
binary edge vector. That is, the forward-diffusion process adds or removes edges (or both) randomly, and the reverse-diffusion process reconstructs a graph by deciding which nodes to link or unlink. The precise behavior depends on which of the three kernels is being used.
When applied to graph edges, the bit-flip kernel slowly flips edges in and out of existence with increasing probability until the graph approaches a prior which is the Erdos-Renyi graph (with \(p=0.5\) for a typical noise schedule). In reverse diffusion, the bit-flip kernel starts with an Erdos-Renyi graph and iteratively toggles edges on and off until a final graph sample is recovered. The bit-one kernel slowly adds edges randomly to the graph until reaching the prior, which is the complete graph (i.e. all possible edges exist). In the reverse direction, bit-one diffusion successively removes edges until obtaining a graph sample. Finally, the bit-zero kernel slowly removes random edges, approaching the prior of the empty graph (i.e. no edges exist at all). In reverse diffusion with the bit-zero kernel, edges are slowly added until a final graph sample is formed. In our experiments below, we focus on discrete diffusion over edge existence alone, as this is sufficient for many applications, including molecule design (see Section 4 for more details).
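For concreteness, the conversion between an undirected adjacency matrix and the binary edge vector used for diffusion might be implemented as follows (a sketch assuming NumPy arrays; the original implementation may differ):

```python
import numpy as np

def adjacency_to_edge_vector(adj):
    """Flatten an undirected adjacency matrix (no self-edges) into the
    binary edge vector of length n*(n-1)/2 used for Bernoulli diffusion."""
    iu = np.triu_indices(adj.shape[0], k=1)
    return adj[iu].astype(np.int64)

def edge_vector_to_adjacency(x, n):
    """Inverse map: rebuild the symmetric adjacency matrix from the edge vector."""
    adj = np.zeros((n, n), dtype=np.int64)
    adj[np.triu_indices(n, k=1)] = x
    return adj + adj.T
```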
### Generative performance
We compare the generative performance of graph-diffusion models trained with our Bernoulli kernels with that of other graph-generation methods. Over two well-known benchmark datasets (community-small and stochastic block models), we use the maximum-mean-discrepancy (MMD) metric to quantify how similar the generated graphs are to the training set (Tables 2-3). We compute the MMD between the distributions of degrees, clustering coefficients, and orbit counts (Hocevar and Demsar, 2014). We report an MMD ratio: the MMD between generated graphs and the training set, normalized by the MMD between the training set and an independently sampled validation set. We compare our MMD ratios to those reported by other graph-generation methods (when available). A lower MMD is better. Note that in some cases, these other methods erroneously label MMD _squared_ as MMD in their text, whereas we report MMD (i.e. values taken from other works which report MMD squared have been square rooted).
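As a simplified sketch of this evaluation (the exact kernels and graph statistics follow the cited prior work and may differ from this version), an MMD estimate between two collections of per-graph descriptor vectors, together with the reported ratio, could be computed as:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased (V-statistic) MMD^2 between two samples of descriptor vectors,
    e.g. per-graph degree histograms, using a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    m, n = len(X), len(Y)
    return k(X, X).sum() / m**2 + k(Y, Y).sum() / n**2 - 2.0 * k(X, Y).sum() / (m * n)

# MMD ratio as reported in Tables 2-3 (square roots give MMD rather than MMD^2):
# ratio = np.sqrt(mmd2(generated, train)) / np.sqrt(mmd2(validation, train))
```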
These experiments demonstrate that diffusion using the discrete Bernoulli kernels on graph edges achieves performance comparable to other state-of-the-art methods for graph generation, including other discrete graph-diffusion methods such as DiGress (Vignac et al., 2022).
### Graph generation with interpretable control
GraphGUIDE employs these discrete Bernoulli kernels because they result in perfectly well-defined graphs at each intermediate stage of the diffusion process. Not only are these intermediates more readily interpretable, but they also
\begin{table}
\begin{tabular}{l l l l} Kernel & Forward \(q(x_{t}=1|x_{0})\) & Posterior \(q(x_{t-1}=1|x_{t},x_{0})\) & Prior \(\pi(x_{T}=1)\) \\ \hline Bit-flip & \(\frac{1}{2}+(2x_{0}-1)2^{t-1}\prod\limits_{i=1}^{t}(\frac{1}{2}-\beta_{i})\) & \(\frac{(x_{t}+\beta_{t}-2x_{t}\beta_{t})(\frac{1}{2}+(2x_{0}-1)2^{t-2}\prod\limits_{i=1}^{t-1}(\frac{1}{2}-\beta_{i}))}{\frac{1}{2}+(1-2(x_{t}-x_{0})^{2})2^{t-1}\prod\limits_{i=1}^{t}(\frac{1}{2}-\beta_{i})}\) & \(\frac{1}{2}\) \\ Bit-one & \(x_{0}+(1-x_{0})(1-\prod\limits_{i=1}^{t}(1-\beta_{i}))\) & \(\frac{x_{t}(x_{0}+(1-x_{0})(1-\prod\limits_{i=1}^{t-1}(1-\beta_{i})))}{x_{0}x_{t}+(1-x_{0})x_{t}(1-\prod\limits_{i=1}^{t}(1-\beta_{i}))+(1-x_{0})(1-x_{t})\prod\limits_{i=1}^{t}(1-\beta_{i})}\) & \(1^{\ddagger}\) \\ Bit-zero & \(x_{0}\prod\limits_{i=1}^{t}(1-\beta_{i})\) & \(\frac{(x_{t}+\beta_{t}-2x_{t}\beta_{t})x_{0}\prod\limits_{i=1}^{t-1}(1-\beta_{i})}{(1-x_{0})(1-x_{t})+x_{0}(1-x_{t})(1-\prod\limits_{i=1}^{t}(1-\beta_{i}))+x_{0}x_{t}\prod\limits_{i=1}^{t}(1-\beta_{i})}\) & \(0^{\ddagger}\) \\ \end{tabular}
\end{table}
Table 1: Forward, posterior, and prior distributions for three Bernoulli diffusion kernels. See Appendix A for derivations.
\begin{table}
\begin{tabular}{l c c c} Method & Deg. \(\downarrow\) & Clus. \(\downarrow\) & Orbit \(\downarrow\) \\ \hline GraphRNN & 2.00 & 1.31 & 2.00 \\ GRAN & 1.73 & 1.25 & 1.00 \\ MolGAN & 1.73 & 1.36 & 1.00 \\ SPECTRE & 1.00 & 1.73 & 1.00 \\ DiGress & 1.00 & 0.95 & **1.00** \\ \hline Bit-flip & **0.99** & **0.58** & 2.55 \\ Bit-one & 1.21 & 0.62 & 1.83 \\ Bit-zero & 1.87 & 1.02 & 4.69 \\ \end{tabular}
\end{table}
Table 2: Bernoulli edge diffusion MMD ratio (community-small)
allow the generation process to be easily controlled. At any stage of the reverse-diffusion process, edges or graph motifs that are desired can be manually retained in the intermediate (or symmetrically, edges or motifs that are not desired can be prevented from forming).
In order to illustrate the ease at which graph generation can be controlled with GraphGUIDE, we show some example graphs generated to have specific desired properties (Figures 3-4). First, we trained a model using the bit-one kernel to generate cliques of sizes three to six. Graphs that were generated unconditionally (i.e. without manual control) contained cliques of various sizes (Figure 3, top). We then _conditionally_ generated graphs by manually controlling the reverse-diffusion process so that all generated graphs would have a clique of size 6 (Figure 4, top). Recall that the bit-one kernel confers a prior distribution which is the complete graph (i.e. \(x_{T}\) corresponds to a graph with all possible edges). In the reverse-diffusion process, edges are gradually removed to recover a sample of \(x_{0}\). For each graph generated under manual control, we arbitrarily selected 6 nodes. Throughout the reverse-diffusion process--at each step--we ensured that no edges were removed between any of these 6 nodes. If any such edges were removed at a reverse-diffusion step, they were added back before the next step. As a result, all graphs generated using this procedure contained a 6-clique. Intriguingly, the model also extrapolated from the data and generated 7- and 8-cliques, which contain 6-cliques as subgraphs.
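A sketch of this manual-control loop for the bit-one kernel (names are illustrative; `x_T` is the edge vector of the complete graph, and `keep_idx` indexes the \(\binom{6}{2}=15\) edge-vector entries linking the six chosen nodes):

```python
def controlled_reverse_diffusion(denoise_step, x_T, num_steps, keep_idx):
    """Reverse diffusion with manual control under the bit-one kernel: after
    every denoising step, re-add any protected edge the model removed, so the
    final sample is guaranteed to contain the desired motif (e.g. a 6-clique)."""
    x = x_T.copy()                 # bit-one prior: all edges present
    for t in range(num_steps, 0, -1):
        x = denoise_step(x, t)     # model proposes x_{t-1} by removing some edges
        x[keep_idx] = 1            # clamp: edges among the chosen nodes stay present
    return x
```

The community and ring examples below follow the same pattern, clamping the relevant entries to 0 (bit-zero kernel) or to a fixed 0/1 pattern (bit-flip kernel) after each step.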
We then trained a model using the bit-zero kernel to generate community-small graphs (as in Table 2). Graphs generated by this model typically showed two communities which were oftentimes linked to each other with one or more edges (Figure 3, middle). We then controlled generation by ensuring no edges were added between the two communities, thereby always forming two disjoint subgraphs (Figure 4, middle). The bit-zero kernel confers a prior distribution which has no edges, and in the reverse-diffusion process, edges are slowly added to form the final graph sample \(x_{0}\). In order to perform manual control, we simply partitioned the empty graph \(x_{T}\) into two equal-sized sets of nodes, and ensured that no edges were ever added between nodes of different communities. The result is a set of graphs where the two communities were always disjoint.
\begin{table}
\begin{tabular}{l c c c} Method & Deg. \(\downarrow\) & Clus. \(\downarrow\) & Orbit \(\downarrow\) \\ \hline GraphRNN & 2.62 & 1.33 & 1.75 \\ GRAN & 3.76 & 1.29 & 1.46 \\ MolGAN & 5.42 & 1.87 & 1.67 \\ SPECTRE & 3.14 & 1.26 & **0.54** \\ DiGress & 1.26 & 1.22 & 1.30 \\ \hline Bit-flip & 2.73 & 1.23 & 0.94 \\ Bit-one & **1.00** & 1.21 & 0.81 \\ Bit-zero & 1.31 & **1.19** & 0.80 \\ \end{tabular}
\end{table}
Table 3: Bernoulli edge diffusion MMD ratio (stochastic block models)
Figure 4: Examples of conditionally generated graphs through manual control with GraphGUIDE. On our model trained to generate cliques of various sizes using the bit-one kernel, we used manual control to enforce that the generated graph always had a 6-clique (top). On the community-small diffusion model (trained with the bit-zero kernel), we conditionally generated graphs by enforcing throughout the generation process that the two communities would remain disjoint (i.e. no edges between the communities) (middle). On the molecule-like diffusion model trained with the bit-flip kernel, we conditionally generated graphs such that the generated molecules always contained a 6-membered backbone ring (bottom).
Figure 3: Examples of unconditionally generated graphs without manual control. We show graphs generated from a diffusion model trained on cliques of sizes ranging from 3 to 6, using the bit-one kernel (top). On the community-small dataset, we generated graphs from a diffusion model trained with the bit-zero kernel (middle). On a dataset of molecule-like graphs consisting of ring and non-ring backbones (dark blue) and secondary nodes (light blue), we generated graphs from a diffusion model trained with the bit-flip kernel (bottom).
Finally, we trained a model using the bit-flip kernel to generate a set of molecule-like graphs. Our molecule-like graphs consist of backbone and secondary nodes (atoms), where backbone nodes always have a degree of at most 4, and secondary nodes always have a degree of 1 and are linked to a backbone node. In terms of real organic chemistry, backbone nodes might correspond to carbon atoms, and secondary nodes might correspond to halogens. The backbone atoms can be linked together in rings of various sizes, or be acyclic and branched. Graphs generated from this model showed a diverse set of molecule-like structures (Figure 3, bottom). We then performed controllable generation and enforced throughout reverse diffusion that a 6-membered backbone ring would be formed (Figure 4, bottom). The bit-flip kernel confers a prior distribution \(x_{T}\) which is an Erdos-Renyi graph, and edges are slowly removed or added throughout the reverse-diffusion process to generate a graph sample \(x_{0}\). In order to perform manual control, we identified 6 backbone nodes from \(x_{T}\), and enforced throughout the generative process that those 6 backbone nodes were linked together in a ring, with no other edges between them. The resulting graphs always had a 6-membered backbone ring, usually with other backbone nodes or secondary nodes attached.
Throughout these experiments, because of the full manual control offered by the discrete diffusion, **100%** of the resulting graphs always had the desired property. In contrast, when we unconditionally generated graphs from these models, we only obtained 6-cliques 68% of the time, disjoint communities 21% of the time, and 6-membered-ring molecules 12% of the time.
In this section, we demonstrated the power of conditional generation via manual control with GraphGUIDE. Notably, this was made possible by the fully discrete nature of the Bernoulli kernels, which require that all diffusion intermediates are fully well-defined graphs. This is not only important for interpretability and for humans to easily manipulate the diffusion process, but is also critical for the robustness of the neural-network predictions. Just as the neural network is trained on well-defined graphs with binary edges, manually edited reverse-diffusion intermediates are also well-defined graphs with binary edges, and therefore are more likely in-distribution for the neural network, thus not leading to unexpected or undefined behavior. Furthermore, we presented _three_ Bernoulli kernels, each of which is well-suited for different controllable-generation tasks, where edges or motifs need to be retained or removed. The bit-one kernel is best for ensuring that a particular set of edges are retained; the bit-zero kernel is best for ensuring that a particular set of edges are removed; the bit-flip kernel is best for ensuring that a more complex motif (with some edges that need to be retained and others removed) forms. Using the best-suited kernel of the three helps ensure that manually controlled generation creates intermediates that are in-distribution for the model. For example, using the bit-one kernel generates a prior which is the complete graph, and so in the reverse-diffusion process, edges are gradually deleted. If the manual-control task is to retain a set of edges, then retaining those edges throughout the reverse-diffusion process ensures that diffusion intermediates are much more likely to remain in-distribution for the neural network.
Contrast this interpretability and controllability with other graph-diffusion frameworks. In continuous-diffusion frameworks, the diffusion intermediates contain edges that are fractional or negative. Not only is this far less interpretable, it is also much more difficult to control the generation in terms of desired structural features, such as the presence or absence of a particular motif. Any attempt to manually control the generative process as above would be foiled by the fact that fully well-defined graphs are not in-distribution for most of the diffusion process. Additionally, all existing graph-diffusion frameworks (continuous or discrete) rely on traditional methods for conditional generation, which inscrutably influence graph generation using a limited, predefined set of labels (Section 2).
Our experiments highlight the unique method of conditional generation offered by GraphGUIDE. Conditional generation using GraphGUIDE no longer requires an external classifier, or even a predetermined set of property labels. Because the diffusion intermediates are so easily manipulated, practically any structural property--which is defined by the presence and/or absence of edges or motifs--can be imposed with full manual control at any time in the reverse-diffusion process. This very much distinguishes our framework from other methods for conditional generation.
## 4 Discussion
The diffusion framework proposed in this work is distinguished from other diffusion methods by the controllability of the reverse-diffusion product. Graph generation through GraphGUIDE can be thought of as a reverse-diffusion process which iteratively decides which edges to add or remove in order to recover a final graph sample \(x_{0}\). This renders the generation process highly interpretable and controllable by humans.
Within the GraphGUIDE framework, nodes and their features can be thought of as static and eternal; the reverse-diffusion process determines which nodes to link up in order to form a final graph sample. As such, in our experiments, we trained our diffusion models to diffuse only on edges using our discrete Bernoulli kernels. That is, we did not perform diffusion on node features. This is because GraphGUIDE naturally defines reverse diffusion as a process which decides which nodes to link up with an edge.
Of course, this requires the set of nodes to have a limited set of possible feature values. For many graph-generation tasks, this assumption holds, including graphs where only the structure is important (e.g. communities, stochastic block models, etc.), and molecular graphs (i.e. there are only a few types of atoms in typical molecular-generation tasks). Note, however, that in order to accommodate nodes with many more possible features, diffusion can also be performed on node features (which can be continuous or discrete) and on (discrete) edges jointly.
Although we demonstrated GraphGUIDE on undirected graphs without self-edges or multi-edges, our work can be easily extended to accommodate both, simply by having the binary vector \(x_{t}\) (which denotes edge presence) contain an entry for every possible edge, whether it be a self-edge or a multi-edge (assuming there is a maximum number of multi-edges per pair of nodes). Similarly, this framework can also be readily applied to directed graphs by effectively doubling the size of \(x_{t}\). Our method may also be applied to graphs with edges that have edge attributes, as long as there is a relatively small set of discrete attributes. Again, this can be accomplished by having additional entries in \(x_{t}\), which denote every possible attribute for each edge. Alternatively, GraphGUIDE can be combined with kernels other than the Bernoulli kernels, such as multinomial kernels like those proposed in Hoogeboom et al. (2021) and Austin et al. (2021). Graphs with many possible edge attributes or continuous edge attributes, however, do not fit very well into the GraphGUIDE framework, as a fundamental assumption made by the Bernoulli kernel (or multinomial kernels) is that all edges are binary (or have a limited number of discrete states, respectively).
Another limitation of our framework may be in the set of properties that can be conditionally generated via manual control. We demonstrated an exquisite degree of manual control afforded by discrete diffusion on edges, and this allowed conditional generation of graphs with arbitrary structural properties (e.g. molecules with 6-membered backbone rings), without the need to predefine them. Hence, in order to take advantage of controllable generation using our framework, the property desired must be definable structurally--that is, it needs to be definable by a set of known edges or graph motifs. As such, it is difficult to use this framework to generate graphs which satisfy a high-level property (e.g. a molecular graph which is a target for the \(\beta_{2}\) adrenergic receptor), because it is not easy to identify what specific low-level structural properties (e.g. bonds or functional groups) confer this high-level property, which is a complex descriptor. Instead, high-level properties (e.g. drug targets) may be controlled for using the current standard methods of conditional generation, such as classifier-free conditional generation, i.e. training the diffusion model on a predefined set of labels (Ho et al., 2021). Notably, our framework remains orthogonal to classifier-free conditional generation, and can be easily combined with such methods so that conditional generation can be performed on a smaller set of pre-defined high-level properties as well as arbitrary low-level structural properties simultaneously.
## 5 Conclusion
In this work, we proposed GraphGUIDE, a novel framework for interpretable and controllable graph generation using diffusion models. To aid in generating interpretable diffusion intermediates, we presented three discrete diffusion kernels based on the Bernoulli distribution and applied them to graph edges. The resulting diffusion processes add noise to graphs by flipping, adding, or removing edges until reaching some prior distribution. In the reverse-diffusion process, the diffusion model iteratively decides which edges to keep or remove to recover a final graph sample. Notably, all diffusion intermediates are fully well-defined graphs, thus allowing all diffusion intermediates to be interpretable. More importantly, by using the appropriate kernel, the generative process is highly controllable--specific edges, motifs, and other properties can be retained or prevented at each stage of the reverse-diffusion process, while still being in-distribution for the neural network. Because of this high degree of control over all parts of the generative process, practically any structural property can be conditioned on, and without relying on any predefined set of labels. Together, GraphGUIDE allows the enforcement of any arbitrary structural property on-the-fly with 100% success. Additionally, we demonstrated that these advantages in interpretability and controllability are gained without any penalty in generative performance.
We illustrated the benefits of GraphGUIDE for several kinds of undirected graphs, particularly highlighting the application to molecular graphs for drug discovery or chemical design. Our work, however, may be applied to generating other kinds of graphs in an interpretable and controllable manner, such as knowledge graphs and causality graphs. In both these situations, it would be highly beneficial to be able to control for certain substructures easily and interpretably. Furthermore, the framework defined by GraphGUIDE may be applied to data types outside of graph edges, as well. Further exploration in discrete diffusion and controllable generation can continue to have impacts in many other real-world domains. |
2304.07678 | de Haas-van Alphen Oscillations for the Field Along c-axis in UTe2 | We performed de Haas-van Alphen (dHvA) experiments in the spin-triplet
superconductor UTe2 for magnetic field along the c-axis above 15T. Three
fundamental dHvA frequencies, named alpha1, alpha2 and beta corresponding to
the cross sections of cylindrical Fermi surfaces (FSs) with large cyclotron
effective masses (33-43 m0) were detected. No other fundamental dHvA
frequencies were detected at high frequency range, suggesting a
cylindrical-shaped electron FS without connecting at the Z point of the
Brillouin zone. However, the existence of small pocket FSs associated with
extremely heavy masses cannot be fully excluded. | Dai Aoki, Ilya Sheikin, Alix McCollam, Jun Ishizuka, Youichi Yanase, Gerard Lapertot, Jacques Flouquet, Georg Knebel | 2023-04-16T03:04:14Z | http://arxiv.org/abs/2304.07678v2 | # de Haas-van Alphen Oscillations for the Field Along \(c\)-axis in UTe\({}_{2}\)
###### Abstract
We performed de Haas-van Alphen (dHvA) experiments in the spin-triplet superconductor UTe\({}_{2}\) for magnetic field along the \(c\)-axis above 15 T. Three fundamental dHvA frequencies, named \(\alpha_{1}\), \(\alpha_{2}\) and \(\beta\) corresponding to the cross sections of cylindrical Fermi surfaces (FSs) with large cyclotron effective masses (33-43 \(m_{0}\)) were detected. No other fundamental dHvA frequencies were detected at high frequency range, suggesting a cylindrical-shaped electron FS without connecting at the \(Z\) point of the Brillouin zone. However, the existence of small pocket FSs associated with extremely heavy masses cannot be fully excluded.
The heavy-fermion paramagnet UTe\({}_{2}\) is one of the hottest materials in condensed matter physics, see recent review[1]. The superconducting transition occurs at \(T_{\rm c}=1.6\)-\(2.1\) K. The highlight is the huge upper critical field, \(H_{\rm c2}\), highly exceeding the Pauli limit for all field directions associated with the field-reentrant behavior for \(H\parallel b\)-axis. Another remarkable point is the multiple superconducting phases under pressure and in magnetic fields, detected as a thermodynamic response. These results strongly suggest spin-triplet superconductivity in UTe\({}_{2}\). Furthermore, possible topological superconductivity was suggested both theoretically and experimentally.
In order to understand unconventional superconductivity in UTe\({}_{2}\), it is important to clarify the electronic structure from a microscopic point of view. Recently, we reported the first observation of the de Haas-van Alphen (dHvA) effect, clarifying two kinds of cylindrical Fermi surfaces (FSs) associated with large cyclotron effective masses (32-57 \(m_{0}\))[2]. However, the dHvA signal for \(H\parallel c\)-axis was missing, because of the large \(H_{\rm c2}\) exceeding our maximum field of 15 T.
In this report, we present results of dHvA experiments at high fields up to 30 T for \(H\parallel c\)-axis in UTe\({}_{2}\). High-quality single crystals of UTe\({}_{2}\) with the residual resistivity ratio, RRR \(\sim\) 400-800 in this batch, were grown using the modified NaCl/KCl-flux method similar to Ref. 3. The dHvA experiments were performed by field-modulation technique in a dilution refrigerator at temperatures down to 75 mK and at high fields up to 30 T at the HFML in Nijmegen.
Figure 1 shows the dHvA oscillations and the corresponding FFT spectrum for the field range from 20 to 30 T. Three fundamental dHvA branches \(\alpha_{2}\), \(\beta\) and \(\alpha_{1}\) were detected at 3.67, 3.33 and 3.14 kT, respectively, together with the 2nd harmonic of the branch \(\alpha_{2}\) at 7.35 kT.
The cyclotron effective masses were determined to be 43 \(m_{0}\), 39 \(m_{0}\) and 33 \(m_{0}\) for branches \(\alpha_{2}\), \(\beta\) and \(\alpha_{1}\), respectively, from the dHvA measurements at different temperatures up to 150 mK.
The obtained dHvA frequencies for \(H\parallel c\)-axis are added to the angular dependence of the previously reported dHvA frequencies[2] in Fig. 2(a). The dHvA frequencies for \(H\parallel c\)-axis agree well with the previous results as an extension to \(c\)-axis. Figure 2(b) shows theoretical angular dependence of the dHvA frequencies calculated by the GGA+\(U\) methods with Coulomb repulsion, \(U=2\) eV. The corrugated cylindrical FS for the branch \(\alpha\) yields the splitting of the dHvA frequency due to the maximum and minimum cross-sectional areas. On the other hand, the frequencies for the branch \(\beta\) are nearly degenerate even for \(H\parallel c\)-axis. This can be understood from a wavy-shaped FS in the Brillouin zone based on the body-centered orthorhombic structure, as shown in Fig. 2(d). If the corrugation is moderate, the dHvA frequencies for maximal and minimal cross-sectional areas could be almost degenerate even for \(H\parallel c\)-axis. Meanwhile, the FS for the branch \(\alpha\) is strongly corrugated, and the frequency splits into two, \(\alpha_{1}\) and \(\alpha_{2}\), for \(H\parallel c\)-axis in our experiment.
No higher fundamental dHvA frequency was observed, while the 2nd harmonic of the branch \(\alpha_{2}\) was detected at 7.35 kT. An important question is whether the FSs of UTe\({}_{2}\) consist of only these detected cylindrical FSs. In the calculation with smaller \(U\) (\(=1.5\) eV), the electron FSs are connected at the \(Z\) point, forming a ring-shaped FS, as shown in the inset of Fig. 2(c). The corresponding angular dependence of the dHvA frequencies will consist of two sets of frequencies with \(1/\cos\theta\)-like behavior as a function of field angle \(\theta\) from \(c\) to \(a\)-axis. The predicted higher frequency is about 8 kT for \(c\)-axis, in which the cyclotron motion is stretched to different Brillouin zones, as shown in Fig. 2(e). The calculated band
Figure 1: (Color online) (a) dHvA oscillations after subtracting non-oscillating background around 80 mK and (b) the corresponding FFT spectrum for the field range from 20 to 30 T in UTe\({}_{2}\).
mass for this higher frequency is nearly twice as large as that for the lower one. Thus, the corresponding dHvA signal could be strongly damped, if it exists. On the other hand, the low frequency (\(\sim 2.5\,\)kT) close to \(H\parallel a\), originating from a ring-shaped orbit, should be easily detected because of the relatively low band mass with a favorable curvature factor. However, no dHvA signal was detected for \(H\parallel a\) in our dHvA experiments using the field-modulation technique either in a superconducting magnet or in a resistive magnet up to 30 T.
Therefore, the next important issue is how to reconcile the two-dimensional FS with the anisotropy of \(H_{\rm c2}\). The initial slope of \(H_{\rm c2}\), \(H^{\prime}_{\rm c2}\) (\(\equiv|dH_{\rm c2}/dT|_{T=T_{\rm c}}\)), should be proportional to \(1/v_{\rm F}^{2}\). Thus, the anisotropy of \(H^{\prime}_{\rm c2}\) would be explained by the so-called effective mass model [1], in which the topology of the averaged FS associated with the anisotropic effective mass determines the anisotropy of \(H_{\rm c2}\). A cylindrical FS elongated along the \(c\)-axis suggests a low \(H^{\prime}_{\rm c2}\) for \(H\parallel c\)-axis and high \(H^{\prime}_{\rm c2}\) for \(H\parallel a\) and \(b\)-axes. In fact, the lowest value of \(H^{\prime}_{\rm c2}\) for \(H\parallel c\)-axis (\(\sim 7.5\,\)T/K) and higher values for the \(b\) and \(a\)-axes (20-35 T/K) are reported from the results of specific heat measurements [4]. The anisotropy is, however, still small compared to that expected from two-dimensional FSs.
Furthermore, resistivity also shows rather isotropic behavior in terms of the current direction [5]. The resistivity ratio between \(J\parallel c\) and \(a\) (\(b\)) is only \(\rho_{c}/\rho_{a}\sim 2\) (\(\rho_{c}/\rho_{b}\sim 1\)) at room temperature. The value of \(\rho_{c}/\rho_{a}\) increases up to 10 at low temperatures, suggestive of electronic structure changes as a function of temperature, but it is still not very large.
From the present data we cannot rule out the existence of a small FS pocket associated with a very heavy effective mass, which is not detected in the dHvA experiments. Two cylindrical FSs yield the total Sommerfeld coefficient, \(\gamma\sim 100\) mJ K\({}^{-2}\)mol\({}^{-1}\), indicating that the main FSs are detected. If we expect that a \(\gamma\)-value of about 20 mJ K\({}^{-2}\)mol\({}^{-1}\) is missing, one can assume the existence of a spherical FS pocket at the Z point, for instance, with a dHvA frequency of 0.2 kT and an effective mass of 90 \(m_{0}\), for which the \(\gamma\)-value is calculated by \(\gamma=k_{\rm B}^{2}V/(3\hbar^{2})m^{*}k_{\rm F}\). It is hard to detect such a small FS pocket with a heavy mass in the dHvA experiments, especially by the field-modulation technique.
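A rough numerical check of this estimate is sketched below (illustrative only: the molar volume is computed from published lattice parameters, \(a\simeq 4.16\) Å, \(b\simeq 6.13\) Å, \(c\simeq 13.97\) Å with \(Z=4\), which are assumptions of this sketch rather than values reported here):

```python
import numpy as np

hbar, kB, e, m0, NA = 1.0546e-34, 1.3807e-23, 1.6022e-19, 9.109e-31, 6.022e23
a, b, c, Z = 4.16e-10, 6.13e-10, 13.97e-10, 4       # assumed lattice parameters (m)
V_mol = a * b * c / Z * NA                          # molar volume (m^3/mol)

F = 0.2e3                                           # hypothetical dHvA frequency (T)
m_eff = 90 * m0                                     # hypothetical effective mass (kg)
kF = np.sqrt(2 * e * F / hbar)                      # Onsager: F = (hbar/(2*pi*e)) * pi * kF**2
gamma = kB**2 * V_mol * m_eff * kF / (3 * hbar**2)  # Sommerfeld coefficient per mole
print(f"gamma ~ {gamma * 1e3:.0f} mJ K^-2 mol^-1")  # roughly 20 mJ K^-2 mol^-1
```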
Recently, quantum oscillations were detected by the torque method for the field directions from \(c\) to \(a\)-axis as well as from \(c\) to \(b\)-axis [6]. These results confirm our previous conclusion of two cylindrical FSs. Based on the angular dependence of the quantum oscillation frequencies observed in their experiment and our previous results [2], they have modeled the cylindrical FS topology with super-elliptical cross-sections with no corrugation for the hole FS. The observation of only one dHvA frequency along the \(c\)-axis, at odds with our report here, may be due to a small misalignment in one of the experiments or caused by a limited resolution of the FFT due to the limited field range, and calls for clarification in future experiments. Another important point for future studies is whether an extra FS exists.
More recently, new quantum oscillation experiments by the tunnel diode oscillator (TDO) technique at temperatures down to 0.35 K were reported [7]. There, low frequencies below 1 kT with relatively small effective masses, 5.7-6.8 \(m_{0}\), are observed at high fields. Such frequencies should be easily detected in our dHvA experiments with the field-modulation technique. Further experiments are required to check whether they exist at low fields.
In summary, we detected three fundamental dHvA frequencies for \(H\parallel c\)-axis with heavy effective masses, corresponding to cylindrical FSs. From the anisotropy of the initial slope of \(H_{\rm c2}\) and resistivity, we can still consider a possible pocket FS with an extremely large mass.
## Acknowledgements
We thank Y. Onuki, H. Harima, J. P. Brison, Y. Haga, H. Sakai, S.-i. Fujimori, V. Mineev, Y. Tokunaga, M. Kimata, A. Miyake, and S. Fujimoto for fruitful discussion. This work was supported by KAKENHI (JP19H00646,
Figure 2: (Color online) (a) The detected dHvA frequencies for \(H\parallel c\)-axis plotted on the previous angular dependence of frequencies in UTe\({}_{2}\)[2]. Panels (b) and (c) show the calculated angular dependence of dHvA frequencies by GGA+\(U\) with \(U=2\) eV and 1.5 eV, respectively. The insets display the corresponding FS. Panels (d) and (e) schematically show the cross sections for electron FS viewed from \(a\)-axis for \(U=2\) eV and 1.5 eV, respectively. Arrows correspond to the diameters of the cyclotron motions for the extremal cross-sectional areas for \(H\parallel c\).
JP20K20889, JP20H00130, JP20KK0061, JP22H04933), GIMRT (20H0406), ICC-IMR, ANR (FRESCO, No. ANR-20-CE30-0020), (NWO), HFML-RU/NWO member of EMFL.
|
2308.09580 | On a Generalization of Quasi-metric Space | We find an extension of the quasi-metric (to be called $g$-quasi metric) such
that the induced generalized topology may fail to form a topology. We show that
$g$-quasi metrizability is a $g$-topologically invariant property of
generalized topological spaces. Extending metric product and uniform continuity
for $g$-quasi metric spaces, we note that a $g$-quasi metric may fail to be
uniformly continuous in the extended sense unlike usual metric. Finally, we
extend the study of completeness, Lebesgue property and weak $G$-completeness
for $g$-quasi metric spaces. | Sugata Adhya, A. Deb Ray | 2023-08-18T14:21:07Z | http://arxiv.org/abs/2308.09580v1 | # On a generalization of quasi-metric space
###### Abstract.
We find an extension of the quasi-metric (to be called \(g\)-quasi metric) such that the induced generalized topology may fail to form a topology. We show that \(g\)-quasi metrizability is a \(g\)-topologically invariant property of generalized topological spaces. Extending metric product and uniform continuity for \(g\)-quasi metric spaces, we note that a \(g\)-quasi metric may fail to be uniformly continuous in the extended sense unlike usual metric. Finally, we extend the study of completeness, Lebesgue property and weak \(G\)-completeness for \(g\)-quasi metric spaces.
_AMS Subject Classification:_ 54A05, 54C08, 54E15.
_Keywords:_\(g\)-quasi metric, generalized topology, Lebesgue property, weak \(G\)-complete.
## 1. **Introduction**
Csaszar [5] proposed a notion of generalized topology by taking into account the idea of monotone mappings. It accommodates various open-like sets existing in the literature [4, 12, 14, 15]. Given a nonempty set \(X,\) it is defined as a subcollection of \(\mathcal{P}(X)\) which contains \(\emptyset\) and is closed under arbitrary union. Considering the members of a generalized topology as open, it then became natural to study the usual topological notions for generalized topology. Accordingly, analogues of closed set, closure, interior, product and subspaces, continuous functions, countability and separation axioms, compactness and connectedness have been studied for generalized topological spaces. Additionally, multiple weaker versions of the above notions have also been investigated in the light of weaker forms of open sets in the context of generalized topology. The interested readers may consult [18] and references therein.
In view of the facts that the open balls associated with any metric space form a base for a natural topology and that generalized topology is a generalization of usual topology, it is natural to search for a generalization of the metric structure that, under the standard approach, induces a generalized topology but may fail to produce a topology in general. This paper addresses this question.
Here, in Section 3, we obtain the related spaces (termed \(g\)-quasi metric spaces) by generalizing Wilson's widely studied notion of the quasi-metric structure [17]. Subsequently, we discuss certain separation properties of the generalized topology induced by a \(g\)-quasi metric. We demonstrate that \(g\)-quasi metrizability is an invariant property of the generalized topology. Next, in Section 4, we propose the natural extensions of the notions of metric product and uniform continuity in the context of \(g\)-quasi metric spaces. It is noted that, unlike a usual metric, a \(g\)-quasi metric may fail to be uniformly continuous in the extended sense when considered as a mapping from the product space to \(\mathbb{R}\).
The study made in Section 3 and Section 4 paves the way for the related investigations on Cauchyness and completeness. Accordingly, in Section 5, we take up the task of extending them in \(g\)-quasi metric spaces. Apart from the usual completeness, we introduce two stronger forms of completeness viz. Lebesgue property [2, 11, 16] and weak \(G\)-completeness [1, 9, 10] in \(g\)-quasi metric spaces. It is known that both Lebesgue property and weak \(G\)-completeness are intermediate metric properties between compactness and completeness that can be characterized in terms of pseudo-Cauchy [16] and \(G\)-Cauchy [9] sequences respectively. In what follows, we explore the mutual dependence of those completenesses for \(g\)-quasi metrics and examine their behaviour in product spaces through Cauchy, pseudo-Cauchy and \(G\)-Cauchy sequences.
## 2. **Preliminaries**
This section discusses the prerequisites that will be required subsequently.
**Definition 2.1**.: [5, 3, 13] A generalized topology \(\mu\) on a nonempty set \(X\) is a collection of its subsets such that \(\emptyset\in\mu\) and \(\mu\) is closed under arbitrary union. The pair \((X,\mu)\) is called a generalized topological space. Moreover, if \(X\in\mu\) then \(\mu\) is called a supra topology or a strong generalized topology.
Given a generalized topological space \((X,\mu),\) the elements of \(\mu\) are called generalized open sets (or \(\mu\)-open sets) and their complements are called generalized closed sets (or \(\mu\)-closed sets) in \((X,\mu).\)
**Definition 2.2**.: [6] Given a nonempty set \(X\) and \(\mathcal{B}\subset\mathcal{P}(X)\) with \(\emptyset\in\mathcal{B},\) all possible unions of elements of \(\mathcal{B}\) form a generalized topology \(\mu(\mathcal{B})\) on \(X.\) Here \(\mathcal{B}\) is called a base for \(\mu(\mathcal{B}).\) Equivalently, \(\mu(\mathcal{B})\) is said to be generated by the base \(\mathcal{B}.\)
**Definition 2.3**.: [5, 7] Let \((X,\mu)\) and \((Y,\mu^{\prime})\) be two generalized topological spaces.
(a) A mapping \(f:X\to Y\) is called generalized continuous or \((\mu,\mu^{\prime})\)-continuous if \(f^{-1}(G)\in\mu,\ \forall\ G\in\mu^{\prime}.\)
(b) A mapping \(f:X\to Y\) is called a generalized homeomorphism or \((\mu,\mu^{\prime})\)-homeomorphism if \(f\) is bijective and \(f,f^{-1}\) are generalized continuous.
(c) A property of generalized topological spaces that is invariant under generalized homeomorphism is called a \(g\)-topologically invariance.
**Definition 2.4**.: [19] Let \((X,\mu)\) be a strong generalized topological space.
(a) \((X,\mu)\) is called \(\mu\)-\(T_{0}\) if for \(x,y\in X\) with \(x\neq y\) there exists \(B\in\mu\) such that exactly one of \(x\) and \(y\) is in \(B\).
(b) \((X,\mu)\) is called \(\mu\)-\(T_{1}\) if for \(x,y\in X\) with \(x\neq y\) there exist \(B_{1},B_{2}\in\mu\) such that \(B_{1}\) contains \(x\) but not \(y\) and \(B_{2}\) contains \(y\) but not \(x.\)
Clearly every \(\mu\)-\(T_{1}\) strong generalized topological space is \(\mu\)-\(T_{0}.\)
**Theorem 2.5**.: [19] Every singleton set in a \(\mu\)-\(T_{1}\) strong generalized topological space is \(\mu\)-closed.
**Definition 2.6**.: [8] Let \(X\) be a nonempty set. A nonempty subset \(\mathcal{U}\) of \(\mathcal{P}(X\times X)\) is called a generalized quasi-uniformity (or \(g\)-quasi uniformity) if
(i) each member of \(\mathcal{U}\) contains the diagonal of \(X,\)
(ii) \(\mathcal{U}\) is closed under superset,
(iii) for \(U\in\mathcal{U}\) there exists \(V\in\mathcal{U}\) such that \(V\circ V\subset U.\)
In this case, \((X,\mathcal{U})\) is called a generalized quasi-uniform space (or \(g\)-quasi uniform space).
**Theorem 2.7**.: [8] Given a nonempty set \(X\) and \(\mathcal{B}\)\((\neq\emptyset)\subset\mathcal{P}(X\times X),\)\(\mathcal{B}\) forms a base for some \(g\)-quasi uniformity on \(X\) if and only if (i) \(\Delta(X)\subset B,\ \forall\ B\in\mathcal{B},\) and (ii) \(B\in\mathcal{B}\implies\exists\ V\in\mathcal{B}\) such that \(V\circ V\subset B.\)
Moreover, such \(\mathcal{B}\) is a base for the \(g\)-quasi uniformity \(\{V\subset X:B\subset V\) for some \(B\in\mathcal{B}\}\) on \(X.\)
We finish this section by recalling certain preliminaries on Lebesgue property and weak \(G\)-completeness for metric spaces.
**Definition 2.8**.: [11] A metric space on which every real-valued continuous function is uniformly continuous is said to be Lebesgue (or Atsuji space).
**Definition 2.9**.: [11] A sequence \((x_{n})\) in a metric space \((X,d)\) is said to be pseudo-Cauchy if given \(\epsilon>0,k\in\mathbb{N}\) there exist distinct \(m,n\)\((>k)\in\mathbb{N}\) such that \(d(x_{m},x_{n})<\epsilon.\)
**Theorem 2.10**.: [11, 16] A metric space is Lebesgue if and only if every pseudo-Cauchy sequence having distinct terms clusters in it.
**Definition 2.11**.: [1, 9, 10] A sequence \((x_{n})\) in a metric space \((X,d)\) is called \(G\)-Cauchy if \(\lim\limits_{n\to\infty}d(x_{n+p},x_{n})=0,\ \forall\ p\in\mathbb{N}\) (or equivalently, \(\lim\limits_{n\to\infty}d(x_{n+1},x_{n})=0\)). A metric space in which every \(G\)-Cauchy sequence converges is said to be weak \(G\)-complete.
Both Lebesgue property and weak \(G\)-completeness are strictly intermediate between compactness and completeness of metric spaces [9, 11].
## 3. \(g\)-Quasi Metric Spaces and the Induced Generalized Topology
**Definition 3.1**.: [17] Let \(X\) be a nonempty set. A mapping \(d:X\times X\to\mathbb{R}\) is called a quasi-metric on \(X,\) if:
(a) \(\forall\ x,y\in X,\ d(x,y)\geq 0\) and \(d(x,y)=0\iff x=y;\)
(b) \(\forall\ x,y,z\in X,\ d(x,y)\leq d(x,z)+d(z,y).\)
Here \(d\) is called a quasi-metric on \(X.\) Moreover, the pair \((X,d)\) is called a quasi-metric space.
**Definition 3.2**.: Let \(X\) be a nonempty set. A mapping \(d:X\times X\to\mathbb{R}\) is called a \(g\)-quasi metric on \(X,\) if there exists \(r\geq 0\) such that
(a) \(\forall\ x,y\in X,\ d(x,y)\geq r\) and \(d(x,y)=r\iff x=y;\)
(b) \(\forall\ x,y,z\in X,\ d(x,y)\leq d(x,z)+d(z,y).\)
Here \(d\) is called a \(g\)-quasi metric on \(X\) and \(r,\) the index of \(d.\) Moreover, the pair \((X,d)\) is called a \(g\)-quasi metric space (of index \(r\)).
Clearly a \(g\)-quasi metric of index \(0\) is a quasi-metric and vice versa.
**Definition 3.3**.: A \(g\)-quasi metric \(d\) on a nonempty set \(X\) (and hence the related \(g\)-quasi metric space \((X,d)\)) is said to be symmetric if \(d(x,y)=d(y,x),\ \forall\ x,y\in X.\)
Clearly a symmetric \(g\)-quasi metric of index \(0\) is a metric and vice versa.
**Theorem 3.4**.: Let \((X,d)\) be a quasi-metric space. Then for \(r\geq 0,\ d^{\prime}=d+r\) forms a \(g\)-quasi metric on \(X\) of index \(r.\)
Proof.: (a) Clearly \(\forall\ x,y\in X,\ d^{\prime}(x,y)\geq r\) and \(d^{\prime}(x,y)=r\iff x=y;\)
(b) Choose \(x,y,z\in X.\) Then \(d^{\prime}(x,y)=d(x,y)+r\leq d(x,z)+d(z,y)+r\leq\{d(x,z)+r\}+\{d(z,y)+r\}\leq d ^{\prime}(x,z)+d^{\prime}(z,y).\)
Hence the result follows.
However given a \(g\)-quasi metric \(d\) on a nonempty set \(X,\)\(d^{\prime}=d-r\) may not form a quasi-metric on it for all choices of \(r\geq 0:\)
**Example 3.5**.: Let \(X=[2,4]\) and \(d:X\times X\rightarrow\mathbb{R}\) be given by \(d(x,y)=(x-y)^{2}+100,\ \forall\ x,y\in X.\) Then for no values of \(r\geq 0,\)\(d^{\prime}=d-r\) forms a quasi-metric on \(X.\) It follows by observing that for \(r=100,\ d^{\prime}(2,4)>d^{\prime}(2,3)+d^{\prime}(3,4),\) while for all other values of \(r,\ d^{\prime}(x,x)\neq 0,\ \forall\ x\in X.\)
However \(d\) is a \(g\)-quasi metric on \(X\) of index \(100:\)
(a) \(\forall\ x,y\in X,\ d(x,y)\geq 100\) and \(d(x,y)=100\iff x=y,\)
(b) \(\forall\ x,y,z\in X,\ d(x,y)+d(y,z)\geq 100+100\geq(x-z)^{2}+100=d(x,z).\)
**Definition 3.6**.: Let \((X,d)\) be a \(g\)-quasi metric space of index \(r\geq 0.\) Given \(x\in X\) and \(p>0,\) we denote the set \(\{y\in X:d(x,y)<p\}\) by \(B_{d}(x,p)\) (or simply by \(B(x,p)\)). Clearly \(\mathcal{B}(d)=\{B(x,p):x\in X,p>0\}\bigcup\{\emptyset\}\) forms a base for some strong generalized topology \(\mu(d)\) on \(X.\) It is called the generalized topology induced by \(d.\)
**Note 3.7**.: It should be noted, at this stage, that if \((X,d)\) is a symmetric \(g\)-quasi metric space of index \(0\) (i.e., a metric space) then \(\mathcal{B}(d),\) defined as before, forms a base for the topology induced by \(d.\)
In what follows, we show that for all positive values of \(r,\) a symmetric \(g\)-quasi metric space \((X,d)\) can be found so that (i) \(d\) is of index \(r,\) (ii) \(\mu(d)\) does not form a topology on \(X.\) We consider the following example.
**Example 3.8**.: Let \(r>0\) and \(d:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) be defined by
\[d(x,y)=\begin{cases}r&\text{if }x=y\\ 2r&\text{if }0<|x-y|\leq r\\ |x-y|&\text{if }|x-y|>r\end{cases}\]
We show that \((\mathbb{R},d)\) is a \(g\)-quasi metric space of index \(r\) though \(\mu(d)\) does not form a topology on \(\mathbb{R}.\)
(a) Clearly \(d(x,y)\geq r\) and \(d(x,y)=r\iff x=y,\)\(\forall\ x,y\in\mathbb{R}.\)
(b) Let \(x,y,z\in\mathbb{R}.\) We show that \(d(x,y)\leq d(x,z)+d(z,y).\)
Note for \(0\leq|x-y|\leq r\) the above inequality follows from (a). So let us assume \(|x-y|>r.\)
If \(|x-z|=0\) or \(|z-y|=0\) then the inequality is immediate.
If \(|x-z|,|z-y|>r\) then it follows from the order property of \(\mathbb{R}.\)
If \(0<|x-z|,|z-y|\leq r\) then \(d(x,y)=|x-y|\leq|x-z|+|z-y|\leq 4r=d(x,z)+d(z,y).\)
If \(|x-z|>r\) and \(0<|z-y|\leq r\) then \(d(x,y)\leq d(x,z)+|z-y|\leq d(x,z)+r\leq d(x,z)+2r=d(x,z)+d(z,y).\)
If \(|z-y|>r\) and \(0<|x-z|\leq r\) then it follows similarly as before.
Thus \(d\) forms a \(g\)-quasi metric on \(\mathbb{R}\) of index \(r.\)
We now show that \(\mu(d)\) does not form a topology on \(\mathbb{R}.\)
Since \(B\left(r,2r+\frac{r}{10}\right)\bigcap\)\(B\left(\frac{21r}{5},2r+\frac{r}{10}\right)=\left(r-\frac{21r}{10},r+\frac{21r}{10} \right)\bigcap\left(\frac{21r}{5}-\frac{21r}{10},\frac{21r}{5}+\frac{21r}{10}\right)\)\(=\left(\frac{21r}{10},r+\frac{21r}{10}\right),\) it suffices to show that \(\left(\frac{21r}{10},r+\frac{21r}{10}\right)\) does not contain any nonempty set of the form \(B(x,c)\) where \(x\in\mathbb{R},c>0.\)
Suppose otherwise. Then \(B(x,c)\subset\left(\frac{21r}{10},r+\frac{21r}{10}\right)\) for some \(x\in\mathbb{R}\) and \(c>r.\)
_Case I: \(r<c\leq 2r.\)_ Then choosing \(y\in\mathbb{R}\) with \(r<|x-y|<c,\) we have \(y\in B(x,c).\) Also \(x\in B(x,c).\) Thus \(x,y\in B(x,c)\subset\left(\frac{21r}{10},r+\frac{21r}{10}\right),\) a contradiction to \(|x-y|>r.\)
_Case II: \(c>2r.\)_ Then for each \(y\in(x-r,x+r),\ d(x,y)<c.\) Consequently \((x-r,x+r)\subset B(x,c)\implies(x-r,x+r)\subset\left(\frac{21r}{10},r+\frac{21 r}{10}\right),\) a contradiction to \(\left|\left(\frac{21r}{10},r+\frac{21r}{10}\right)\right|=r.\)
The contradictions arrived at in both cases prove our claim.
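A quick numerical spot-check of this example (illustrative only: it samples finitely many points, centres and radii rather than proving the statements):

```python
import itertools, random

r = 1.0   # the construction works for any r > 0; fix r = 1 for a spot-check

def d(x, y):
    if x == y:
        return r
    return 2 * r if abs(x - y) <= r else abs(x - y)

random.seed(0)
pts = [random.uniform(-10, 10) for _ in range(40)]

# g-quasi metric axioms of index r, checked on a random sample of points
assert all(d(x, y) >= r and ((d(x, y) == r) == (x == y))
           for x, y in itertools.product(pts, repeat=2))
assert all(d(x, z) <= d(x, y) + d(y, z)
           for x, y, z in itertools.product(pts, repeat=3))

# no nonempty ball B(x, c) fits inside (2.1r, 3.1r), the intersection of the
# two balls considered in the text (sampled centres x and radii c)
grid = [i / 200 for i in range(-1200, 1600)]
def ball(x, c):
    return {y for y in grid if d(x, y) < c}
target = {y for y in grid if 2.1 * r < y < 3.1 * r}
assert not any(ball(x, c) and ball(x, c) <= target
               for x in grid[::20] for c in (1.2 * r, 1.8 * r, 3.0 * r))
```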
**Remark 3.9**.: Let \((X,\overline{d})\) be a \(g\)-quasi metric space with index \(r.\) For \(\epsilon>r,\) set \(V_{\epsilon}=\{(x,y)\in X\times X:\overline{d}(x,y)<\epsilon\}.\)
If \(r=0,\) then it is clear that \(\mathcal{B}_{\mathcal{U}}=\{V_{\epsilon}:\epsilon>r\}\) forms a base for a \(g\)-quasi uniformity on \(X\) alike the classical case.
However for all other non-negative values of \(r,\) there is some \(g\)-quasi metric space with index \(r\) such that \(\mathcal{B}_{\mathcal{U}}\) fails to form a base for some \(g\)-quasi uniformity on \(X\) as we see now.
Consider the \(g\)-quasi metric space \((\mathbb{R},d),\) as defined in Example 3.8, with index \(r>0.\) If possible, let \(\mathcal{B}_{\mathcal{U}}=\{V_{\epsilon}:\epsilon>r\}\) form a base for a \(g\)-quasi uniformity on \(\mathbb{R},\) where \(V_{\epsilon}=\{(x,y)\in\mathbb{R}\times\mathbb{R}:d(x,y)<\epsilon\},\ \forall\ \epsilon>r.\) Then there is \(\delta>r\) such that \(V_{\delta}\circ V_{\delta}\subset V_{\frac{3r}{2}}.\)
Set \(x=0,y=r+\frac{\delta-r}{2},z=r+\delta.\) Then \(d(x,y)=d(y,z)=r+\frac{\delta-r}{2}<\delta\) and hence \((x,y),(y,z)\in V_{\delta}\) implies \((x,z)\in V_{\frac{3r}{2}}.\) i.e., \(d(x,z)<\frac{3r}{2},\) i.e., \(r+\delta<\frac{3r}{2}\) i.e., \(\delta<\frac{r}{2},\) a contradiction. Hence \(\mathcal{B}_{\mathcal{U}}\) is not a base for a \(g\)-quasi uniformity on \(\mathbb{R}.\)
**Theorem 3.10**.: Let \(d\) be a \(g\)-quasi metric on \(X.\) Then \((X,\mu(d))\) is \(\mu\)-\(T_{1}.\)
Proof.: Let \(d\) be of index \(r.\) Choose \(x,y\in X\) such that \(x\neq y.\)
Clearly \(d(x,y),\ d(y,x)>r.\) Choose \(p\in\mathbb{R}\) such that \(r<p<\min\{d(x,y),d(y,x)\}.\) Then \(B(x,p),\ B(y,p)\) are generalized open sets in \((X,\mu(d))\) containing \(x,\ y\) respectively such that \(x\notin B(y,p)\) and \(y\notin B(x,p).\)
Let \(d\) be a \(g\)-quasi metric on \(X.\) Then we may conclude the following, stated as corollaries:
**Corollary 3.11**.: Each singleton set in \((X,\mu(d))\) is \(\mu\)-closed.
**Corollary 3.12**.: \((X,\mu(d))\) is \(\mu\)-\(T_{0}.\)
**Remark 3.13**.: \(g\)-Quasi metrices of different indices may induce the same generalized topology. For example, choosing \(X=\{x,y\}\) and \(d_{1},d_{2}:X\times X\to\mathbb{R}\) as given by \(d_{1}(x,x)=d_{1}(y,y)=3,d_{1}(x,y)=d_{1}(y,x)=4\) and \(d_{2}(x,x)=d_{2}(y,y)=5,d_{2}(x,y)=d_{2}(y,x)=6,\) we observe that both \(d_{1},d_{2}\) induce discrete topology on \(X\) though they have distinct indices.
**Definition 3.14**.: A generalized topological space \((X,\mu)\) is called \(g\)-quasi metrizable if for some \(g\)-quasi metric \(d\) on \(X,\ \mu(d)=\mu.\)
**Theorem 3.15**.: Let \((X,\mu),(Y,\mu^{\prime})\) be two generalized topological spaces and \(f:X\to Y\) be a generalized homeomorphism. If \((X,\mu)\) is \(g\)-quasi metrizable, then so is \((Y,\mu^{\prime}).\)
Proof.: Let \((X,\mu)\) be \(g\)-quasi metrizable and \(d:X\times X\to\mathbb{R},\) a \(g\)-quasi metric on \(X\) of index \(r\geq 0\) such that \(\mu(d)=\mu.\)
Define \(d^{\prime}:Y\times Y\to\mathbb{R}\) by \(d^{\prime}(y_{1},y_{2})=d(f^{-1}(y_{1}),f^{-1}(y_{2})),\ \forall\ y_{1},y_{2}\in Y.\) Then
(a) \(\forall\ y_{1},y_{2}\in Y,\ d^{\prime}(y_{1},y_{2})\geq r\) and \(d^{\prime}(y_{1},y_{2})=r\iff f^{-1}(y_{1})=f^{-1}(y_{2})\iff y_{1}=y_{2};\)
(b) \(\forall\ y_{1},y_{2},y_{3}\in Y,\ d^{\prime}(y_{1},y_{2})+d^{\prime}(y_{2},y_{ 3})=d(f^{-1}(y_{1}),f^{-1}(y_{2}))+d(f^{-1}(y_{2}),f^{-1}(y_{3}))\geq d(f^{-1} (y_{1}),f^{-1}(y_{3}))=d^{\prime}(y_{1},y_{3}).\) Thus \(d^{\prime}\) forms a \(g\)-quasi metric on \(Y\) of index \(r.\)
Choose \(V\in\mu^{\prime}\) and \(y\in V.\) Then for some \(x\in X,\ p>r\) we have \(f^{-1}(y)\in B_{d}(x,p)\subset f^{-1}(V)\implies y\in f(B_{d}(x,p))\subset V \implies y\in B_{d^{\prime}}(f(x),p)\subset V.\)
Thus \(\mathcal{B}(d^{\prime})\) forms a base for \(\mu^{\prime}.\) Hence the result follows.
## 4. Product of \(g\)-Quasi Metrics
**Theorem 4.1**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r.\) Define \(d_{XY}:(X\times Y)\times(X\times Y)\to\mathbb{R}\) by
\[d_{XY}((x_{1},y_{1}),(x_{2},y_{2}))=\max\{d_{X}(x_{1},x_{2}),d_{Y}(y_{1},y_{2} )\},\]
\(\forall\ x_{1},x_{2}\in X\) and \(y_{1},y_{2}\in Y.\) Then \(d_{XY}\) defines a \(g\)-quasi metric on \(X\times Y\) of index \(r.\)
Proof.: Straightforward.
**Definition 4.2**.: Given two \(g\)-quasi metric spaces \((X,d_{X})\) and \((Y,d_{Y})\) of the same index \(r,\)\(d_{XY}\) is called the \(g\)-quasi metric product of \(d_{X}\) with \(d_{Y}\) or simply product \(g\)-quasi metric on \(X\times Y.\)
Clearly if \((X,d_{X})\) and \((Y,d_{Y})\) are metric spaces, then \(d_{XY}\) defines the product metric on \(X\times Y\).
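To see the product construction on a concrete instance, the following Python sketch (our own illustration; none of the names below come from the paper) encodes the \(g\)-quasi metric of Example 3.8 in the closed form used in the computations above (\(d(x,x)=r\), \(d(x,y)=2r\) when \(0<|x-y|\leq r\), and \(d(x,y)=|x-y|\) otherwise), forms the product \(d_{XY}\) by taking the maximum, and checks numerically that \(d_{XY}\geq r\) with equality exactly on the diagonal and that the triangle inequality holds.

```python
import random

def d(x, y, r=1.0):
    """g-quasi metric of Example 3.8 on the reals, in the closed form used above:
    d(x, x) = r, d(x, y) = 2r when 0 < |x - y| <= r, and d(x, y) = |x - y| otherwise."""
    if x == y:
        return r
    gap = abs(x - y)
    return 2 * r if gap <= r else gap

def d_prod(p, q, r=1.0):
    """Product g-quasi metric d_XY of Theorem 4.1 on R x R (componentwise maximum)."""
    return max(d(p[0], q[0], r), d(p[1], q[1], r))

random.seed(0)
r = 1.0
for _ in range(10_000):
    p, q, s = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    assert d_prod(p, q, r) >= r                                    # d_XY >= r ...
    assert d_prod(p, p, r) == r                                    # ... with equality on the diagonal
    assert d_prod(p, q, r) + d_prod(q, s, r) >= d_prod(p, s, r)    # triangle inequality
print("all sampled triples satisfy the g-quasi metric conditions of index r")
```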
**Definition 4.3**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of indices \(r_{1}\) and \(r_{2}\) respectively. A mapping \(f:X\to Y\) is said to be \(g\)-uniformly continuous if for \(\epsilon>r_{2}\) there exists \(\delta>r_{1}\) such that \(d_{X}(x_{1},x_{2})<\delta\implies d_{Y}(f(x_{1}),f(x_{2}))<\epsilon,\)\(\forall\ x_{1},x_{2}\in X.\)
Clearly if \((X,d_{X})\) and \((Y,d_{Y})\) are metric spaces, then every \(g\)-uniformly continuous mapping from \(X\) to \(Y\) is uniformly continuous as a mapping between metric spaces.
It is known that if \((X,d)\) is a metric space, then the distance function \(d:X\times X\to\mathbb{R}\) is uniformly continuous where \(X\times X\) is equipped with the product metric and \(\mathbb{R}\) with the usual metric. However given a \(g\)-quasi metric space \((X,d)\), the mapping
\(d:X\times X\to\mathbb{R}\) may not be \(g\)-uniformly continuous where \(X\times X\) is equipped with the product \(g\)-quasi metric and \(\mathbb{R}\) with the usual metric (recall that the usual metric is a \(g\)-quasi metric of index \(0\)). We consider the following example.
**Example 4.4**.: Consider the \(g\)-quasi metric space \((\mathbb{R},d),\) defined in Example 3.8, of index \(r>0.\) We show that \(d:(\mathbb{R}\times\mathbb{R},d^{\prime})\to(\mathbb{R},d_{u})\) is not \(g\)-uniformly continuous, where \(d^{\prime}\) is the \(g\)-quasi metric product of \(d\) with itself on \(\mathbb{R}\times\mathbb{R}\) and \(d_{u}\) is the usual metric on \(\mathbb{R}\).
Suppose otherwise. Then for \(\epsilon=\frac{r}{2},\)\(\exists\)\(\delta>r\) such that \(d^{\prime}((x,y),(x^{\prime},y^{\prime}))<\delta\implies|d(x,y)-d(x^{\prime},y ^{\prime})|<\epsilon,\)\(\forall\)\((x,y),(x^{\prime},y^{\prime})\in\mathbb{R}\times\mathbb{R}.\)
In particular, for \((x^{\prime},y^{\prime})=(0,0),\)\(d^{\prime}((x,y),(0,0))<\delta\implies|d(x,y)-d(0,0)|<\frac{r}{2},\)\(\forall\)\((x,y)\in\mathbb{R}\times\mathbb{R}.\)
That is, \(\max\{d(x,0),d(y,0)\}<\delta\implies|d(x,y)-d(0,0)|<\frac{r}{2},\)\(\forall\)\((x,y)\in\mathbb{R}\times\mathbb{R}.\)
Choose \(n\in\mathbb{N}\backslash\{1\}\) such that \(\frac{\delta-r}{(n-1)^{2}}<\frac{r}{2}.\) Then \(\frac{\delta-r}{n(n-1)}<\frac{r}{2}.\)
Set \(x=r+\frac{\delta-r}{n}\) and \(y=r+\frac{\delta-r}{n-1}.\)
Then \(|x-0|=r+\frac{\delta-r}{n}>r\) and \(|y-0|=r+\frac{\delta-r}{n-1}>r.\)
Consequently, \(d(x,0)=r+\frac{\delta-r}{n}<\delta\) and \(d(y,0)=r+\frac{\delta-r}{n-1}<\delta\) whence, \(\max\{d(x,0),\)\(d(y,0)\}<\delta.\)
However, \(0<|x-y|=\frac{\delta-r}{n(n-1)}<\frac{r}{2}\leq r\implies d(x,y)=2r,\) so that \(|d(x,y)-d(0,0)|=2r-r=r\geq\frac{r}{2}\) (as \(d(0,0)=r\)), a contradiction.
Hence \(d:(\mathbb{R}\times\mathbb{R},d^{\prime})\to(\mathbb{R},d_{u})\) is not \(g\)-uniformly continuous.
## 5. Completeness, Lebesgue Property and (Weak) \(G\)-Completeness in \(g\)-Quasi Metric Spaces
In this section, we extend the study of completeness, the Lebesgue property and (weak) \(G\)-completeness to \(g\)-quasi metric spaces using the extended notions of Cauchy, \(G\)-Cauchy and pseudo-Cauchy sequences.
**Definition 5.1**.: Let \((x_{n})\) be a sequence in a \(g\)-quasi metric space \((X,d)\) of index \(r\) and \(c\in X.\) Then
(i) \((x_{n})\) is said to be convergent to \(c\) in \((X,d)\) if it is so in \((X,\mu(d));\)
(ii) \(c\) is called a cluster point of \((x_{n})\) in \((X,d)\) if it is so in \((X,\mu(d)).\)
Clearly if \((x_{n})\) is convergent to \(c,\) then \(c\) is a cluster point of \((x_{n})\) (in \((X,d)).\)
**Definition 5.2**.: Let \((X,d)\) be a \(g\)-quasi metric space of index \(r\) and \((x_{n})\) be a sequence in \(X.\) Then
(i) \((x_{n})\) is called Cauchy if given \(\epsilon>r\) there exists \(k\in\mathbb{N}\) such that \(d(x_{m},x_{n})<\epsilon,\)\(\forall\)\(m,n\geq k;\)
(ii) \((x_{n})\) is called \(G\)-Cauchy if given \(\epsilon>r\) there exists \(k\in\mathbb{N}\) such that \(d(x_{n},x_{n+1})<\epsilon,\)\(\forall\)\(n\geq k;\)
(iii) \((x_{n})\) is called pseudo-Cauchy if given \(\epsilon>r\) and \(k\in\mathbb{N}\) there exist \(p,q\)\((p\neq q)\in\mathbb{N}\) with \(p,q\geq k\) such that \(d(x_{p},x_{q})<\epsilon.\)
**Definition 5.3**.: A \(g\)-quasi metric space \((X,d)\) is said to be
(i) complete if every Cauchy sequence converges to some point in it;
(ii) \(G\)-complete if every \(G\)-Cauchy sequence converges to some point in it;
(iii) weak \(G\)-complete if every \(G\)-Cauchy sequence has a cluster point in it;
(iv) Lebesgue if every pseudo-Cauchy sequence having distinct terms has a cluster point in it;
(v) strongly Lebesgue if every pseudo-Cauchy sequence has a cluster point in it.
Clearly for \(g\)-quasi metric spaces we have the following chain of implications:
\[\begin{CD}\text{Strongly Lebesgue}@>{}>{}>\text{Lebesgue}@>{}>{}>\text{ Weak $G$-completeness}\\ @V{}V{}V\\ \text{Completeness}@<{}<{}<G\text{-completeness}\end{CD}\]
In what follows, we show that for each \(r>0\), \((\mathbb{R},d)\) of index \(r\), as defined in Example 3.8, is not weak \(G\)-complete.
**Example 5.4**.: Consider the sequence \((x_{n})\) in \((\mathbb{R},d)\) where \(x_{n}=rn-\frac{r}{n},\ \forall\ n\in\mathbb{N}\).
Then \(\forall\ n\in\mathbb{N}\), \(|x_{n+1}-x_{n}|=r+r\left(\frac{1}{n}-\frac{1}{n+1}\right)>r,\) so that \(d(x_{n},x_{n+1})=d(x_{n+1},x_{n})=r+r\left(\frac{1}{n}-\frac{1}{n+1}\right).\)
Choose \(\epsilon>r.\) Then there is \(k\in\mathbb{N}\) such that \(\frac{r}{n(n+1)}<\epsilon-r,\ \forall\ n\geq k\) and hence, \(d(x_{n},x_{n+1})<\epsilon,\ \forall\ n\geq k.\) Thus \((x_{n})\) is \(G\)-Cauchy in \((\mathbb{R},d).\)
If possible, let \(c\) be a cluster point of \((x_{n})\) in \((\mathbb{R},d).\)
Since \(c\) is a cluster point and \((x_{n})\) is a sequence of distinct terms, the generalized open set \(B(c,\frac{3r}{2})\) contains infinitely many elements of \((x_{n}).\)
However \(B(c,\frac{3r}{2})=\{y\in\mathbb{R}:d(c,y)<\frac{3r}{2}\}=\left(c-\frac{3r}{2},c-r\right)\bigcup\left(c+r,c+\frac{3r}{2}\right)\bigcup\{c\},\) a bounded set which, since \(x_{n}\to\infty,\) contains only finitely many elements of \((x_{n}),\) a contradiction.
Hence \((\mathbb{R},d)\) is not weak \(G\)-complete.
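The computations in Example 5.4 are easy to reproduce numerically; the short Python sketch below (our own illustration, with \(r=1\) and the same closed form of \(d\) as in the earlier sketch) prints the consecutive distances, which decrease towards \(r\), and shows that a ball \(B(c,\frac{3r}{2})\) around a fixed candidate cluster point captures only finitely many terms of the sequence.

```python
def d(x, y, r=1.0):
    # g-quasi metric of Example 3.8: d(x,x)=r, d(x,y)=2r if 0<|x-y|<=r, else |x-y|
    if x == y:
        return r
    gap = abs(x - y)
    return 2 * r if gap <= r else gap

r = 1.0
x = lambda n: r * n - r / n                       # the sequence of Example 5.4

# consecutive distances tend to r from above, so (x_n) is G-Cauchy
print([round(d(x(n), x(n + 1), r), 4) for n in range(1, 8)])
# -> [1.5, 1.1667, 1.0833, 1.05, 1.0333, 1.0238, 1.0179]

# a candidate cluster point c only sees finitely many terms in B(c, 3r/2), since
# B(c, 3r/2) = (c - 3r/2, c - r) U (c + r, c + 3r/2) U {c} is bounded while x_n -> infinity
c = 10.0
print([n for n in range(1, 200) if d(c, x(n), r) < 1.5 * r])      # -> [9]
```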
**Lemma 5.5**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r.\) A sequence \(((x_{n},y_{n}))\) is Cauchy in \((X\times Y,d_{XY})\) if and only if \((x_{n})\) and \((y_{n})\) are Cauchy in \((X,d_{X})\) and \((Y,d_{Y})\) respectively.
Proof.: Let \(((x_{n},y_{n}))\) be Cauchy in \((X\times Y,d_{XY}).\) Choose \(\epsilon>r.\) Then \(\exists\ k\in\mathbb{N}\) such that \(d_{XY}((x_{m},y_{m}),(x_{n},y_{n}))<\epsilon,\ \forall\ m,n\geq k.\) That is, \(d_{X}(x_{m},x_{n}),d_{Y}(y_{m},y_{n})<\epsilon,\ \forall\ m,n\geq k.\) Then \((x_{n})\) and \((y_{n})\) are Cauchy in \((X,d_{X})\) and \((Y,d_{Y})\) respectively.
_Conversely_, let \((x_{n})\) and \((y_{n})\) be Cauchy in \((X,d_{X})\) and \((Y,d_{Y})\) respectively. Choose \(\epsilon>r.\) Then \(\exists\ p,q\in\mathbb{N}\) such that \(d_{X}(x_{m},x_{n})<\epsilon,\ \forall\ m,n\geq p\) and \(d_{Y}(y_{m},y_{n})<\epsilon,\ \forall\ m,n\geq q.\)
Set \(k=\max\{p,q\}.\) Then \(d_{XY}((x_{m},y_{m}),(x_{n},y_{n}))<\epsilon,\ \forall\ m,n\geq k.\)
Hence \(((x_{n},y_{n}))\) is Cauchy in \((X\times Y,d_{XY}).\)
A similar chain of arguments yields the following results, which we state without proof.
**Lemma 5.6**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r.\) A sequence \(((x_{n},y_{n}))\) is \(G\)-Cauchy in \((X\times Y,d_{XY})\) if and only if \((x_{n})\) and \((y_{n})\) are \(G\)-Cauchy in \((X,d_{X})\) and \((Y,d_{Y})\) respectively.
**Lemma 5.7**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r.\) If \(((x_{n},y_{n}))\) is a pseudo-Cauchy sequence in \((X\times Y,d_{XY})\) then \((x_{n})\) and \((y_{n})\) are pseudo-Cauchy in \((X,d_{X})\) and \((Y,d_{Y})\) respectively.
The converse of Lemma 5.7 is not true, as the following example shows.
**Example 5.8**.: Consider the \(g\)-quasi metric space \((\mathbb{R},d),\) as defined in Example 3.8, of index \(1.\) Then the sequences \((x_{n})\) and \((y_{n})\) in \(\mathbb{R},\) defined by
\[x_{n}=\begin{cases}1&\text{if $n$ is odd}\\ 10^{n}&\text{if $n$ is even}\end{cases}\]
and
\[y_{n}=\begin{cases}10^{n}&\text{if $n$ is odd}\\ 1&\text{if $n$ is even}\end{cases}\]
\(\forall\ n\in\mathbb{N},\) are pseudo-Cauchy in \((\mathbb{R},d).\)
However \(((x_{n},y_{n}))\) is not pseudo-Cauchy in \(\mathbb{R}\times\mathbb{R}\) (where \(\mathbb{R}\times\mathbb{R}\) is equipped with \(d^{\prime},\) the \(g\)-quasi metric product of \(d\) with itself). In fact, for any pair of positive integers \(m,q\)\((m\neq q)\) with \(m,q\geq 1,\) we obtain \(d^{\prime}((x_{m},y_{m}),(x_{q},y_{q}))>2,\) by considering even and odd cases separately for \(m\) and \(q.\) Thus \(((x_{n},y_{n}))\) is not pseudo-Cauchy in \((\mathbb{R}\times\mathbb{R},d^{\prime}).\)
**Theorem 5.9**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r.\) Then \((X\times Y,d_{XY})\) is complete if and only if \((X,d_{X})\) and \((Y,d_{Y})\) are complete.
Proof.: Let \((X\times Y,d_{XY})\) be complete.
We first show that \((X,d_{X})\) is complete. Choose a Cauchy sequence \((x_{n})\) in \((X,d_{X})\) and fix \(y\in Y.\) Then by Lemma 5.5, \(((x_{n},y))\) is Cauchy in \((X\times Y,d_{XY}).\) Since \((X\times Y,d_{XY})\) is complete, \(((x_{n},y))\) converges to a point \((a,b)\) in \((X\times Y,d_{XY}).\)
We claim that \((x_{n})\) is convergent to \(a\) in \((X,d_{X}).\)
Let \(V\) be a generalized open set in \((X,\mu(d_{X}))\) containing \(a.\) Then there exist \(p\in X,\delta>r\) such that \(a\in B_{d_{X}}(p,\delta)\subset V.\)
Since \(B_{d_{XY}}((p,b),\delta)\) is a generalized open set in \((X\times Y,\mu(d_{XY}))\) containing \((a,b),\ \exists\ k\in\mathbb{N}\) such that \((x_{n},y)\in B_{d_{XY}}((p,b),\delta),\ \forall\ n\geq k.\)
Thus \(\max\{d_{X}(p,x_{n}),d_{Y}(b,y)\}<\delta,\ \forall\ n\geq k\implies d_{X}(p,x_{n})<\delta,\ \forall\ n\geq k.\)
i.e., \(x_{n}\in B_{d_{X}}(p,\delta)\subset V,\ \forall\ n\geq k.\) Hence \((x_{n})\) converges to \(a\) in \((X,d_{X})\) and so, \((X,d_{X})\) is complete. Similarly \((Y,d_{Y})\) is complete.
_Conversely_, let \((X,d_{X})\) and \((Y,d_{Y})\) be complete.
Choose a Cauchy sequence \(((x_{n},y_{n}))\) in \((X\times Y,d_{XY}).\) By Lemma 5.5, \((x_{n})\) and \((y_{n})\) are Cauchy in \((X,d_{X})\) and \((Y,d_{Y})\) respectively. So by hypothesis, there exist \(a\in X,b\in Y\) such that \((x_{n})\) converges to \(a\) in \((X,d_{X})\) and \((y_{n})\) to \(b\) in \((Y,d_{Y}).\)
We show that \(((x_{n},y_{n}))\) converges to \((a,b)\) in \((X\times Y,d_{XY}).\)
Let \(W\) be a generalized open set in \((X\times Y,\mu(d_{XY}))\) containing \((a,b).\) Then there exist \((p,q)\in X\times Y,\delta>r\) such that \((a,b)\in B_{d_{XY}}((p,q),\delta)\subset W.\) Consequently \(a\in B_{d_{X}}(p,\delta)\) and \(b\in B_{d_{Y}}(q,\delta).\)
Since \((x_{n})\) converges to \(a\) and \((y_{n})\) to \(b\), there exist \(k_{1},k_{2}\in\mathbb{N}\) such that \(x_{n}\in B_{d_{X}}(p,\delta),\ \forall\ n\geq k_{1}\) and \(y_{n}\in B_{d_{Y}}(q,\delta),\ \forall\ n\geq k_{2}\).
Set \(k=\max\{k_{1},k_{2}\}\). Then \((x_{n},y_{n})\in B_{d_{XY}}((p,q),\delta)\subset W,\ \forall\ n\geq k\). Thus \(((x_{n},y_{n}))\) converges to \((a,b)\) in \((X\times Y,d_{XY})\).
Thus \((X\times Y,d_{XY})\) is complete.
A similar chain of arguments yields the following results, which we state without proof.
**Theorem 5.10**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r\). Then \((X\times Y,d_{XY})\) is \(G\)-complete if and only if \((X,d_{X})\) and \((Y,d_{Y})\) are \(G\)-complete.
**Theorem 5.11**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r\). If \((X\times Y,d_{XY})\) is weak \(G\)-complete then \((X,d_{X})\) and \((Y,d_{Y})\) are weak \(G\)-complete.
**Theorem 5.12**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be \(g\)-quasi metric spaces of the same index \(r\). If \((X\times Y,d_{XY})\) is (strongly) Lebesgue then \((X,d_{X})\) and \((Y,d_{Y})\) are (strongly) Lebesgue.
|
2307.01759 | Pretraining is All You Need: A Multi-Atlas Enhanced Transformer
Framework for Autism Spectrum Disorder Classification | Autism spectrum disorder (ASD) is a prevalent psychiatric condition
characterized by atypical cognitive, emotional, and social patterns. Timely and
accurate diagnosis is crucial for effective interventions and improved outcomes
in individuals with ASD. In this study, we propose a novel Multi-Atlas Enhanced
Transformer framework, METAFormer, ASD classification. Our framework utilizes
resting-state functional magnetic resonance imaging data from the ABIDE I
dataset, comprising 406 ASD and 476 typical control (TC) subjects. METAFormer
employs a multi-atlas approach, where flattened connectivity matrices from the
AAL, CC200, and DOS160 atlases serve as input to the transformer encoder.
Notably, we demonstrate that self-supervised pretraining, involving the
reconstruction of masked values from the input, significantly enhances
classification performance without the need for additional or separate training
data. Through stratified cross-validation, we evaluate the proposed framework
and show that it surpasses state-of-the-art performance on the ABIDE I dataset,
with an average accuracy of 83.7% and an AUC-score of 0.832. The code for our
framework is available at https://github.com/Lugges991/METAFormer | Lucas Mahler, Qi Wang, Julius Steiglechner, Florian Birk, Samuel Heczko, Klaus Scheffler, Gabriele Lohmann | 2023-07-04T15:00:06Z | http://arxiv.org/abs/2307.01759v2 | Pretraining is All You Need: A Multi-Atlas Enhanced Transformer Framework for Autism Spectrum Disorder Classification
###### Abstract
Autism spectrum disorder (ASD) is a prevalent psychiatric condition characterized by atypical cognitive, emotional, and social patterns. Timely and accurate diagnosis is crucial for effective interventions and improved outcomes in individuals with ASD. In this study, we propose a novel Multi-Atlas Enhanced Transformer framework, METAFormer, for ASD classification. Our framework utilizes resting-state functional magnetic resonance imaging data from the ABIDE I dataset, comprising 406 ASD and 476 typical control (TC) subjects. METAFormer employs a multi-atlas approach, where flattened connectivity matrices from the AAL, CC200, and DOS160 atlases serve as input to the transformer encoder. Notably, we demonstrate that self-supervised pretraining, involving the reconstruction of masked values from the input, significantly enhances classification performance without the need for additional or separate training data. Through stratified cross-validation, we evaluate the proposed framework and show that it surpasses state-of-the-art performance on the ABIDE I dataset, with an average accuracy of 83.7% and an AUC-score of 0.832. The code for our framework is available at github.com/Lugges991/METAFormer
Keywords:Deep Learning Transformers fMRI Autism Spectrum Disorder Classification
## 1 Introduction
Autism spectrum disorder (ASD) is a widespread psychiatric condition characterized by atypical cognitive, emotional, and social patterns. With millions of individuals affected worldwide, the early diagnosis of ASD is a critical research priority, as it has a significant positive impact on patient outcomes. The etiology of ASD remains elusive, with intricate interactions among genetic, biological, psychological, and environmental factors playing a role. Currently, diagnosing ASD relies heavily on behavioral observations and anamnestic information, posing challenges and consuming a considerable amount of time. Skilled clinicians with extensive experience are required for accurate diagnosis. However,
common assessments of ASD have been criticized for their lack of objectivity and transparency [27]. Given these limitations, there is an urgent need for a fast, cost-effective, and objective diagnostic method that can accurately identify ASD leading to more timely interventions and improved outcomes for affected individuals.
In recent years, magnetic resonance imaging (MRI) has emerged as a powerful non-invasive tool for gaining insights into brain disorders' pathophysiology. Functional MRI (fMRI), a notable advancement in MRI technology, allows for the investigation of brain function by measuring changes in blood oxygen levels over time. Functional connectivity (FC) analysis [4] plays a crucial role in fMRI data analysis, as it examines the statistical dependencies and temporal correlations among different brain regions. Rather than considering isolated abnormalities in specific regions, brain disorders often arise from disrupted communication and abnormal interactions between regions. FC analysis enables researchers to explore network-level abnormalities associated with various disorders. This analysis involves partitioning the brain into regions of interest (ROIs) and quantifying the correlations between their time series using various mathematical measures.
In recent years, machine learning approaches have been widely applied to the problem of ASD classification using resting-state fMRI (rs-fMRI) data. The majority of these studies use functional connectivities obtained from a predefined atlas as input to their classifiers. A considerable amount of work used classical machine learning algorithms such as support vector machines and logistic regression to classify ASD [11]. However, these methods have limitations as they are typically applied to small datasets with specific protocols and fixed scanner parameters, which may not adequately capture the heterogeneity present in clinical data. 3D convolutional neural networks [20, 26, 33] have also been applied to preprocessed fMRI data, and [1] used 2D CNNs on preprocessed fMRI data. However, these approaches are likewise limited by the fact that they only used small, homogeneous datasets.
More recent works tried to overcome the homogeneity limitations and have used deep learning approaches to classify ASD based on connectomes. Multi-layer perceptrons are suited to the vector based representations of connectomes and have thus seen some usage in ASD classification [12, 30]. Graph convolutional models are also well suited and have yielded high accuracies [19, 29]. Other approaches used 1D CNNs [23], or variants of recurrent neural networks [17, 18], and also probabilistic neural networks have been proposed [16].
However, ASD classification is not limited to fMRI data and there has been work using, for example, EEG [5] or also more novel imaging approaches such as functional near-infrared spectroscopy [13].
The current study aims to improve classification performance of ASD based on rs-fMRI data over the entire ABIDE I dataset [22] by leveraging the representational capabilities of modern transformer architectures. We thus summarize our main contributions as follows:
1. We propose a novel multi-atlas enhanced transformer framework for ASD classification using rs-fMRI data: METAFormer
2. We demonstrate that self-supervised pretraining leads to significant improvements in performance without the requirement of additional data.
3. We show that our model outperforms state of the art methods on the ABIDE I dataset.
## 2 Methods
### Dataset
Our experiments are conducted on the ABIDE I dataset [22] which is a publicly available dataset containing structural MRI as well as rs-fMRI data obtained from individuals with Autism Spectrum Disorder (ASD) and typical controls (TC) from 17 different research sites. The raw dataset encompasses a total of 1112 subjects, 539 of which are diagnosed with ASD and 573 are TC. Subjects ages range from 7 to 64 years with a median age of 14.7 years across groups. The ABIDE I dataset is regarded as one of the most comprehensive and widely used datasets in the field, offering a combination of MRI, rs-fMRI, and demographic data.
The ABIDE I dataset exhibits significant heterogeneity and variations that should be taken into account. It comprises data from diverse research sites worldwide, leading to variations in scanning protocols, age groups, and other relevant factors. Consequently, the analysis and interpretation of the ABIDE I dataset pose challenges due to this inherent heterogeneity.
#### 2.1.1 Preprocessing Pipeline.
We utilize the ABIDE I dataset provided by the Preprocessed Connectomes Project (PCP) [6] for our analysis. The PCP provides data for ABIDE I using different preprocessing strategies. In this work we use the preprocessed data from the DPARSF pipeline [31] comprising 406 ASD and 476 TC subjects. The DPARSF pipeline is based on SPM8 and includes the following steps: The first 4 volumes of each fMRI time series are discarded to allow for magnetization stabilization. Slice timing correction is performed to correct for differences in acquisition time between slices. The fMRI time series are then realigned to the first volume to correct for head motion. Intensity normalization is not performed. To clean confounding variation due to physiological noise, 24-parameter head motion, mean white matter and CSF signals are regressed out. Motion realignment parameters are also regressed out as well as linear and quadratic trends in low-frequency drifts. Bandpass filtering was performed after regressing nuisance signals to remove high-frequency noise and low-frequency drifts. Finally, functional to anatomical registration is performed using rigid body transformation and anatomical to standard space registration is performed using DARTEL [2].
#### 2.1.2 Functional Connectivity.
As the dimensionality of the preprocessed data is very high, we perform dimensionality reduction by dividing the brain into a set of parcels or regions with similar properties according to a brain atlas. In this work
we process our data using three different atlases. The first atlas is the Automated Anatomical Labeling (AAL) atlas [25]. This atlas, which is widely used in the literature, divides the brain into 116 regions of interest (ROIs) based on anatomical landmarks and was fractionated to functional resolution of \(3mm^{3}\) using nearest-neighbor interpolation. The second atlas is the Craddock 200 (CC200) atlas [7]. It divides the brain into 200 ROIs based on functional connectivity and was fractionated to functional resolution of \(3mm^{3}\) using nearest-neighbor interpolation. The third atlas we considered is the Dosenbach 160 (DOS160) atlas [10] which contains uniform spheres placed at coordinates obtained from meta-analyses of task-related fMRI studies.
After obtaining the ROI time-series from the atlases, we compute the functional connectivity using the Pearson Correlation Coefficient between each pair of ROIs. The upper triangular part of the correlation matrix as well as the diagonal are then dropped and the lower triangular part is vectorized to obtain a feature vector of length \(k(k-1)/2\), where \(k\) is the number of ROIs, which is then standardized and serves as input to our models.
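As a concrete illustration of this feature-extraction step, the following sketch (our own minimal example, not the authors' code; the exact standardization scheme is not specified in the text, so the snippet simply z-scores each subject's vector) computes the Pearson-correlation matrix from an ROI time-series array and keeps only the strictly lower-triangular entries.

```python
import numpy as np

def connectome_features(roi_timeseries):
    """roi_timeseries: array of shape (timepoints, k), one column per ROI.
    Returns the flattened functional connectivity vector of length k*(k-1)/2."""
    fc = np.corrcoef(roi_timeseries, rowvar=False)      # (k, k) Pearson correlation matrix
    k = fc.shape[0]
    lower = fc[np.tril_indices(k, k=-1)]                # drop the diagonal and the upper triangle
    return (lower - lower.mean()) / lower.std()         # standardization (scheme assumed)

# toy usage: 200 timepoints, 116 ROIs as in the AAL atlas
rng = np.random.default_rng(0)
features = connectome_features(rng.standard_normal((200, 116)))
print(features.shape)                                   # (6670,) == 116*115/2
```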
### Model Architecture
#### 2.2.1 METAFormer: Multi-Atlas Transformer.
Here, we propose METAFormer, at the core of which lies the transformer encoder architecture, originally proposed by Vaswani et al. [28] for natural language processing tasks. However, as our main goal is to perform classification and not generation we do not use the decoder part of the transformer architecture. In order to accommodate input from multiple different atlases, we employ an ensemble of three separate transformers, with each transformer corresponding to a specific atlas. As depicted in Figure 1, the input to each transformer is a batch of flattened functional connectivity matrices. First, the input to each transformer undergoes embedding into a latent space using a linear layer with a dimensionality of \(d_{model}=256\). The output of the embedding is then multiplied by \(\sqrt{d_{model}}\) to scale the input features. This scaling operation aids in balancing the impact of the input features with the attention mechanism. Since we are not dealing with sequential data, positional encodings are not utilized.
The embedded input is subsequently passed through a BERT-style encoder [9], which consists of \(N=2\) identical layers with \(d_{ff}=128\) feed forward units, and \(h=4\) attention heads. To maintain stability during training, each encoder layer is normalized using layer normalization [3], and GELU [15] is used as the activation function. Following the final encoder layer, the output passes through a dropout layer. Then, a linear layer with \(d_{model}\) hidden units and two output units corresponding to the two classes is applied to obtain the final output. The outputs of the three separate transformers are averaged, and this averaged representation is passed through a softmax layer to derive the final class probabilities.
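The description above can be turned into a compact PyTorch sketch. The snippet below is our own minimal reconstruction rather than the authors' implementation: the hyperparameters (\(d_{model}=256\), \(N=2\) layers, \(d_{ff}=128\), \(h=4\) heads, GELU, layer normalization, no positional encoding) are taken from the text, whereas treating each embedded connectome as a length-1 token sequence, the dropout rate of 0.1, and reading the classification head as a single linear map are our assumptions.

```python
import math
import torch
import torch.nn as nn

class SingleAtlasTransformer(nn.Module):
    """One branch of METAFormer, assembled from the description in the text."""
    def __init__(self, n_features, d_model=256, n_heads=4, d_ff=128,
                 n_layers=2, dropout=0.1, n_classes=2):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Linear(n_features, d_model)          # connectome -> latent space
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=d_ff,
                                           dropout=dropout, activation="gelu",
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.dropout = nn.Dropout(dropout)
        self.classify = nn.Linear(d_model, n_classes)         # final linear classifier (our reading)

    def forward(self, x):                                     # x: (batch, n_features)
        h = self.embed(x) * math.sqrt(self.d_model)           # scale by sqrt(d_model)
        h = self.encoder(h.unsqueeze(1)).squeeze(1)           # assumption: length-1 "sequence"
        return self.classify(self.dropout(h))                 # (batch, 2) logits

class METAFormer(nn.Module):
    """Ensemble of three single-atlas branches; their outputs are averaged and softmaxed."""
    def __init__(self, feature_dims=(6670, 19900, 12720)):    # AAL, CC200, DOS160 vector sizes
        super().__init__()
        self.branches = nn.ModuleList([SingleAtlasTransformer(n) for n in feature_dims])

    def forward(self, xs):                                    # xs: one (batch, n_i) tensor per atlas
        avg = torch.stack([b(x) for b, x in zip(self.branches, xs)]).mean(dim=0)
        return avg.softmax(dim=-1)                            # final class probabilities

model = METAFormer()
batch = [torch.randn(8, n) for n in (6670, 19900, 12720)]
print(model(batch).shape)                                     # torch.Size([8, 2])
```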
To train our Multi-Atlas Transformer model, we follow a series of steps. Firstly, we initialize the model weights using the initialization strategy proposed by He [14], while setting the biases to 0. To optimize the model, we employ the
AdamW optimizer [21] and minimize the binary cross entropy between predictions and labels. Our training process consists of 750 epochs, utilizing a batch size of 256. To prevent overfitting, we implement early stopping with a patience of 40 epochs. In order to ensure robustness of our model, we apply data augmentation. Specifically, we randomly introduce noise to each flattened connectome vector with an augmentation probability of 0.3. The noise standard deviation is set to 0.01. We conduct hyperparameter tuning using grid search. We optimize hyperparameters related to the optimizer, such as learning rate and weight decay. We also consider the dropout rate during this process.
### Self-Supervised Pretraining
As popularized by [24], the utilization of self-supervised generative pretraining followed by task-specific fine-tuning has demonstrated improved performance in transformer architectures. Building upon this approach, we propose a self-supervised pretraining task for our model. Our approach involves imputing missing elements in the functional connectivity matrices, drawing inspiration from the work introduced by [32]. To simulate missing data, we randomly set 10% of the standardized features in each connectome to 0 and train the model to predict the missing values. The corresponding configuration is illustrated in Figure 2. To achieve this, we begin by randomly sampling a binary noise mask \(M\in\{0,1\}^{n_{i}}\) for each training sample, where \(n_{i}\) denotes the number of features in the \(i\)-th connectome. Subsequently, the original input \(X\) is masked using element-wise multiplication with the noise mask: \(X_{masked}=X\odot M\).
Figure 1: Architecture of METAFormer. Our model consists of three separate transformers, each corresponding to a specific atlas. The input to each transformer is a batch of flattened functional connectivity matrices with the diagonal and the upper triangular part of the matrix removed. The output of the transformers is averaged and passed through a softmax layer to derive the final class probabilities.
To estimate the corrupted input, we introduce a linear layer with \(n_{i}\) output neurons on top of the encoder stack, which predicts \(\hat{x}_{i}\). We calculate a multi atlas masked mean squared error (MAMSE) loss \(\mathcal{L}_{multi}\) between the predicted and the original input:
\[\mathcal{L}_{multi}=\frac{1}{3}\sum_{i=1}^{3}\frac{1}{n_{i}}\sum_{j\in M}^{n_{ i}}||x_{i,j}-\hat{x}_{i,j}||^{2} \tag{1}\]
where \(x_{i,j}\) is the original value of the \(j\)-th masked input from the \(i\)-th atlas and \(\hat{x}_{i,j}\) is the predicted value for the masked input at position \(j\) in the \(i\)-th atlas.
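A minimal sketch of this pretraining objective is given below (our own illustration, not the authors' code): the 10% masking fraction and the per-atlas \(1/n_{i}\) normalization follow the text, while the additional mean over the batch is our own simplification.

```python
import torch

def mask_connectome(x, mask_frac=0.10):
    """Randomly zero out ~10% of the standardized features (cf. Figure 2).
    Returns the corrupted input and the binary mask M (1 = kept, 0 = masked)."""
    mask = (torch.rand_like(x) >= mask_frac).float()
    return x * mask, mask

def mamse_loss(preds, targets, masks):
    """Multi-atlas masked MSE of Eq. (1): for each atlas i, sum the squared errors over
    the masked positions, divide by n_i, average over the three atlases (and over the batch)."""
    per_atlas = []
    for x_hat, x, m in zip(preds, targets, masks):
        masked = 1.0 - m                                            # 1 exactly at the masked positions
        per_atlas.append((((x_hat - x) ** 2) * masked).sum(dim=1).div(x.shape[1]).mean())
    return sum(per_atlas) / len(per_atlas)

# toy usage with the three atlas feature sizes
targets = [torch.randn(8, n) for n in (6670, 19900, 12720)]
corrupted, masks = zip(*[mask_connectome(x) for x in targets])
preds = [c + 0.05 * torch.randn_like(c) for c in corrupted]         # stand-in for the model's reconstruction
print(mamse_loss(preds, targets, masks).item())
```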
## 3 Experiments
### Experimental Setup
To evaluate the classification performance of our models in a robust manner, we employed 10-fold stratified cross-validation. For each fold, the model is trained on the remaining nine training folds and evaluated on the held-out test fold. Further, we set aside 30% of each training fold as validation sets which are then used for hyperparameter tuning and early stopping.
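The splitting scheme can be sketched with scikit-learn as follows (our own illustration on placeholder data; whether the inner 30% validation split is itself stratified is not stated in the paper and is assumed here).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

# placeholder data standing in for the flattened connectomes and ASD/TC labels
rng = np.random.default_rng(0)
X = rng.standard_normal((882, 6670))          # 882 subjects, AAL-sized feature vectors
y = rng.integers(0, 2, size=882)              # 0 = TC, 1 = ASD

outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(outer.split(X, y)):
    # 30% of each training fold is set aside for tuning and early stopping
    tr_idx, val_idx = train_test_split(train_idx, test_size=0.3,
                                       stratify=y[train_idx], random_state=0)
    # pretrain + fine-tune on tr_idx, monitor val_idx, evaluate once on test_idx
    print(fold, len(tr_idx), len(val_idx), len(test_idx))
```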
In order to assess the impact of self-supervised pretraining, we compare the performance of our model with and without pretraining. To achieve that, we first pretrain the model using the imputation task on the training folds and subsequently fine-tune the model on the training folds using the classification task after which we evaluate on the held-out test fold.
Figure 2: Self-supervised pretraining on the imputation task of the METAFormer architecture. The inputs to the model are masked connectomes, where 10% of the features are randomly set to 0 (exemplified as black squares). The model is trained to predict the missing values implying that the output of the model has the same shape as the input.
In order to verify the efficacy of using multiple atlases as input we compared the performance of our METAFormer model with the performance of single atlas transformer (SAT) models. For that, we trained three separate transformer models using only one atlas as input. The SAT models are trained using the same architecture as well as training procedure as the METAFormer model. We also evaluated the performance of the SAT models with and without self-supervised pretraining in order to asses its impact on the performance of the model. To make results comparable, we use the same training and validation folds for all model configurations under investigation.
### Evaluation Metrics
By using cross-validation, we obtained 10 different sets of performance scores per configuration. These scores were then averaged and the standard deviation of each score was obtained, providing reliable estimates of the model's performance on unseen data. The classification results were reported in terms of accuracy, precision, recall (sensitivity), F1-score and AUC-score, which are commonly used metrics for evaluating classification models.
## 4 ASD Classification Results
Table 1 shows the superior performance of our pretrained METAFormer model compared to previously published ASD classifiers trained on atlas-based connectomes. Importantly, our model achieves higher accuracy even when compared to approaches with similar test set sizes that did not employ cross-validation.
To further validate the effectiveness of our proposed Multi-Atlas Transformer model for Autism Spectrum Disorder classification, we compare METAFormer against single atlas transformers. The results, as presented in Table 2, demonstrate the superiority of METAFormer over all single atlas models in terms of accuracy, precision, recall, F1-score, and AUC-score. Moreover, the multi-atlas model exhibits comparable or lower standard deviations in performance metrics compared to the single atlas models. This indicates higher robustness and stability of our multi-atlas METAFormer architecture, attributed to the joint training of the three transformer encoders.
### Impact of Pretraining
We also evaluated the effect of self-supervised pretraining on the classification performance of our models. As Table 2 shows, pretraining significantly improves the performance of all models in terms of accuracy, precision, recall, F1-score and AUC-score. Furthermore, for our proposed METAFormer architecture, pretraining improves the performance by a large margin.
## 5 Conclusion
In this paper, we propose METAFormer, a novel multi-atlas enhanced pretrained transformer architecture for ASD classification. We utilize self-supervised pretraining on the imputation task on the same dataset to prime the model for the downstream task. We conducted extensive experiments to demonstrate the effectiveness of our approach by comparing it to several baselines that use single-atlas and multi-atlas approaches with and without pretraining. Our results show that our model performs better than state-of-the-art methods and that pretraining is highly beneficial for the downstream task.
## 6 Acknowledgements
The authors thank the International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction (IMPRS-MMFD) for supporting Samuel Heczko. Florian Birk is supported by the Deutsche Forschungsgemeinschaft (DFG) Grant DFG HE 9297/1-1. Julius Steiglechner is funded by Alzheimer Forschung Initiative e.V. Grant #18052.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Study** & \#**ASD** & \#**TC** & **Model** & **CV** & **Acc.** \\ \hline
MAGE [29] & 419 & 513 & Graph CNN & 10-fold & 75.9\% \\
AIMAFE [30] & 419 & 513 & MLP & 10-fold & 74.5\% \\
1DCNN-GRU [23] & – & – & 1D CNN & – & 78.0\% \\
MISODNN [12] & 506 & 532 & MLP & 10-fold & 78.1\% \\
3D CNN [8] & 539 & 573 & 3D CNN & 5-fold & 74.53\% \\
CNNGCN [17] & 403 & 468 & CNN/GRU & – & 72.48\% \\
SSRN [18] & 505 & 530 & LSTM & – & 81.1\% \\
**Ours** & 408 & 476 & Transformer & 10-fold & **83.7\%** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of state-of-the-art ASD classification methods that use large, heterogenous samples from ABIDE I. Note that our model achieves the highest accuracy while still using 10-fold cross-validation.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Variant** & **Acc.** & **Prec.** & **Rec.** & **F1** & **AUC** \\ \hline
METAFormer PT & **0.837** \(\pm 0.030\) & **0.819** \(\pm 0.045\) & 0.901 \(\pm 0.044\) & **0.856** \(\pm 0.023\) & **0.832** \(\pm 0.032\) \\
METAFormer & 0.628 \(\pm 0.041\) & 0.648 \(\pm 0.040\) & 0.688 \(\pm 0.091\) & 0.663 \(\pm 0.047\) & 0.623 \(\pm 0.041\) \\ \hline
SAT (AAL) & 0.593 \(\pm 0.040\) & 0.585 \(\pm 0.042\) & 0.888 \(\pm 0.091\) & 0.701 \(\pm 0.024\) & 0.568 \(\pm 0.047\) \\
SAT (CC200) & 0.586 \(\pm 0.037\) & 0.577 \(\pm 0.027\) & 0.888 \(\pm 0.057\) & 0.698 \(\pm 0.019\) & 0.560 \(\pm 0.044\) \\
SAT (DOS160) & 0.570 \(\pm 0.055\) & 0.571 \(\pm 0.038\) & 0.816 \(\pm 0.101\) & 0.670 \(\pm 0.051\) & 0.550 \(\pm 0.056\) \\
SAT (AAL) PT & 0.601 \(\pm 0.069\) & 0.587 \(\pm 0.055\) & 0.939 \(\pm 0.059\) & 0.719 \(\pm 0.033\) & 0.573 \(\pm 0.077\) \\
SAT (CC200) PT & 0.632 \(\pm 0.071\) & 0.622 \(\pm 0.074\) & 0.891 \(\pm 0.102\) & 0.724 \(\pm 0.035\) & 0.611 \(\pm 0.082\) \\
SAT (DOS160) PT & 0.683 \(\pm 0.094\) & 0.652 \(\pm 0.091\) & **0.964** \(\pm 0.057\) & 0.771 \(\pm 0.047\) & 0.660 \(\pm 0.106\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Classification results for the different model configurations. Reported values are the mean \(\pm\) standard deviation over 10 folds. The best results are in bold. SAT=Single Atlas Transformer, PT=Pretrained, atlases are in braces. Note that pretraining significantly improves performance across metrics and atlases. Using our multi-atlas METAFormer in combination with pretraining yields impressive performance increases. |
2310.02847 | On the Length of Strongly Monotone Descending Chains over $\mathbb{N}^d$ | A recent breakthrough by K\"unnemann, Mazowiecki, Sch\"utze, Sinclair-Banks,
and Wegrzycki (ICALP, 2023) bounds the running time for the coverability
problem in $d$-dimensional vector addition systems under unary encoding to
$n^{2^{O(d)}}$, improving on Rackoff's $n^{2^{O(d\lg d)}}$ upper bound (Theor.
Comput. Sci., 1978), and provides conditional matching lower bounds.
In this paper, we revisit Lazi\'c and Schmitz' "ideal view" of the backward
coverability algorithm (Inform. Comput., 2021) in the light of this
breakthrough. We show that the controlled strongly monotone descending chains
of downwards-closed sets over $\mathbb{N}^d$ that arise from the dual backward
coverability algorithm of Lazi\'c and Schmitz on $d$-dimensional unary vector
addition systems also enjoy this tight $n^{2^{O(d)}}$ upper bound on their
length, and that this also translates into the same bound on the running time
of the backward coverability algorithm.
Furthermore, our analysis takes place in a more general setting than that of
Lazi\'c and Schmitz, which allows to show the same results and improve on the
2EXPSPACE upper bound derived by Benedikt, Duff, Sharad, and Worrell (LICS,
2017) for the coverability problem in invertible affine nets. | Sylvain Schmitz, Lia Schütze | 2023-10-04T14:32:54Z | http://arxiv.org/abs/2310.02847v2 | # On the length of strongly monotone descending chains over \(\mathbb{N}^{d}\)
###### Abstract.
A recent breakthrough by Kunnemann, Mazowiecki, Schutze, Sinclair-Banks, and Wegrzycki (ICALP 2023) bounds the running time for the coverability problem in \(d\)-dimensional vector addition systems under unary encoding to \(n^{2^{O(d)}}\), improving on Rackoff's \(n^{2^{O(d\lg d)}}\) upper bound (_Theor. Comput. Sci._ 1978), and provides conditional matching lower bounds.
In this paper, we revisit Lazic and Schmitz' "ideal view" of the backward coverability algorithm (_Inform. Comput._ 2021) in the light of this breakthrough. We show that the controlled strongly monotone descending chains of downwards-closed sets over \(\mathbb{N}^{d}\) that arise from the dual backward coverability algorithm of Lazic and Schmitz on \(d\)-dimensional unary vector addition systems also enjoy this tight \(n^{2^{O(d)}}\) upper bound on their length, and that this also translates into the same bound on the running time of the backward coverability algorithm.
Furthermore, our analysis takes place in a more general setting than that of Lazic and Schmitz, which allows to show the same results and improve on the 2EXPSPACE upper bound derived by Benedikt, Duff, Sharad, and Worrell (LICS 2017) for the coverability problem in invertible affine nets.
Keywords. Vector addition system, coverability, well-quasi-order, ideal, affine net
\({}^{1}\) Universite Paris Cite, CNRS, IRIF, Paris, France
\({}^{2}\) IUF, France
\({}^{3}\) Max Planck Institute for Software Systems (MPI-SWS), Kaiserslautern, Germany
## 1. Introduction
_Well-Quasi-Orders_ (wqo for short) are a notion from order theory [28, 40] that has proven very effective in many areas of mathematics, logic, combinatorics, and computer science in order to establish finiteness statements. For instance, in the field of formal verification, they provide the termination arguments for the generic algorithms for _well structured transition systems_[1, 21], notably the _backward coverability algorithm_ for deciding safety properties [3, 1, 21].
In full generality, one cannot extract complexity bounds from wqo-powered termination proofs. Nevertheless, in an algorithmic setting, one can "instrument" wqos by considering so-called _controlled sequences_[40, 38], and new tight complexity upper bounds for wqo-based algorithms now appear on a regular basis [e.g., 39, 4, 6, 5, 25, for a few recent examples].
Those complexity upper bounds are however astronomically high, and sometimes actually way too high for the problem at hand. An emblematic illustration of this phenomenon is the backward coverability algorithm for vector addition systems (VAS), which was shown to run in double exponential time by Bozzelli and Ganty [12] based on an original analysis due to Rackoff [36]: the corresponding bounds over the wqo \(\mathbb{N}^{d}\) are Ackermannian [19].
_Descending Chains._ One way pioneered by Lazic and Schmitz [31] to close such complexity gaps while retaining some of the wide applicability of wqos and well structured transition systems is to focus on the descending chains of downwards closed sets over the wqo at hand. Indeed, one of the equivalent characterisations of wqos is the _descending chain condition_[28, 40], which guarantees that those descending chains are finite.
In themselves, descending chains are no silver bullet: e.g., the controlled descending chains over \(\mathbb{N}^{d}\) are also of Ackermannian length [31, Thm. 3.10]. Nevertheless, these chains sometimes exhibit a form of "monotony," which yields vastly improved upper bounds. When applied to a _dual_ version of the backward coverability algorithm in well structured transition systems, this allows to recover the same double exponential time upper bound as in [12, 36] for the VAS coverability problem, along with tight upper bounds for coverability in several VAS extensions. The same framework was also the key to establishing tight bounds for coverability in \(\nu\)-Petri nets [30]. As a further testimony to the versatility of the approach, Benedikt, Duff, Sharad, and Worrell use it in [7] to derive original upper bounds for problems on invertible polynomial automata and invertible affine nets, in a setting that is not strictly speaking one of well structured transition systems.
#### Fine-grained Bounds for VAS Coverability
The coverability problem in VAS is well-known to be \(\mathsf{EXPSPACE}\)-complete, thanks to Rackoff's 1978 upper bound matching a 1976 lower bound by Lipton [33]. The main parameter driving this complexity is the dimension of the system: the problem is in pseudo-polynomial time in fixed dimension \(d\); more precisely, Rackoff's analysis yields an \(n^{2^{O(d\lg d)}}\) deterministic time upper bound for \(d\)-dimensional VAS encoded in unary [37], by proving the same bound on the length of a covering execution of minimal length. Here, there is a discrepancy with the \(n^{2^{\Omega(d)}}\) lower bound on the length of that execution in Lipton's construction--a discrepancy that was already highlighted as an open problem in the early 1980's by Mayr and Meyer [34], and settled in the specific case of reversible systems by Koppenhagen and Mayr [27]. The upper bounds of both Bozzelli and Ganty [12] and Lazic and Schmitz [31] on the complexity of the backward coverability algorithm inherit from Rackoff's \(n^{2^{O(d\lg d)}}\) bound and suffer from the same discrepancy.
This was the situation until Kunnemann, Mazowiecki, Schutze, Sinclair-Banks, and Wegrzycki showed an \(n^{2^{O(d)}}\) upper bound on the length of minimal covering executions of unary encoded \(d\)-dimensional VAS, matching Lipton's lower bound [29, Thm. 3.3]. This directly translates into a deterministic time algorithm with the same upper bound [29, Cor. 3.4]. Furthermore, assuming the exponential time hypothesis, Kunnemann et al. also show that there does not exist a deterministic \(n^{o(2^{d})}\) time algorithm deciding coverability in unary encoded \(d\)-dimensional VAS [29, Thm. 4.2].
#### Thinness
The improved upper bound relies on the notion of a _thin_ vector in \(\mathbb{N}^{d}\)[29, Def. 3.6] (somewhat reminiscent of the "extractors" of Leroux [32]). The proof of [29, Thm. 3.3] works by induction on the dimension \(d\). By splitting a covering execution of minimal length at the first non-thin configuration, Kunnemann et al. obtain a prefix made of distinct thin configurations (which must then be of bounded length), and a suffix starting from a configuration with some components high enough to be
disregarded, hence that can be treated as an execution in a VAS of lower dimension, on which the induction hypothesis applies.
#### Contributions
In this paper, we show that the improved \(n^{2^{O(d)}}\) upper bound of Kunnemann et al. [29] also applies to the number of iterations of the backward coverability algorithm for \(d\)-dimensional VAS encoded in unary (see Theorem 4.2). In order to do so, one could reuse the approach of Bozzelli and Ganty [12] to lift the improved bound from the length of minimal covering executions to the running time of the backward coverability algorithm, but here we aim for the generality of the framework of [31].
Our main contribution is thus to show in § 3 that the upper bounds on the length of strongly monotone controlled descending chains of downwards closed sets over \(\mathbb{N}^{d}\)--which include those constructed during the running of the backward coverability algorithm for VAS--can be improved similarly (see Theorem 3.6) when focusing on a suitably generalised notion of thinness. As a byproduct, we observe that thinness is an inherent property of such chains (see Corollary 3.7), rather than an _a priori_ condition that--almost magically--yields the improved bound.
In addition to the application in § 4.2 to the coverability problem in vector addition systems, where we show that the backward coverability algorithm runs in time \(n^{2^{O(d)}}\) (see Corollary 4.5) and is therefore conditionally optimal by [29, Thm. 4.2], and as a further demonstration of the versatility of our results, we show in § 4.3 how to apply them to invertible affine nets, a generalisation of vector addition systems introduced by Benedikt et al. [7]. We obtain the same bounds for their coverability problem as in the case of vector addition systems (see Theorem 4.11 and Corollary 4.12), and thereby improve on the \(2\mathsf{EXPSPACE}\) upper bound of [7] by showing that the problem is actually \(\mathsf{EXPSPACE}\)-complete (see Corollary 4.13).
## 2. Well-Quasi-Orders and Ideals
We start by introducing the necessary background on well-quasi-orders, descending chains, and order ideals.
#### Well-Quasi-Orders
A _quasi-order_ \((X,\leq)\) comprises a set \(X\) and a transitive reflexive relation \(\leq\subseteq X\times X\). For a subset \(S\subseteq X\), its _downward closure_ is the set of elements smaller or equal to some element in \(S\), i.e., \(\downarrow\!S\stackrel{\text{def}}{=}\{x\in X\mid\exists y\in S\cdot x\leq y\}\). When \(S=\{y\}\) is a singleton, we write \(\downarrow\!y\) for this set. A subset \(S\subseteq X\) is _downwards-closed_ if \(S=\downarrow\!S\). A _well-quasi-order_ is a quasi-order \((X,\leq)\) such that all the _descending chains_
\[D_{0}\supsetneq D_{1}\supsetneq D_{2}\supsetneq\cdots \tag{1}\]
of downwards-closed subsets \(D_{k}\subseteq X\) are finite [28, 40].
Conversely, the _upward closure_ of a subset \(S\subseteq X\) is \(\uparrow\!S\stackrel{\text{def}}{=}\{x\in X\mid\exists y\in S.y\leq x\}\), and \(S\) is _upwards-closed_ if \(S=\uparrow\!S\). The complement \(X\setminus D\) of a downwards-closed set \(D\) is upwards-closed (and conversely), hence wqos have the _ascending chain condition_ for chains \(U_{0}\subsetneq U_{1}\subsetneq\cdots\) of upwards-closed sets. Furthermore, any upwards-closed set \(U\) over a wqo has a _finite basis_ \(B\) such that \(U=\uparrow\!B\) [28, 40]; without loss of generality, we can take the elements of \(B\) to be minimal and mutually incomparable in \(U\).
A well-studied wqo is \((\mathbb{N}^{d},\sqsubseteq)\) the set of \(d\)-dimensional vectors of natural numbers along with the component-wise (aka product) ordering [16]; see Figure 1 for an
illustration of a descending chain over \(\mathbb{N}^{2}\), which happens to be produced by the backward coverability algorithm for a vector addition system [31, Ex. 3.6].
#### Order Ideals
An _order ideal_ of \(X\) is a downwards-closed subset \(I\subseteq X\), which is _directed_: it is non-empty, and if \(x,x^{\prime}\) are two elements of \(I\), then there exists \(y\) in \(I\) with \(x\leq y\) and \(x^{\prime}\leq y\). Alternatively, order ideals are characterised as the _irreducible_ non-empty downwards-closed sets of \(X\): an order ideal is a non-empty downwards-closed set \(I\) with the property that, if \(I\subseteq D_{1}\cup D_{2}\) for two downwards-closed sets \(D_{1}\) and \(D_{2}\), then \(I\subseteq D_{1}\) or \(I\subseteq D_{2}\).
Over a wqo \((X,\leq)\), any downwards-closed set \(D\subseteq X\) has a canonical decomposition as a finite union of order ideals \(D=I_{1}\cup\dots\cup I_{n}\), where the \(I_{j}\)'s are mutually incomparable for inclusion [10, 24]. We write \(I\in D\) if \(I\) is an order ideal appearing in the canonical decomposition of \(D\), i.e., if it is a maximal order ideal included in \(D\). Then \(D\subseteq D^{\prime}\) if and only if, for all \(I\in D\), there exists \(I^{\prime}\in D^{\prime}\) such that \(I\subseteq I^{\prime}\).
#### Effective Representations over \(\mathbb{N}^{d}\)
Over the wqo \((\mathbb{N}^{d},\sqsubseteq)\), the order ideals are exactly the sets of the form \(\downarrow\!v\stackrel{\text{def}}{=}\{u\in\mathbb{N}^{d}\mid u\sqsubseteq v\}\), where \(v\) ranges over the vectors of \(\mathbb{N}^{d}_{\omega}\stackrel{\text{def}}{=}(\mathbb{N}\cup\{\omega\})^{d}\) for a new top element \(\omega\) with \(n<\omega\) for all \(n\in\mathbb{N}\). We therefore identify an order ideal \(I\) with its representing vector in \(\mathbb{N}^{d}_{\omega}\), and write \(\omega(I)\stackrel{\text{def}}{=}\{1\leq i\leq d\mid I(i)=\omega\}\) for its set of \(\omega\)-components, \(\operatorname{fin}(I)\stackrel{\text{def}}{=}\{1,\ldots,d\}\setminus\omega(I)\) for its set of finite components, \(\dim I\stackrel{\text{def}}{=}|\omega(I)|\) for its dimension, and \(\operatorname{fdim}I\stackrel{\text{def}}{=}|\operatorname{fin}(I)|=d-\dim I\) for its finite dimension.
The order ideals of \(\mathbb{N}^{d}\), when represented as vectors in \(\mathbb{N}^{d}_{\omega}\), are rather easy to manipulate [24]--and thus so are the downwards-closed subsets of \(\mathbb{N}^{d}\) when represented as finite sets of vectors in \(\mathbb{N}^{d}_{\omega}\). For instance,
* \(I\subseteq I^{\prime}\) (as subsets of \(\mathbb{N}^{d}\)) if and only if \(I\sqsubseteq I^{\prime}\) (as vectors in \(\mathbb{N}^{d}_{\omega}\))--which incidentally entails \(\omega(I)\subseteq\omega(I^{\prime})\) and therefore \(\dim I\leq\dim I^{\prime}\); also note that, if \(I\subseteq I^{\prime}\) and \(\dim I=\dim I^{\prime}\), then \(\omega(I)=\omega(I^{\prime})\)--;
* the intersection of two order ideals is again an order ideal, represented by the vector \(I\wedge I^{\prime}\) defined by \((I\wedge I^{\prime})(i)\stackrel{{\mbox{\tiny def}}}{{=}}\min(I(i ),I^{\prime}(i))\) for all \(1\leq i\leq d\);
* the complement of an order ideal \(I\) is the upwards-closed set \(\bigcup_{i\in\operatorname{fin}(I)}\uparrow\bigl{(}(I(i)+1)\cdot\boldsymbol{e }_{i}\bigr{)}\), where \(\boldsymbol{e}_{i}\) denotes the unit vector with "\(1\)" in coordinate \(i\) and "\(0\)" everywhere else.
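These representation-level operations are easy to make concrete; the short Python sketch below is our own illustration (not from the paper), encoding \(\omega\) as `float("inf")` and showing inclusion, intersection, and the finite basis of the complement for the order ideal \((\omega,4)\) of Figure 1.

```python
OMEGA = float("inf")          # encodes the limit element omega

def leq(I, J):
    """Ideal inclusion I <= J, checked componentwise on the N_omega^d representations."""
    return all(a <= b for a, b in zip(I, J))

def meet(I, J):
    """Intersection of two ideals: the componentwise minimum I ^ J."""
    return tuple(min(a, b) for a, b in zip(I, J))

def complement_basis(I):
    """Finite basis of the upwards-closed complement of I: one generator
    (I(i)+1) * e_i per finite component i (third bullet above)."""
    d = len(I)
    return [tuple(I[i] + 1 if j == i else 0 for j in range(d))
            for i in range(d) if I[i] != OMEGA]

# the ideal (omega, 4) proper at step 0 in Figure 1, and a smaller ideal (2, 4)
I, J = (OMEGA, 4), (2, 4)
print(leq(J, I), meet(I, J), complement_basis(I))   # True (2, 4) [(0, 5)]
```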
#### Proper Ideals and Monotony
If \(D\supsetneq D^{\prime}\), then there must be an order ideal \(I\in D\) such that \(I\not\in D^{\prime}\). Coming back to a descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\supsetneq D_{\ell}\), we then say that an order ideal \(I\) is _proper_ at step \(0\leq k<\ell\) if \(I\in D_{k}\) but \(I\not\in D_{k+1}\); at each step \(0\leq k<\ell\), there must be at least one proper order ideal. In Figure 1, \((\omega,4)\) is proper at step \(0\), and more generally \((\omega,4-k)\) is the only proper order ideal at step \(0\leq k<5\).
It turns out that the descending chains arising from some algorithmic procedures, including the backward coverability algorithm for VAS, enjoy additional relationships between their proper order ideals. We say that a descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) is
* _strongly monotone_[35, 7] if, whenever an ideal \(I_{k+1}\) is proper at some step \(k+1\), then there exists \(I_{k}\) proper at step \(k\) such that \(\dim I_{k+1}\leq\dim I_{k}\), and
* in particular \(\omega\)_-monotone_[31] if, whenever an ideal \(I_{k+1}\) is proper at some step \(k+1\), then there exists \(I_{k}\) proper at step \(k\) such that \(\omega(I_{k+1})\subseteq\omega(I_{k})\).
The descending chain depicted in Figure 1 is \(\omega\)-monotone--and thus strongly monotone--, with \(\omega((\omega,4-(k+1)))\subseteq\omega((\omega,4-k))\) for all \(4>k\geq 0\).
#### Controlled Sequences
While finite, descending chains over a wqo can have arbitrary length. Nevertheless, their length can be bounded if we make additional assumptions. We define the _size_ of a downwards-closed subset of \(\mathbb{N}^{d}\) and of an order ideal of \(\mathbb{N}^{d}\) as
\[\|D\|\stackrel{{\mbox{\tiny def}}}{{=}}\max_{I\in D}\|I\|\;, \|I\|\stackrel{{\mbox{\tiny def}}}{{=}}\max_{i\in \operatorname{fin}(I)}I(i)\;. \tag{4}\]
In Figure 1, \(\|D_{0}\|=\|D_{1}\|=\|D_{2}\|=4\), \(\|D_{3}\|=5\), \(\|D_{4}\|=7\), and \(\|D_{5}\|=9\).
Given a _control_ function \(g\colon\mathbb{N}\to\mathbb{N}\) that is monotone (i.e., \(\forall x\leq y.g(x)\leq g(y)\)) and expansive (i.e., \(\forall x.x\leq g(x)\)) along with an _initial size_\(n_{0}\in\mathbb{N}\), we say that a descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) over \(\mathbb{N}^{d}\) is (asymptotically) \((g,n_{0})\)_-controlled_ if, for all \(k\geq 0\),
\[\|D_{k}\|\leq g^{k}(n_{0}) \tag{5}\]
where \(g^{k}(n_{0})\) is the \(k\)th iterate of \(g\) applied to \(n_{0}\)[38]. In particular, \(\|D_{0}\|\leq n_{0}\) initially. In Figure 1, the descending chain is \((g,4)\)-controlled for \(g(x)\stackrel{{\mbox{\tiny def}}}{{=}}x+1\).
## 3. Main Result
In this section, we establish a new bound on the length of controlled strongly monotone descending sequences. This relies on a generalisation of the notion of _thinness_ from Kunnemann et al. [29, Def. 3.6] (see § 3.1), before we can recast their
proof in the setting of strongly monotone descending chains and prove our main result in § 3.2.
### Thinness
Fix a control function \(g\), an initial norm \(n_{0}\), and a dimension \(d\geq 0\). Define inductively the bounds on norms \((N_{i})_{0\leq i\leq d}\) and lengths \((L_{i})_{0\leq i\leq d}\) by
\[N_{0}\stackrel{{\mbox{\tiny def}}}{{=}}n_{0}\;,\qquad\qquad N_{i+1}\stackrel{{\mbox{\tiny def}}}{{=}}g^{L_{i}+1}(n_{0})\;, \tag{6}\] \[L_{0}\stackrel{{\mbox{\tiny def}}}{{=}}0\;,\qquad\qquad L_{i+1}\stackrel{{\mbox{\tiny def}}}{{=}}L_{i}+\prod_{1\leq j\leq i+1}(d-j+1)(N_{j}+1)\;. \tag{7}\]
Beware the abuse of notation, as the bounds above depend on \((g,n_{0})\) and \(d\), but those will always be clear from the context.
**Remark 3.1** (Monotony of \((N_{i})_{0\leq i\leq d}\) and \((L_{i})_{0\leq i\leq d}\)).: By definition, for all \(0\leq i<j\leq d\), \(0\leq L_{i}<L_{j}\), and because \(g\) is assumed monotone expansive, \(n_{0}\leq N_{i}\leq N_{j}\). \(\square\)
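For concreteness, the recurrences (6) and (7) are easily evaluated. The sketch below is our own illustration; it computes the bounds for the parameters of Figure 1, namely \(g(x)=x+1\), \(n_{0}=4\), and \(d=2\).

```python
def bounds(g, n0, d):
    """Compute the sequences (N_i) and (L_i) of equations (6) and (7)."""
    def iterate(k, x):
        for _ in range(k):
            x = g(x)
        return x

    N, L = [n0], [0]
    for i in range(d):
        N.append(iterate(L[i] + 1, n0))      # N_{i+1} = g^{L_i+1}(n0)
        prod = 1
        for j in range(1, i + 2):            # j = 1, ..., i+1
            prod *= (d - j + 1) * (N[j] + 1)
        L.append(L[i] + prod)                # L_{i+1} = L_i + prod
    return N, L

# Parameters of Figure 1: g(x) = x + 1, n0 = 4, d = 2.
N, L = bounds(lambda x: x + 1, 4, 2)
assert N == [4, 5, 17] and L == [0, 12, 228]
# Theorem 3.6 in § 3.2 then bounds the number of strict steps of any such chain
# over N^2 by L_2 + 1 = 229; Figure 1's chain takes only 5.
```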
The following definition generalises [29, Def. 3.6] to handle order ideals and an arbitrary control function and initial norm.
**Definition 3.2** (Thin order ideal).: Let \((g,n_{0})\) be a control function and initial norm and \(d>0\) a dimension. An order ideal \(I\) of \(\mathbb{N}^{d}\) is _thin_ if there exists a bijection \(\sigma\colon\operatorname{fin}(I)\to\{1,\ldots,\operatorname{fdim}I\}\) such that, for all \(i\in\operatorname{fin}(I)\), \(I(i)\leq N_{\sigma(i)}\).
Observe that, if \(I^{\prime}\) is thin, \(I\subseteq I^{\prime}\), and \(\dim I=\dim I^{\prime}\), then \(I\) is thin.
**Remark 3.3** (Number of thin order ideals).: There cannot be more than \(\binom{d}{i}\cdot i!\cdot\prod_{1\leq j\leq i}(N_{j}+1)=\prod_{1\leq j\leq i}( d-j+1)(N_{j}+1)\) distinct thin order ideals of finite dimension \(i\). As will become apparent in the proofs, this is what motivates the definition in (7).
Furthermore, if we let \(\operatorname{Idl}^{\mathsf{thin}}(\mathbb{N}^{d})\) denote the set of thin order ideals of \(\mathbb{N}^{d}\), there is only one thin order ideal of finite dimension \(0\)--namely \((\omega,\ldots,\omega)\)--, and
\[|\operatorname{Idl}^{\mathsf{thin}}(\mathbb{N}^{d})| \leq 1+\sum_{1\leq i\leq d}\prod_{1\leq j\leq i}(d-j+1)(N_{j}+1)\] \[=1+\sum_{1\leq i\leq d}(L_{i}-L_{i-1})\] \[=1+L_{d}-L_{0}=1+L_{d}\;.\]
### Thinness Lemma
The crux of our result is the following lemma.
**Lemma 3.4** (Thinness).: _Consider a \((g,n_{0})\)-controlled strongly monotone descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) of downwards-closed subsets of \(\mathbb{N}^{d}\). If \(I_{\ell}\) is a proper order ideal at some step \(\ell\), then \(I_{\ell}\) is thin and \(\ell\leq L_{\operatorname{fdim}I_{\ell}}\)._
The proof of Lemma 3.4 proceeds by induction over the finite dimension \(\operatorname{fdim}I_{\ell}=d-\dim I_{\ell}\). For the base case where \(I_{\ell}\) has full dimension \(\dim I_{\ell}=d\), then \(I_{\ell}=(\omega,\ldots,\omega)\) is thin and \(D_{\ell}=\mathbb{N}^{d}\) is the full space, which can only occur at step \(\ell=0=L_{0}\). For the induction step, we first establish thinness with the following claim; note that, as just argued, an order ideal of dimension \(d\) is necessarily thin. We then follow with the bound on \(\ell\) to complete the proof of Lemma 3.4.
**Claim 3.5**.: _Let \(0\leq d^{\prime}<d\) and assume that Lemma 3.4 holds for all proper order ideals \(I^{\prime}\) of dimension \(\dim I^{\prime}>d^{\prime}\). If \(I\) is any (not necessarily proper) order ideal of dimension \(\dim I=d^{\prime}\) appearing in the descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\), then \(I\) is thin._
Proof of Claim 3.5.: Let \(k\) be a step where \(I\) appears in the descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\), i.e., \(I\in D_{k}\), and let us write \(I_{k}\stackrel{{\text{\tiny def}}}{{=}}I\). If \(k>0\), since \(D_{k}\subseteq D_{k-1}\), there exists an order ideal \(I_{k-1}\in D_{k-1}\) such that \(I_{k}\subseteq I_{k-1}\). If \(k=0\), or by repeating this argument if \(k>0\), we obtain a chain of order ideals (with decreasing indices)
\[I_{k}\subseteq I_{k-1}\subseteq\cdots\subseteq I_{0} \tag{8}\]
where \(I_{m}\in D_{m}\) for all \(k\geq m\geq 0\). Every order ideal in that chain must have dimension at least \(\dim I_{k}=d^{\prime}\) since they all contain \(I_{k}\). Two cases arise.
1. If every order ideal in the chain (8) has dimension \(\dim I_{k}\), then because the descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) is \((g,n_{0})\)-controlled, we have \(\|I_{0}\|\leq n_{0}=N_{0}\) and we know by Remark 3.1 that \(I_{0}\) is thin. Since \(I_{k}\subseteq I_{0}\) and \(\dim I_{k}=\dim I_{0}\), \(I_{k}\) is also thin.
2. Otherwise there exists a first index \(K\) along the chain (8) where the dimension increases, i.e., such that \(\dim I_{k}<\dim I_{K}\) and \(\dim I_{m}=\dim I_{k}\) for all \(k\geq m>K\). Then \(I_{K}\) is proper, as otherwise \(D_{K+1}\) would contain two distinct but comparable order ideals in its canonical decomposition: \(I_{K+1}\subseteq I_{K}\) and \(\dim I_{K+1}=\dim I_{k}<\dim I_{K}\) indeed imply \(I_{K+1}\subsetneq I_{K}\). By assumption, Lemma 3.4 can be applied to \(I_{K}\) of dimension \(\dim I_{K}>\dim I_{k}=d^{\prime}\), thus \(I_{K}\) is thin and \(K\leq L_{\operatorname{fdim}I_{K}}\). Let us now show that \(I_{K+1}\) is thin, which will also yield that \(I_{k}\) is thin since \(I_{k}\subseteq I_{K+1}\) and \(\dim I_{k}=\dim I_{K+1}\). Since \(\dim I_{K+1}<\dim I_{K}\), we let \(f\stackrel{{\text{\tiny def}}}{{=}}\dim I_{K}-\dim I_{K+1}= \operatorname{fdim}I_{K+1}-\operatorname{fdim}I_{K}>0\). As furthermore \(I_{K+1}\subseteq I_{K}\), \(\omega(I_{K+1})\subsetneq\omega(I_{K})\) and we let \(\{i_{1},\ldots,i_{f}\}\stackrel{{\text{\tiny def}}}{{=}}\omega( I_{K})\setminus\omega(I_{K+1})=\operatorname{fin}(I_{K+1})\setminus\operatorname{ fin}(I_{K})\). Since \(I_{K}\) is thin, there exists a bijection \(\sigma\colon\operatorname{fin}(I_{K})\to\{1,\ldots,\operatorname{fdim}(I_{K})\}\) such that \(I_{K}(i)\leq N_{\sigma(i)}\) for all \(i\in\operatorname{fin}(I_{K})\). We extend \(\sigma\) to a bijection \(\sigma^{\prime}\colon\operatorname{fin}(I_{K})\uplus\{i_{1},\ldots,i_{f}\}\to \{1,\ldots,\operatorname{fdim}I_{K}+f\}\): we let \(\sigma^{\prime}(i)\stackrel{{\text{\tiny def}}}{{=}}\sigma(i)\) for all \(i\in\operatorname{fin}(I_{K})\), and \(\sigma^{\prime}(i_{j})\stackrel{{\text{\tiny def}}}{{=}}\operatorname {fdim}I_{K}+j\) for all \(1\leq j\leq f\). Let us check that \(\sigma^{\prime}\) witnesses the thinness of \(I_{K+1}\). * Because \(I_{K+1}\subseteq I_{K}\), for all those \(i\in\operatorname{fin}(I_{K})\), \(I_{K+1}(i)\leq I_{K}(i)\leq N_{\sigma(i)}=N_{\sigma^{\prime}(i)}\). * Since \(K+1\leq L_{\operatorname{fdim}I_{K}}+1\) and since the descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) is \((g,n_{0})\)-controlled, we have a bound of \(g^{L_{\operatorname{fdim}I_{K}}+1}(n_{0})=N_{\operatorname{fdim}I_{K}+1}\) on all the finite components of \(I_{K+1}\), and in particular \(I_{K+1}(i_{j})\leq N_{\operatorname{fdim}I_{K}+1}\) for all \(1\leq j\leq f\). By Remark 3.1, we conclude that \(I_{K+1}(i_{j})\leq N_{\operatorname{fdim}I_{K}+j}=N_{\sigma^{\prime}(i_{j})}\) for all \(1\leq j\leq f\). [3.5]
Proof of Lemma 3.4.: We have already argued for the base case, so let us turn to the inductive step where \(\dim I_{\ell}<d\). If \(\ell>0\), since our descending chain is strongly monotone, we can find an order ideal \(I_{\ell-1}\) proper at step \(\ell-1\) such that \(\dim I_{\ell}\leq\dim I_{\ell-1}\). If \(\ell=0\), or by repeating this argument otherwise, we obtain a sequence of order ideals (with decreasing indices)
\[I_{\ell},I_{\ell-1},\ldots,I_{0} \tag{9}\]
where, for each \(\ell>k\geq 0\), \(I_{k}\) is proper at step \(k\), and \(\dim I_{k+1}\leq\dim I_{k}\).
Let us decompose our sequence (9) by identifying the first step \(L\) where \(\dim I_{L+1}<\dim I_{L}\); let \(L\stackrel{{\text{\tiny def}}}{{=}}-1\) if this never occurs. After this step, for all \(L\geq k\geq 0\), \(\dim I_{k}>\dim I_{\ell}\). Within the initial segment, for \(\ell\geq k>L\), the dimension \(\dim I_{k}\) remains constant, equal to \(\dim I_{\ell}\), and the induction hypothesis allows us to apply
Claim 3.5 and infer that every order ideal \(I_{k}\) in this initial segment, and in particular \(I_{\ell}\) among them, is thin.
It remains to provide a bound on \(\ell\). The \(\ell-L\) order ideals in the initial segment are thin, and distinct since they are proper, hence by Remark 3.3,
\[\ell\leq L+\prod_{1\leq i\leq\operatorname{fdim}I_{\ell}}(d-i+1)(N_{i}+1)\;. \tag{10}\]
**If \(\boldsymbol{L\geq 0}\):**: we can apply the induction hypothesis to the proper order ideal \(I_{L}\) of finite dimension \(\operatorname{fdim}I_{L}<\operatorname{fdim}I_{\ell}\) along with Remark 3.1 to yield \(L\leq L_{\operatorname{fdim}I_{L}}\leq L_{\operatorname{fdim}I_{\ell}-1}\) and therefore
\[\ell\leq L_{\operatorname{fdim}I_{\ell}-1}+\prod_{1\leq i\leq\operatorname{ fdim}I_{\ell}}(d-i+1)(N_{i}+1)=L_{\operatorname{fdim}I_{\ell}}\;. \tag{11}\]
**If \(\boldsymbol{L=-1}\):**: then (11) also holds since \(L_{\operatorname{fdim}I_{\ell}-1}\geq 0>L\) in (10).
We deduce a general combinatorial statement on the length of controlled strongly monotone descending chains.
**Theorem 3.6** (Length function for strongly monotone descending chains).: _Consider a \((g,n_{0})\)-controlled strongly monotone descending chain \(D_{0}\supsetneq\cdots\supsetneq D_{\ell}\) of downwards-closed subsets of \(\mathbb{N}^{d}\). Then \(\ell\leq L_{d}+1\)._
Proof.: In such a descending chain, either \(\ell=0\leq L_{d}+1\), or \(\ell>0\) and there must be an order ideal \(I\) proper at step \(\ell-1\), and \(I\) has finite dimension at most \(d\). By Lemma 3.4 and Remark 3.1, \(\ell-1\leq L_{\operatorname{fdim}I}\leq L_{d}\) in that case.
### Thin Order Ideals and Filters
Let us conclude this section with some consequences of Lemma 3.4 and Claim 3.5. Whereas thinness was posited _a priori_ in the proof of Künnemann et al. [29, Thm. 3.3] and then shown to indeed allow a suitable decomposition of minimal covering executions and to eventually prove their result, here in the descending chain setting it is an inherent property of all the order ideals appearing in the chain, thereby providing a "natural" explanation for thinness.
**Corollary 3.7**.: _Consider a \((g,n_{0})\)-controlled strongly monotone descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) of downwards-closed subsets of \(\mathbb{N}^{d}\). Then every order ideal appearing in the chain is thin._
Corollary 3.7 also entails a form of thinness of the minimal configurations in the complement of the downwards-closed sets \(D_{k}\). Recall that such a complement is the upward-closure of a finite basis \(B_{k}\stackrel{{\text{\tiny{\rm def}}}}{{=}}\min_{\sqsubseteq} \mathbb{N}^{d}\setminus D_{k}\). Each element \(\boldsymbol{v}\in B_{k}\) is a vector defining a so-called _(principal) order filter_\(\uparrow\!\!\boldsymbol{v}\) of \(\mathbb{N}^{d}\). Let us call a vector \(\boldsymbol{v}\in\mathbb{N}^{d}\)_nearly thin_ if there exists a permutation \(\sigma\colon\{1,\dots,d\}\to\{1,\dots,d\}\) such that, for all \(1\leq i\leq d\), \(\boldsymbol{v}(i)\leq N_{\sigma(i)}+1\). We can relate thin order ideals with nearly thin order filters, which by Corollary 3.7 applies to every vector \(\boldsymbol{v}\in\bigcup_{k}B_{k}\).
**Proposition 3.8**.: _If every order ideal in the canonical decomposition of a downwards-closed set \(D\subseteq\mathbb{N}^{d}\) is thin, then each \(\boldsymbol{v}\in\min_{\sqsubseteq}\mathbb{N}^{d}\setminus D\) is nearly thin._
Proof.: Consider the canonical decomposition \(D=I_{1}\cup\cdots\cup I_{m}\) of \(D\). Then \(U\stackrel{{\mbox{\tiny def}}}{{=}}\mathbb{N}^{d}\setminus D=(\mathbb{N}^{d}\setminus I_{1})\cap\cdots\cap(\mathbb{N}^{d}\setminus I_{m})\), where, for each \(1\leq j\leq m\), \(\mathbb{N}^{d}\setminus I_{j}=\bigcup_{i\in\operatorname{fin}(I_{j})}\uparrow\bigl{(}(I_{j}(i)+1)\cdot\boldsymbol{e}_{i}\bigr{)}\) and \(\boldsymbol{e}_{i}\) denotes the unit vector such that \(\boldsymbol{e}_{i}(i)=1\) and \(\boldsymbol{e}_{i}(j)=0\) for all \(j\neq i\). Distributing intersections over unions, we obtain that
\[U=\bigcup_{(i_{1},\ldots,i_{m})\in\operatorname{fin}(I_{1})\times\cdots\times\operatorname{fin}(I_{m})}\bigcap_{1\leq j\leq m}\uparrow\bigl{(}(I_{j}(i_{j})+1)\cdot\boldsymbol{e}_{i_{j}}\bigr{)}\;. \tag{$*$}\]
For two order filters \(\uparrow\!\!\mathbf{v}\) and \(\uparrow\!\!\mathbf{v}^{\prime}\), \((\uparrow\!\!\mathbf{v})\cap(\uparrow\!\!\mathbf{v}^{\prime})=\uparrow\!\!(\!\mathbf{v} \vee\mathbf{v}^{\prime})\) where \(\mathbf{v}\vee\mathbf{v}^{\prime}\) denotes the component-wise maximum of \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\). Therefore, by \((*)\), any \(\mathbf{v}\in\operatorname{min}_{\sqsubseteq}U\) is of the form
\[\boldsymbol{v}_{i_{1},\ldots,i_{m}}\stackrel{{\mbox{\tiny def}}}{{=}}\bigvee_{1\leq j\leq m}(I_{j}(i_{j})+1)\cdot\boldsymbol{e}_{i_{j}}\] for some \((i_{1},\ldots,i_{m})\in\operatorname{fin}(I_{1})\times\cdots\times\operatorname{fin}(I_{m})\).
For all \(1\leq j\leq m\), because \(I_{j}\) is thin, there exists a bijection \(\sigma_{j}\colon\operatorname{fin}(I_{j})\to\{1,\ldots,\operatorname{fdim}I_{j}\}\) such that, for all \(i\in\operatorname{fin}(I_{j})\), \(I_{j}(i)\leq N_{\sigma_{j}(i)}\). Without loss of generality, we can assume that for all \(i,i^{\prime}\in\operatorname{fin}(I_{j})\), \(I_{j}(i)\leq I_{j}(i^{\prime})\) whenever \(\sigma_{j}(i)<\sigma_{j}(i^{\prime})\).
**Example 3.9** (continuing from p. 9).: Here are suitable bijections witnessing thinness:
\[\sigma_{1} =(1\,2\,3\,4)\;, \sigma_{2} =(1\,3\,2\,4)\;, \sigma_{3} =(2\,1\,4\,3)\;,\] \[\sigma_{4} =(2\,1\,4\,3)\;, \sigma_{5} =(3\,4\,2\,1)\;.\qed\]
For every \(j\in V_{k}\), \(\sigma_{j}^{-1}(\{1,\ldots,k\})\setminus\{1,\ldots,k-1\}\) is nonempty. Therefore it contains an element \(i^{\prime}_{j}\geq k\) such that \(I_{j}(i^{\prime}_{j})\leq N_{k}\). For every \(1\leq j\leq m\) such that \(j\not\in V_{k}\), let \(i^{\prime}_{j}\stackrel{{\mbox{\tiny def}}}{{=}}i_{j}\).
Let us check that \(\boldsymbol{v}_{i^{\prime}_{1},\ldots,i^{\prime}_{m}}\sqsubseteq\boldsymbol{v}\), which will allow us to conclude. Define \(S^{\prime}_{i}\stackrel{{\text{\tiny def}}}{{=}}\{1\leq j\leq m\mid i^{\prime}_{j}=i\}\) for each \(1\leq i\leq d\); then equation \((**)\) holds mutatis mutandis for \(\boldsymbol{v}_{i^{\prime}_{1},\ldots,i^{\prime}_{m}}\) and
**for \(i<k\):**: \(S^{\prime}_{i}=S_{i}\) hence \(\boldsymbol{v}_{i^{\prime}_{1},\ldots,i^{\prime}_{m}}(i)=\boldsymbol{v}(i)\);
**for \(i=k\):**: \(S^{\prime}_{k}=S_{k}\setminus V_{k}=\{j\in S_{k}\mid I_{j}(k)\leq N_{k}\}\) hence \(\boldsymbol{v}_{i^{\prime}_{1},\ldots,i^{\prime}_{m}}(k)\leq N_{k}+1< \boldsymbol{v}(k)\) by definition of \(k\);
**for \(i>k\):**: \(S^{\prime}_{i}=S_{i}\cup\{j\in V_{k}\mid i^{\prime}_{j}=i\}\) hence \(\boldsymbol{v}_{i^{\prime}_{1},\ldots,i^{\prime}_{m}}(i)=\max\{I_{j}(i)+1\mid j \in S^{\prime}_{i}\}=\max(\max\{I_{j}(i)+1\mid j\in S_{i}\},\max\{I_{j}(i)+1 \mid j\in V_{k}\text{ and }i^{\prime}_{j}=i\})\).
* On the one hand, \(\max\{I_{j}(i)+1\mid j\in S_{i}\}=\boldsymbol{v}(i)\).
* On the other hand, \(I_{j}(i^{\prime}_{j})\leq N_{k}\) for all \(j\in V_{k}\) by definition of \(i^{\prime}_{j}\), hence \(\max\{I_{j}(i)+1\mid j\in V_{k}\text{ and }i^{\prime}_{j}=i\}\leq N_{k}+1< \boldsymbol{v}(k)\) by definition of \(k\). As \(\boldsymbol{v}(k)\leq\boldsymbol{v}(i)\) by assumption since \(i>k\), we conclude \(\boldsymbol{v}_{i^{\prime}_{1},\ldots,i^{\prime}_{m}}(i)=\boldsymbol{v}(i)\).
**Example 3.9** (continuing from p. 10).: We have \(\sigma_{2}^{-1}(\{1,2\})=\{1,3\}\) and \(\sigma_{5}^{-1}(\{1,2\})=\{3,4\}\), hence we can pick \(i^{\prime}_{2}\stackrel{{\text{\tiny def}}}{{=}}3\) and \(i^{\prime}_{5}\stackrel{{\text{\tiny def}}}{{=}}4\). This defines \(\boldsymbol{v}_{3,3,4,1,4}\) with stem sets
\[S^{\prime}_{1}=\{4\}\;,\qquad\quad S^{\prime}_{2}=\emptyset\;,\qquad\quad S^{ \prime}_{3}=\{1,2\}\;,\qquad\quad S^{\prime}_{4}=\{3,5\}\;.\]
Then \(\boldsymbol{v}_{3,3,4,1,4}=(2,0,7,7)\sqsubseteq\boldsymbol{v}\) as desired.
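Near thinness of a vector can be tested directly from the definition for small dimensions; the brute-force sketch below is our own illustration, using the bounds \(N_{1}=5\) and \(N_{2}=17\) computed earlier for the Figure 1 parameters.

```python
from itertools import permutations

def nearly_thin(v, N):
    """Test whether some permutation sigma satisfies v[i] <= N[sigma(i)] + 1
    for all i; here N[0], ..., N[d-1] stand for N_1, ..., N_d."""
    d = len(v)
    return any(all(v[i] <= N[sigma[i]] + 1 for i in range(d))
               for sigma in permutations(range(d)))

N = [5, 17]                          # N_1 and N_2 for the Figure 1 parameters
assert nearly_thin((6, 2), N)        # map 6 to N_2 and 2 to N_1
assert not nearly_thin((7, 19), N)   # 19 exceeds N_2 + 1, so no permutation works
```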
## 4. Applications
We describe two applications of Theorem 3.6 in this section. The first application in § 4.2 is to the coverability problem in vector addition systems, and relies on the analysis of the backward coverability algorithm done in [31]. Thus we can indeed recover the improved upper bound of Künnemann et al. [29] for the coverability problem in the more general setting of descending chains, and show that the backward coverability algorithm achieves this \(n^{2^{O(d)}}\) upper bound (see Corollary 4.5).
The second application in § 4.3 focuses on the coverability problem in invertible affine nets, a class introduced by Benedikt et al. [7], who analysed the complexity of the problem through a reduction to zeroness in invertible polynomial automata. We give a direct analysis of the complexity of the backward coverability algorithm, which follows the same lines as in the VAS case, and allows us to improve on the 2EXPSPACE upper bound shown in [7] for the problem, by showing that it is actually EXPSPACE-complete (see Corollary 4.13). This application additionally illustrates the usefulness of considering strongly monotone descending chains rather than the
\(\omega\)-monotone ones, as the descending chains constructed by the backward algorithm for invertible affine nets are in general not \(\omega\)-monotone.
As both applications take place in the framework of well-structured transition systems [1, 21], we start with a quick refresher on this framework, the backward coverability algorithm, and its dual view using downwards-closed sets [31] in the upcoming § 4.1.
### Coverability in Well-Structured Transition Systems
Well-structured transition systems (WSTS) form an abstract family of computational models where the set of configurations is equipped with a well-quasi-ordering "compatible" with the computation steps. This wqo ensures the termination of generic algorithms checking some important behavioural properties like coverability and termination. While the idea can be traced back to the 1980's [20], this framework has been especially popularised through two landmark surveys [1, 21] that emphasised its wide applicability, and new WSTS models keep being invented in multiple areas to this day.
#### 4.1.1. Well-Structured Transition Systems
A _well-structured transition system_ (WSTS) [1, 21] is a triple \((X,\to,\leq)\) where \(X\) is a set of configurations, \(\to\subseteq X\times X\) is a transition relation, and \((X,\leq)\) is a wqo with the following _compatibility_ condition: if \(x\leq x^{\prime}\) and \(x\to y\), then there exists \(y^{\prime}\geq y\) with \(x^{\prime}\to y^{\prime}\).
The coverability problem below corresponds to the verification of safety properties, i.e., to checking that no bad configuration can ever be reached from a given initial configuration \(s\in X\). Here we are given an error configuration \(t\in X\), and we assume that any configuration larger than \(t\) is also an error.
**Problem** (Coverability in well-structured transition systems).: **input:**: a well-structured transition system \((X,\to,\leq)\) and two configurations \(s\) and \(t\) in \(X\)
**question:**: does \(s\)_cover_\(t\), i.e., does there exist \(t^{\prime}\in X\) such that \(s\to^{*}t^{\prime}\geq t\)?
#### 4.1.2. The Backward Coverability Algorithm
The first published version of this algorithm seems to date back to [3], where it was used to show the decidability of coverability in vector addition systems extended with reset capabilities, before it was rediscovered and generalised to well-structured transition systems [1].
_The Algorithm._ Given an instance of the coverability problem, the _backward coverability algorithm_[3, 1, 21] computes (a finite basis for) the upwards-closed set
\[U_{*}\stackrel{{\mbox{\tiny def}}}{{=}}\{x\in X\mid\exists t^{\prime}\geq t\,.\,x\to^{*}t^{\prime}\} \tag{12}\]
of all the configurations that cover \(t\); then \(s\) covers \(t\) if and only if \(s\in U_{*}\). This set is obtained as the limit of the ascending chain \(U_{0}\subseteq U_{1}\subseteq\cdots\) defined by
\[U_{0}\stackrel{{\mbox{\tiny def}}}{{=}}\uparrow t\;,\qquad\qquad U_{k+1}\stackrel{{\mbox{\tiny def}}}{{=}}U_{k}\cup\operatorname{Pre}_{\exists}(U_{k})\;, \tag{13}\]
where \(\operatorname{Pre}_{\exists}(S)\stackrel{{\mbox{\tiny def}}}{{=}}\{x\in X\mid\exists y\in S\,.\,x\to y\}\) denotes the set of immediate predecessors of the configurations in \(S\). By compatibility, each \(U_{k}\) is upwards-closed, and the chain stabilises with \(U_{\ell}=U_{*}\) for some finite \(\ell\), since there is no infinite strictly ascending chain of upwards-closed subsets of
the wqo \((X,\leq)\). In order to turn (13) into an actual algorithm, one needs to make some effectiveness assumptions on \((X,\to,\leq)\), typically that \(\leq\) is decidable and a finite basis for \(\operatorname{Pre}_{\exists}(\uparrow x)\) can be computed for all \(x\in X\)[21, Prop. 3.5].
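To make the chain (13) concrete, here is a minimal Python sketch of the backward coverability algorithm instantiated on a vector addition system, the application developed in § 4.2. It assumes the classic presentation of a \(d\)-dimensional VAS as a finite set of action vectors \(\boldsymbol{a}\in\mathbb{Z}^{d}\), with steps \(\boldsymbol{u}\to\boldsymbol{u}+\boldsymbol{a}\) whenever \(\boldsymbol{u}+\boldsymbol{a}\in\mathbb{N}^{d}\); it is our own illustration rather than a rendering of the pseudocode from the cited references.

```python
def min_basis(vectors):
    """Keep only the componentwise-minimal vectors (a basis of their upward closure)."""
    vectors = set(vectors)
    return {v for v in vectors
            if not any(w != v and all(w[i] <= v[i] for i in range(len(v)))
                       for w in vectors)}

def covers(actions, s, t):
    """Backward coverability: does s cover t in the VAS given by `actions`?
    Computes the chain (13) through minimal bases of the sets U_k."""
    basis = {tuple(t)}                               # basis of U_0, the upward closure of t
    while True:
        preds = set()
        for v in basis:
            for a in actions:
                # minimal u in N^d with u + a componentwise above v
                preds.add(tuple(max(v[i] - a[i], 0) for i in range(len(v))))
        nxt = min_basis(basis | preds)               # basis of U_{k+1} = U_k ∪ Pre_∃(U_k)
        if nxt == basis:                             # the chain has stabilised
            break
        basis = nxt
    return any(all(u[i] <= s[i] for i in range(len(s))) for u in basis)

# Toy 2-dimensional VAS turning two tokens of the first place into one of the second.
actions = [(-2, 1)]
assert covers(actions, (4, 0), (0, 2))       # (4,0) -> (2,1) -> (0,2)
assert not covers(actions, (3, 0), (0, 2))
```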
_A Dual View._ Lazic and Schmitz [31] take a dual view of the algorithm and define from (13) a descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) of the same length where
\[D_{k}\stackrel{{\mbox{\tiny def}}}{{=}}X\setminus U_{k} \tag{14}\]
for each \(k\); this stops with \(D_{*}=X\setminus U_{*}\) the set of configurations that do _not_ cover \(t\). The entire computation in (13) can be recast in this dual view, by setting
\[D_{0}\stackrel{{\mbox{\tiny def}}}{{=}}X\setminus\uparrow t\;,\qquad\qquad D_{k+1}\stackrel{{\mbox{\tiny def}}}{{=}}D_{k}\cap\operatorname{Pre}_{\forall}(D_{k})\;, \tag{15}\]
where, for a set \(S\subseteq X\),
\[\operatorname{Pre}_{\forall}(S)\stackrel{{\mbox{\tiny def}}}{{=}}X\setminus\operatorname{Pre}_{\exists}(X\setminus S)=\{x\in X\mid\forall y\in X\,.\,x\to y\text{ implies }y\in S\}\]
is the set of configurations all of whose successors belong to \(S\).
### Coverability in Vector Addition Systems
**Theorem 4.2**.: _The backward coverability algorithm terminates after at most \(n^{2^{O(d)}}\) iterations on a \(d\)-dimensional VAS encoded in unary._
Proof.: Let \(n\) be the size of the input to the coverability problem; we assume in the following that \(n,d\geq 2\). By Fact 4.1 and due to the unary encoding, the descending chain \(D_{0}\supsetneq D_{1}\supsetneq\dots\supsetneq D_{\ell}=D_{*}\) is \((g,n_{0})\)-controlled for \(g(x)\stackrel{{\text{\tiny def}}}{{=}}x+n\) and \(n_{0}\stackrel{{\text{\tiny def}}}{{=}}n\), and is \(\omega\)-monotone and thus strongly monotone. By Theorem 3.6, \(\ell\leq L_{d}+1\). Let us bound this value.
**Claim 4.3**.: _Let \(g(x)\stackrel{{\text{\tiny def}}}{{=}}x+n\) and \(n_{0}\stackrel{{\text{\tiny def}}}{{=}}n\). Then, for all \(i\leq d\),_
\[N_{i+1} =n\cdot(L_{i}+2)\;, L_{i}+4\leq n^{3^{i}\cdot(\lg d+1)}\;.\]
Proof of Claim 4.3.: In the case of \(N_{i+1}\), by the definition of \(N_{i+1}\) in (6), \(N_{i+1}=g^{L_{i}+1}(n_{0})=n+(L_{i}+1)\cdot n=n\cdot(L_{i}+2)\) as desired.
Regarding \(L_{i}\), we proceed by induction over \(i\). For the base case \(i=0\), \(L_{0}+4=4\leq n^{3^{0}\cdot(\lg d+1)}\) since we assumed \(n,d\geq 2\). For the induction step, by the definition of \(L_{i+1}\) in (7)
\[L_{i+1}+4 =L_{i}+4+\prod_{0\leq j\leq i}(d-j)(N_{j+1}+1)\] \[\leq L_{i}+4+\prod_{0\leq j\leq i}(d-j)\cdot n\cdot(L_{j}+3)\] \[\leq 2\cdot(dn)^{i+1}\cdot\prod_{0\leq j\leq i}(L_{j}+3)\;.\]
Here, since \(n\geq 2\),
\[2\cdot(dn)^{i+1}\leq n^{(i+1)(\lg d+1)+1}\]
and by induction hypothesis for \(j\leq i\)
\[\prod_{0\leq j\leq i}(L_{j}+3)\leq n^{\sum_{0\leq j\leq i}3^{j}(\lg d+1)}\;.\]
Thus, it only remains to see that, since \(i>0\),
\[3^{i+1}\cdot(\lg d+1) =(1+2\cdot\sum_{0\leq j\leq i}3^{j})\cdot(\lg d+1)\] \[\geq(1+3^{0}+3^{i})\cdot(\lg d+1)+\sum_{0\leq j\leq i}3^{j}\cdot( \lg d+1)\] \[\geq(i+1)\cdot(\lg d+1)+1+\sum_{0\leq j\leq i}3^{j}\cdot(\lg d+1 )\;.\]
Thus \(L_{d}+1\leq n^{3^{d}\cdot(\lg d+1)}\) by Claim 4.3, which is in \(n^{2^{O(d)}}\).
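The bounds of Claim 4.3 are easy to confront with the exact recurrences; the following short check is our own illustration for \(n=d=2\).

```python
from math import log2

def check_claim(n, d):
    """Exact N_i, L_i for g(x) = x + n and n0 = n, compared against Claim 4.3."""
    N, L = [n], [0]
    for i in range(d):
        N.append(n + (L[i] + 1) * n)                 # g^{L_i+1}(n) with g(x) = x + n
        prod = 1
        for j in range(1, i + 2):
            prod *= (d - j + 1) * (N[j] + 1)
        L.append(L[i] + prod)
    for i in range(d + 1):
        if i < d:
            assert N[i + 1] == n * (L[i] + 2)
        assert L[i] + 4 <= n ** (3 ** i * (log2(d) + 1))
    return N, L

assert check_claim(2, 2) == ([2, 4, 24], [0, 10, 260])
```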
**Remark 4.4** (Branching or alternating vector addition systems).: The improved upper bound parameterised by the dimension \(d\) in Theorem 4.2 also applies to some extensions of vector addition systems, for which Lazic and Schmitz [31] have shown that the backward coverability algorithm was constructing an \(\omega\)-monotone descending chain controlled as in Fact 4.1, namely
* in [31, claims 6.7 and 6.8] for bottom-up coverability in branching vector addition systems (BVAS)--which is 2EXP-complete [15]--, and
* in [31, claims 5.4 and 5.5] for top-down coverability in alternating vector addition systems (AVAS)--which is 2EXP-complete as well [14].
Recall that \(U_{\ell}\) is the set of configurations that can cover the target \(\boldsymbol{t}\) in at most \(\ell\) steps, hence Theorem 4.2 provides an alternative proof for [29, Thm. 3.3]: if there exists a covering execution, then there is one of length in \(n^{2^{O(d)}}\), from which an algorithm in \(n^{2^{O(d)}}\) follows by [29, Thm. 3.2]. Regarding the optimality of Theorem 4.2, recall that Lipton [33] shows an \(n^{2^{\Omega(d)}}\) lower bound on the length of a minimal covering execution, which translates into the same lower bound on the number \(\ell\) of iterations of the backward coverability algorithm [12, Cor. 2]. Finally, this also yields an improved upper bound on the complexity of the (original) backward coverability algorithm. Here, we can rely on the analysis performed by Bozzelli and Ganty [12, Sec. 3] and simply replace Rackoff's \(n^{2^{O(d\lg d)}}\) bound on the length of minimal covering executions by the bound from Theorem 4.2.
**Corollary 4.5**.: _The backward coverability algorithm runs in time \(n^{2^{O(d)}}\) on \(d\)-dimensional VAS encoded in unary._
Proof.: Let \(n\) be the size of the input to the coverability problem and \(U_{0}\subsetneq U_{1}\subsetneq\dots\subsetneq U_{\ell}=U_{*}\) be the ascending chain constructed by the backward coverability according to (13). By Theorem 4.2, \(\ell\) is in \(n^{2^{O(d)}}\).
Let \(B_{k}\stackrel{{\text{\tiny{\sf def}}}}{{=}}\min_{\sqsubseteq}U_ {k}\) be the minimal basis at each step \(k\). The algorithm computes \(B_{k+1}\) from \(B_{k}\) as per (13) by computing \(\min_{\sqsubseteq}\operatorname{Pre}_{\exists}(\uparrow\!\boldsymbol{v})\) for each \(\boldsymbol{v}\in B_{k}\), adding the elements of \(B_{k}\), and removing any non-minimal vector. Thus each step can be performed in time polynomial in \(n\), \(d\), and the number of vectors in \(B_{k}\). Here, Bozzelli and Ganty's analysis in [12, Sec. 3] shows that \(\|\boldsymbol{v}^{\prime}\|\leq g(\|\boldsymbol{v}\|)\) for all \(\boldsymbol{v}^{\prime}\in\min_{\sqsubseteq}\operatorname{Pre}_{\exists}( \uparrow\!\boldsymbol{v})\), yielding a bound of \(|B_{k}|\leq(g^{k}(n)+1)^{d}\leq((\ell+1)\cdot n+1)^{d}\), which is still in \(n^{2^{O(d)}}\).
We can do slightly better. By Corollary 3.7, all the ideals in the canonical decomposition of \(D_{k}\stackrel{{\text{\tiny{\sf def}}}}{{=}}\mathbb{N}^{d} \setminus U_{k}\) are thin, and in turn Proposition 3.8 shows that all the vectors in \(B_{k}\) are nearly thin. Accordingly, let us denote by \(\operatorname{Fil}^{\mathsf{thin}+1}(\mathbb{N}^{d})\) the set of order filters \(\uparrow\!\boldsymbol{v}\) such that \(\boldsymbol{v}\) is nearly thin. Then \(|B_{k}|\leq|\operatorname{Fil}^{\mathsf{thin}+1}(\mathbb{N}^{d})|\), and the latter is in \(n^{2^{O(d)}}\):
\[|\operatorname{Fil}^{\mathsf{thin}+1}(\mathbb{N}^{d})| \leq d!\cdot\prod_{1\leq i\leq d}(N_{i}+2)\] \[\leq d!\cdot n^{d}\cdot\prod_{0\leq i\leq d-1}(L_{i}+4)\qquad\text{(by Claim 4.3 on $N_{i}$)}\] \[\leq n^{2d+\sum_{0\leq i\leq d-1}3^{i}\cdot(\lg d+1)}\qquad\text{(because $d\leq n$ and by Claim 4.3 on $L_{i}$)}\] \[\leq n^{3^{d}\cdot(\lg d+1)}\;. \tag{16}\]
Therefore, the overall complexity of the backward coverability algorithm is polynomial in \(\ell\), \(\max_{0\leq k\leq\ell}|B_{k}|\), \(n\), and \(d\), which is in \(n^{2^{O(d)}}\).
Observe that the dual version of the backward coverability algorithm enjoys the same upper bound: at each step \(k\), the algorithm computes \(D_{k+1}\) from \(D_{k}\) as per (15); this computation of \(D_{k+1}\) can be performed in time polynomial in \(n\), \(d\), and the number of ideals in the canonical decomposition of \(D_{k}\)[31, Sec. 3.2.1]. By Corollary 3.7 and Remark 3.3, \(|D_{k}|\leq|\operatorname{Id}^{\mathsf{thin}}(\mathbb{N}^{d})|\leq 1+L_{d}\), hence the overall
complexity of the dual algorithm is polynomial in \(L_{d}\), \(n\), and \(d\), which is still in \(n^{2^{O(d)}}\).
The bounds in \(n^{2^{O(d)}}\) for \(\|\boldsymbol{v}\|\leq N_{d}+1\) for all \(\boldsymbol{v}\in\min_{\sqsubseteq}U_{k}\) and for \(|\min_{\sqsubseteq}U_{k}|\leq|\operatorname{Fil}^{\mathsf{thin}+1}(\mathbb{N}^{d})|\) in the previous proof also improve on the corresponding bounds in [43, Thm. 9] and [12, Thm. 2]. Recall that Künnemann et al. [29, Thm. 4.2] show that, assuming the exponential time hypothesis, there does not exist a deterministic \(n^{o(2^{d})}\) time algorithm deciding coverability in unary encoded \(d\)-dimensional VAS, hence the backward coverability algorithm is conditionally optimal.
### Coverability in Affine Nets
Affine nets [22], also known as affine vector addition systems, are a broad generalisation of VAS and Petri nets encompassing multiple extended VAS operations designed for greater modelling power.
#### 4.3.1. Affine Nets
A \(d\)-dimensional (well-structured) _affine net_[22] is a finite set \(\mathcal{N}\) of triples \((\boldsymbol{a},\boldsymbol{A},\boldsymbol{b})\in\mathbb{N}^{d}\times \mathbb{N}^{d\times d}\times\mathbb{N}^{d}\). It defines a well-structured transition system \((\mathbb{N}^{d},\to_{\mathcal{N}},\sqsubseteq)\) with \(\mathbb{N}^{d}\) as set of configurations and transitions \(\boldsymbol{u}\to_{\mathcal{N}}A\cdot(\boldsymbol{u}-\boldsymbol{a})+ \boldsymbol{b}\) for all \(\boldsymbol{u}\) in \(\mathbb{N}^{d}\) and \((\boldsymbol{a},\boldsymbol{A},\boldsymbol{b})\) in \(\mathcal{N}\) such that \(\boldsymbol{u}-\boldsymbol{a}\) is in \(\mathbb{N}^{d}\). This model encompasses notably
* VAS and Petri nets when (each such) \(A\) is the identity matrix \(I_{d}\),
* _reset nets_[2, 3] when \(A\) is component-wise smaller or equal to \(I_{d}\),
* _transfer nets_[13] when the sum of values in every column of \(A\) is one,
* _post self-modifying nets_[42]--also known as _strongly increasing affine nets_[22, 11]--when \(A\) is component-wise larger or equal to \(I_{d}\), and
* _invertible_ affine nets [7] when \(A\) is invertible over the rationals, i.e., \(A\in\operatorname{\mathsf{GL}}_{d}(\mathbb{Q})\).
As in the case of VAS, we will work with a unary encoding, and we let \(\|\mathcal{N}\|\stackrel{{\mbox{\tiny def}}}{{=}}\max\{\|\boldsymbol{a}\|\mid(\boldsymbol{a},A,\boldsymbol{b})\in\mathcal{N}\}\); note that the entries from \(\boldsymbol{b}\) and \(A\) are not taken into account.
**Example 4.6**.: The affine net
\[\mathcal{N}_{1}\stackrel{{\mbox{\tiny def}}}{{=}}\left\{\left(\begin{array}{c}2\\ 0\end{array}\right),\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\left(\begin{array}{c}0\\ 1\end{array}\right)\right\}\]
defines the same WSTS as the \(2\)-dimensional VAS \(\boldsymbol{A}_{\div 2}=\{(-2,1)\}\). The affine net
\[\mathcal{N}_{2}\stackrel{{\mbox{\tiny def}}}{{=}}\left\{\left(\begin{array}{c}2\\ 0\end{array}\right),\left(\begin{array}{cc}1&1\\ 0&0\end{array}\right),\left(\begin{array}{c}0\\ 1\end{array}\right)\right\}\]
performs a transfer from its second component into its first component. The affine net
\[\mathcal{N}_{3}\stackrel{{\mbox{\tiny def}}}{{=}}\left\{\left(\begin{array}{c}2\\ 0\end{array}\right),\left(\begin{array}{cc}1&1\\ 2&0\end{array}\right),\left(\begin{array}{c}0\\ 1\end{array}\right)\right\}\]
sums the values of its first two components into the first one, and puts the double of its first component into its second one.
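A single affine-net step \(\boldsymbol{u}\to_{\mathcal{N}}A\cdot(\boldsymbol{u}-\boldsymbol{a})+\boldsymbol{b}\) is immediate to implement; the sketch below, our own illustration, replays the three nets of Example 4.6 on small configurations.

```python
def step(u, a, A, b):
    """One affine-net step, defined only when u - a stays in N^d."""
    d = len(u)
    if any(u[i] < a[i] for i in range(d)):
        return None                                   # the guard a is not satisfied
    w = [u[i] - a[i] for i in range(d)]
    return tuple(sum(A[i][j] * w[j] for j in range(d)) + b[i] for i in range(d))

a, b = (2, 0), (0, 1)
A1 = ((1, 0), (0, 1))     # the identity matrix: plain VAS behaviour, effect (-2, +1)
A2 = ((1, 1), (0, 0))     # transfers the second component into the first
A3 = ((1, 1), (2, 0))     # invertible over the rationals

assert step((5, 0), a, A1, b) == (3, 1)
assert step((3, 4), a, A2, b) == (5, 1)
assert step((3, 4), a, A3, b) == (5, 3)
assert step((1, 4), a, A1, b) is None    # the guard (2, 0) is not satisfied
```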
The coverability problem for reset VAS was first shown decidable in 1978 by Arnold and Latteux [3] using the backward coverability algorithm, and the same algorithm applies to all affine nets [17, 22]. Its complexity is considerable: the coverability problem already has an Ackermannian complexity in the reset or transfer cases [41, 19, 39]. In the strongly increasing case, Bonnet, Finkel, and Praveen [11,
Lem. 11 and Thm. 13] show how to adapt Rackoff's original argument to derive an upper bound in \(n^{2^{O(d\lg d)}}\) on the length of minimal coverability witnesses, with an \(\mathsf{EXPSPACE}\) upper bound for the problem when \(d\) is part of the input, while in the invertible case, Benedikt et al. [7, Thm. 6] show a \(\mathsf{2EXPSPACE}\) upper bound.
_Control._ Before we turn to the case of invertible affine nets, let us show that the descending chains defined by the backward coverability algorithm for affine nets are controlled, with a control very similar to the VAS case (c.f. Fact 4.1).
**Proposition 4.7**.: _The descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) defined by equations (13-15) for a \(d\)-dimensional affine net \(\mathcal{N}\) and a target vector \(\boldsymbol{t}\) is \((g,n_{0})\)-controlled for \(g(x)\stackrel{{\mbox{\tiny def}}}{{=}}x+\|\mathcal{N}\|\) and \(n_{0}\stackrel{{\mbox{\tiny def}}}{{=}}\|\boldsymbol{t}\|\)._
_Strong Monotonicity._ When dealing with a descending sequence of downwards-closed sets produced by the generic backward coverability algorithm, a key observation made in [31] sometimes allows one to derive monotonicity.
For this, in a WSTS \((X,\rightarrow,\leq)\), define \(\operatorname{Post}_{\exists}(S)\stackrel{{\mbox{\tiny def}}}{{=}}\{y\in X\mid\exists x\in S\,.\,x\to y\}\), and for two order ideals \(I\) and \(I^{\prime}\), write \(I\to I^{\prime}\) if \(I^{\prime}\) appears in the canonical decomposition of \(\downarrow\!\operatorname{Post}_{\exists}(I)\). In the case of affine nets, and identifying order ideals \(I\) with vectors in \(\mathbb{N}_{\omega}^{d}\) with \(\omega+n=\omega-n=\omega\) for all \(n\) in \(\mathbb{N}\) and \(n\cdot\omega=\omega\) for all \(n>0\) (while \(0\cdot\omega=0\)), \(\downarrow\!\operatorname{Post}_{\exists}(I)=\downarrow\{A\cdot(I-\boldsymbol{a})+\boldsymbol{b}\mid(\boldsymbol{a},A,\boldsymbol{b})\in\mathcal{N},I\sqsupseteq\boldsymbol{a}\}\).
**Fact 4.9** ([31, Claim 4.2]).: _Let \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) be a descending chain of downwards-closed sets defined by equations (13-15). If \(I_{k+1}\) is proper at step \(k+1\), then there exists an order ideal \(I\) and an order ideal \(I_{k}\) proper at step \(k\) such that \(I_{k+1}\to I\subseteq I_{k}\)._
**Proposition 4.10**.: _The descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) defined by equations (13-15) for a \(d\)-dimensional invertible affine net \(\mathcal{N}\) and a target vector \(\boldsymbol{t}\) is strongly monotone._
Proof.: Let \(I_{k+1}\) be proper at step \(k+1\). By Fact 4.9, there exists an order ideal \(I\) and an order ideal \(I_{k}\) proper at step \(k\) such that \(I_{k+1}\rightarrow_{\mathcal{N}}I\subseteq I_{k}\). Let us show that \(\dim I_{k+1}\leq\dim I\); as \(\dim I\leq\dim I_{k}\) because \(I\subseteq I_{k}\), this will yield the result.
Since \(I_{k+1}\rightarrow_{\mathcal{N}}I\), there exists \((\boldsymbol{a},A,\boldsymbol{b})\) in \(\mathcal{N}\) such that \(I-\boldsymbol{b}=A\cdot(I_{k+1}-\boldsymbol{a})\). For this to hold, note that for all \(i\in\operatorname{fin}(I)\), the \(i\)th row of \(A\) must be such that \(A(i,j)=0\) for all \(j\in\omega(I_{k+1})\). As \(A\) is invertible, those \((\operatorname{fdim}I)\)-many rows must be linearly independent, hence necessarily \(\operatorname{fdim}I_{k+1}\geq\operatorname{fdim}I\), i.e., \(\dim I_{k+1}\leq\dim I\).
Observe that the proof of Proposition 4.10 does not work for the transfer net \(\mathcal{N}_{2}\) of Example 4.6: \(\left(\begin{array}{c}\omega\\ \omega\end{array}\right)\rightarrow_{\mathcal{N}_{2}}\left(\begin{array}{c} \omega\\ 1\end{array}\right)\); this is exactly the kind of non-monotone behaviour invertibility was designed to prevent. Also observe that \(\left(\begin{array}{c}2\\ \omega\end{array}\right)\rightarrow_{\mathcal{N}_{3}}\left(\begin{array}{c} \omega\\ 1\end{array}\right)\) in the invertible affine net \(\mathcal{N}_{3}\), which is not an \(\omega\)-monotone behaviour: this illustrates the usefulness of capturing strongly monotone descending chains, as [31, Thm. 4.4 and Cor. 4.6] do not apply.
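These ω-behaviours can be replayed with a few lines of ω-arithmetic on ideal representations. The sketch below is our own illustration; it encodes \(\omega\) as `None`, uses \(0\cdot\omega=0\), and checks the two steps \((\omega,\omega)\rightarrow_{\mathcal{N}_{2}}(\omega,1)\) and \((2,\omega)\rightarrow_{\mathcal{N}_{3}}(\omega,1)\) discussed above.

```python
OMEGA = None   # encodes the limit element "omega"

def omul(n, x):
    """n * x over N ∪ {omega}, with 0 * omega = 0."""
    if x is OMEGA:
        return 0 if n == 0 else OMEGA
    return n * x

def oadd(x, y):
    """x + y over N ∪ {omega}."""
    return OMEGA if x is OMEGA or y is OMEGA else x + y

def ideal_post(I, a, A, b):
    """The ideal A·(I - a) + b, assuming the guard is satisfied (I ⊒ a)."""
    d = len(I)
    w = [I[i] if I[i] is OMEGA else I[i] - a[i] for i in range(d)]
    out = []
    for i in range(d):
        acc = 0
        for j in range(d):
            acc = oadd(acc, omul(A[i][j], w[j]))
        out.append(oadd(acc, b[i]))
    return tuple(out)

a, b = (2, 0), (0, 1)
A2 = ((1, 1), (0, 0))      # the matrix of the transfer net N_2
A3 = ((1, 1), (2, 0))      # the matrix of the invertible net N_3
assert ideal_post((OMEGA, OMEGA), a, A2, b) == (OMEGA, 1)
assert ideal_post((2, OMEGA), a, A3, b) == (OMEGA, 1)
```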
_Complexity Upper Bounds._ We are now equipped to analyse the complexity of the backward coverability algorithm in invertible affine nets. Regarding the length \(\ell\) of the chain constructed by the algorithm, by Propositions 4.7 and 4.10 we are in the same situation as in Theorem 4.2 and we can simply repeat the arguments from its proof.
**Theorem 4.11**.: _The backward coverability algorithm terminates after at most \(n^{2^{O(d)}}\) iterations on \(d\)-dimensional invertible affine nets encoded in unary when \(d\geq 2\)._
We deduce two corollaries from Theorem 4.11: one pertaining to the complexity of the backward coverability algorithm in dimension \(d\), which mirrors Corollary 4.5, and one for the coverability problem when \(d\) is part of the input. Let us start with the backward coverability algorithm.
**Corollary 4.12**.: _The backward coverability algorithm runs in time \(n^{2^{O(d)}}\) on \(d\)-dimensional invertible affine nets encoded in unary when \(d\geq 2\)._
Proof.: Theorem 4.11 shows that the length \(\ell\) of the ascending chain \(U_{0}\subsetneq U_{1}\subsetneq\dots\subsetneq U_{\ell}=U_{*}\) constructed by the backward coverability algorithm is at most \(L_{d}+1\), which is in \(n^{2^{O(d)}}\).
Let \(B_{k}\stackrel{{\mbox{\tiny def}}}{{=}}\min_{\sqsubseteq}U_{k}\) denote the minimal basis of \(U_{k}\) at each step \(k\).
Proof of Claim 4.14.: If a coverability pseudo-witness exists, then we claim that for all \(\ell^{\prime}\geq k\geq 0\) there exists \(\mathbf{s}_{k}\sqsupseteq\mathbf{t}_{k}\) such that \(\mathbf{s}=\mathbf{s}_{\ell^{\prime}}\rightarrow_{\mathcal{N}}\mathbf{s}_{\ell^{\prime}-1} \rightarrow_{\mathcal{N}}\cdots\rightarrow_{\mathcal{N}}\mathbf{s}_{k}\), and thus in particular \(\mathbf{s}\rightarrow^{*}_{\mathcal{N}}\mathbf{s}_{0}\geq\mathbf{t}_{0}\) for \(k=0\). We can check this by induction over \(k\). For the base case \(k=\ell^{\prime}\), define \(\mathbf{s}_{\ell^{\prime}}\stackrel{{\text{\tiny{\#}}}}{{=}}\mathbf{s}\). For the induction step \(k\), since \(\mathbf{t}_{k+1}\in\operatorname{Pre}_{\exists}(\uparrow\mathbf{t}_{k})\) there exists \(\mathbf{s}^{\prime}_{k}\sqsupseteq\mathbf{t}_{k}\) such that \(\mathbf{t}_{k+1}\rightarrow_{\mathcal{N}}\mathbf{s}^{\prime}_{k}\); by WSTS compatibility and since \(\mathbf{s}_{k+1}\sqsupseteq\mathbf{t}_{k+1}\), there exists \(\mathbf{s}_{k}\sqsupseteq\mathbf{s}^{\prime}_{k}\) such that \(\mathbf{s}_{k+1}\rightarrow_{\mathcal{N}}\mathbf{s}_{k}\).
Conversely, assume that \(\mathbf{s}\) covers \(\mathbf{t}\) in \(\mathcal{N}\). Then \(\mathbf{s}\in U_{\ell}\), and let \(\ell^{\prime}\leq\ell\) be the least index such that \(\mathbf{s}\in U_{\ell^{\prime}}\). Then either \(\ell^{\prime}=0\), i.e., \(\mathbf{s}\sqsupseteq\mathbf{t}=\mathbf{t}_{0}\) and we are done, or \(\ell^{\prime}>0\). Because \(\mathbf{s}\in U_{\ell^{\prime}}\) there must be some \(\mathbf{t}_{\ell^{\prime}}\in\min_{\sqsubseteq}U_{\ell^{\prime}}\) with \(\mathbf{s}\sqsupseteq\mathbf{t}_{\ell^{\prime}}\), and \(\mathbf{t}_{\ell^{\prime}}\not\in U_{\ell^{\prime}-1}\) as otherwise \(\mathbf{s}\) would be in \(U_{\ell^{\prime}-1}\), contradicting the minimality of \(\ell^{\prime}\). In general, if we have found a sequence \((\mathbf{t}_{j})_{\ell^{\prime}\geq j\geq k>0}\) satisfying (17) until rank \(k+1\) included and know that \(\mathbf{t}_{k}\in(\min_{\sqsubseteq}U_{k})\setminus U_{k-1}\), then either \(k=1\) and \(\mathbf{t}_{1}\in\min_{\sqsubseteq}\operatorname{Pre}_{\exists}(\uparrow\mathbf{t}_{0})\) by definition of \(U_{0}\) and \(U_{1}\) in (13), or \(k>1\) and because \(\mathbf{t}_{k}\not\in U_{k-1}\), there exists \(\mathbf{t}_{k-1}\in\min_{\sqsubseteq}U_{k-1}\) such that \(\mathbf{t}_{k}\in\min_{\sqsubseteq}\operatorname{Pre}_{\exists}(\uparrow\mathbf{t}_{k-1})\), and \(\mathbf{t}_{k-1}\not\in U_{k-2}\) as otherwise we would have \(\mathbf{t}_{k}\) in \(U_{k-1}\). Repeating this process yields a coverability pseudo-witness. [4.14]
By Claim 4.14, a non-deterministic algorithm for coverability can guess and check the existence of a coverability pseudo-witness. By Theorem 4.11, such a pseudo-witness has a length \(\ell^{\prime}\leq\ell\) in \(n^{2^{O(d)}}\). Furthermore, by Claim 4.8 the components in each \(\mathbf{t}_{k}\) in such a pseudo-witness are bounded by \(\|\mathbf{t}\|+\|\mathcal{N}\|\cdot k\leq(\ell+1)\cdot n\), which is still in \(n^{2^{O(d)}}\). Thus exponential space suffices. Note that this also holds when we assume the invertible affine net to be encoded in binary, by substituting \(2^{n}\) for \(n\) in the bound \(n^{2^{O(d)}}\).
**Remark 4.15** (Strictly increasing affine nets).: All the results we have proven for invertible affine nets in this section also hold for strictly increasing affine nets, because the descending chains of downwards-closed sets they generate when running the backward coverability algorithm are \(\omega\)-monotone.
Strictly increasing affine nets [42, 22, 11] are intuitively the affine nets devoid of any form of reset or transfer; in Example 4.6, only \(\mathcal{N}_{1}\) is strictly increasing. As a consequence of this restriction, the descending chains of downwards-closed sets they generate when running the backward coverability algorithm are \(\omega\)-monotone, which yields yet another illustration of the applicability of our results.
**Claim 4.16**.: _The descending chain \(D_{0}\supsetneq D_{1}\supsetneq\cdots\) defined by equations (13-15) for a \(d\)-dimensional strictly increasing affine net \(\mathcal{N}\) and a target vector \(\mathbf{t}\) is \(\omega\)-monotone._
Proof of Claim 4.16.: Let \(I_{k+1}\) be proper at step \(k+1\). By Fact 4.9, there exists an order ideal \(I\) and an order ideal \(I_{k}\) proper at step \(k\) such that \(I_{k+1}\rightarrow_{\mathcal{N}}I\subseteq I_{k}\). Let us show that \(\omega(I_{k+1})\subseteq\omega(I)\); as \(\omega(I)\subseteq\omega(I_{k})\) because \(I\subseteq I_{k}\), this will yield the result.
Since \(I_{k+1}\rightarrow_{\mathcal{N}}I\), there exists \((\mathbf{a},A,\mathbf{b})\) in \(\mathcal{N}\) such that \(I_{k+1}\sqsupseteq\mathbf{a}\) and \(I=A\cdot(I_{k+1}-\mathbf{a})+\mathbf{b}\). Because \(\mathcal{N}\) is strictly increasing, \(A=I_{d}+A^{\prime}\) for some matrix \(A^{\prime}\in\mathbb{N}^{d\times d}\), hence \(I=I_{k+1}-\mathbf{a}+A^{\prime}\cdot(I_{k+1}-\mathbf{a})+\mathbf{b}\). Thus \(I\sqsupseteq(I_{k+1}-\mathbf{a})\) and therefore \(\omega(I)\supseteq\omega(I_{k+1})\). [4.16]
While the EXPSPACE upper bound of Corollary 4.13 was already shown by Bonnet et al. [11], the \(n^{2^{O(d)}}\) bound of Theorem 4.11 for the problem parameterised by \(d\) is
an improvement over the \(n^{2^{O(d\lg d)}}\) bounds of [11, Lem. 11 and Thm. 13], and the bound in Corollary 4.12 for the backward coverability algorithm is new.
|
2305.06897 | AfriQA: Cross-lingual Open-Retrieval Question Answering for African
Languages | African languages have far less in-language content available digitally,
making it challenging for question answering systems to satisfy the information
needs of users. Cross-lingual open-retrieval question answering (XOR QA)
systems -- those that retrieve answer content from other languages while
serving people in their native language -- offer a means of filling this gap.
To this end, we create AfriQA, the first cross-lingual QA dataset with a focus
on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African
languages. While previous datasets have focused primarily on languages where
cross-lingual QA augments coverage from the target language, AfriQA focuses on
languages where cross-lingual answer content is the only high-coverage source
of answer content. Because of this, we argue that African languages are one of
the most important and realistic use cases for XOR QA. Our experiments
demonstrate the poor performance of automatic translation and multilingual
retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA
models. We hope that the dataset enables the development of more equitable QA
technology. | Odunayo Ogundepo, Tajuddeen R. Gwadabe, Clara E. Rivera, Jonathan H. Clark, Sebastian Ruder, David Ifeoluwa Adelani, Bonaventure F. P. Dossou, Abdou Aziz DIOP, Claytone Sikasote, Gilles Hacheme, Happy Buzaaba, Ignatius Ezeani, Rooweither Mabuya, Salomey Osei, Chris Emezue, Albert Njoroge Kahira, Shamsuddeen H. Muhammad, Akintunde Oladipo, Abraham Toluwase Owodunni, Atnafu Lambebo Tonja, Iyanuoluwa Shode, Akari Asai, Tunde Oluwaseyi Ajayi, Clemencia Siro, Steven Arthur, Mofetoluwa Adeyemi, Orevaoghene Ahia, Anuoluwapo Aremu, Oyinkansola Awosan, Chiamaka Chukwuneke, Bernard Opoku, Awokoya Ayodele, Verrah Otiende, Christine Mwase, Boyd Sinkala, Andre Niyongabo Rubungo, Daniel A. Ajisafe, Emeka Felix Onwuegbuzia, Habib Mbow, Emile Niyomutabazi, Eunice Mukonde, Falalu Ibrahim Lawan, Ibrahim Said Ahmad, Jesujoba O. Alabi, Martin Namukombo, Mbonu Chinedu, Mofya Phiri, Neo Putini, Ndumiso Mngoma, Priscilla A. Amuok, Ruqayya Nasir Iro, Sonia Adhiambo | 2023-05-11T15:34:53Z | http://arxiv.org/abs/2305.06897v1 | # AfriQA: Cross-lingual Open-Retrieval
###### Abstract
African languages have far less in-language content available digitally, making it challenging for question answering systems to satisfy the information needs of users. Cross-lingual open-retrieval question answering (XOR QA) systems--those that retrieve answer content from other languages while serving people in their native language--offer a means of filling this gap. To this end, we create AfriQA, the first cross-lingual QA dataset with a focus on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African languages. While previous datasets have focused primarily on languages where cross-lingual QA _augments_ coverage from the target language, AfriQA focuses on languages where cross-lingual answer content is the _only_ high-coverage source of answer content. Because of this, we argue that African languages are one of the most important and realistic use cases for XOR QA. Our experiments demonstrate the poor performance of automatic translation and multilingual retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA models. We hope that the dataset enables the development of more equitable QA technology.1
Footnote 1: The data is available at: [https://github.com/masakhane-io/afriqa](https://github.com/masakhane-io/afriqa).
## 1 Introduction
Question Answering (QA) systems provide access to information (Kwiatkowski et al., 2019) and increase accessibility in a range of domains, from healthcare and health emergencies such as COVID-19 (Moller et al., 2020; Morales et al., 2021) to legal queries (Martinez-Gil, 2021) and financial questions (Chen et al., 2021). Many of these applications are particularly important in regions where information and services may be less accessible and where language technology may thus help to reduce the burden on the existing system. At the same time, many people prefer to access information in their local languages--or simply do not speak a
language supported by current language technologies (Amano et al., 2016). To benefit the more than three billion speakers of under-represented languages around the world, it is thus crucial to enable the development of QA technology in local languages.
Standard QA datasets mainly focus on English (Joshi et al., 2017; Mihaylov et al., 2018; Kwiatkowski et al., 2019; Sap et al., 2020). While some reading comprehension datasets are available in other high-resource languages (Ruder and Sil, 2021), only a few QA datasets (Clark et al., 2020; Asai et al., 2021; Longpre et al., 2021) cover a typologically diverse set of languages--and very few datasets include African languages (see Table 1).
In this work, we lay the foundation for research on QA systems for one of the most linguistically diverse regions by creating AfriQA, the first QA dataset for 10 African languages. AfriQA focuses on open-retrieval QA where information-seeking questions2 are paired with retrieved documents in which annotators identify an answer if one is available (Kwiatkowski et al., 2019). As many African languages lack high-quality in-language content online, AfriQA employs a cross-lingual setting (Asai et al., 2021) where relevant passages are retrieved in a high-resource language spoken in the corresponding region and answers are translated into the source language. To ensure utility of this dataset, we carefully select a relevant source language (either English or French) based on its prevalence in the region corresponding to the query language. AfriQA includes 12,000+ examples across 10 languages spoken in different parts of Africa. The majority of the dataset's questions are centered around entities and topics that are closely linked to Africa. This is an advantage over simply translating existing datasets into these languages. By building a dataset from the ground up that is specifically tailored to African languages and their corresponding cultures, we are able to ensure better contextual relevance and usefulness of this dataset.
Footnote 2: These questions are **information-seeking** in that they are written without seeing the answer, as is the case with real users of question answering systems. We contrast this with the reading comprehension task where the question-writer sees the answer passage prior to writing the question; this genre of questions tends to have both higher lexical overlap with the question and elicit questions that may not be of broad interest.
We conduct baseline experiments for each part of the open-retrieval QA pipeline using different translation systems, retrieval models, and multilingual reader models. We demonstrate that cross-lingual retrieval still has a large deficit compared to automatic translation and retrieval; we also show that a hybrid approach of sparse and dense retrieval improves over either technique in isolation. We highlight interesting aspects of the data and discuss annotation challenges that may inform future annotation efforts for QA. Overall, AfriQA proves challenging for state-of-the-art QA models. We hope that AfriQA encourages and enables the development and evaluation of more multilingual and equitable QA technology. The dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license3.
Footnote 3: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)
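The hybrid retrieval mentioned above combines a sparse lexical signal (e.g. BM25) with a dense embedding signal. The exact fusion used in the paper is not described in this excerpt, so the following Python sketch only illustrates one common, generic recipe: a linear interpolation of min-max-normalised scores; all names and the weighting parameter `alpha` are our own assumptions.

```python
def minmax(scores):
    """Min-max normalise a mapping from passage id to retrieval score."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(sparse_scores, dense_scores, alpha=0.5):
    """Rank passages by interpolating normalised sparse and dense scores;
    alpha weights the dense signal."""
    sparse_n, dense_n = minmax(sparse_scores), minmax(dense_scores)
    docs = set(sparse_n) | set(dense_n)
    fused = {doc: (1 - alpha) * sparse_n.get(doc, 0.0) + alpha * dense_n.get(doc, 0.0)
             for doc in docs}
    return sorted(fused, key=fused.get, reverse=True)

# Toy example with three candidate passages.
sparse = {"p1": 12.3, "p2": 7.1, "p3": 0.4}    # e.g. BM25 scores
dense = {"p1": 0.52, "p2": 0.81, "p3": 0.77}   # e.g. dot-product scores
assert hybrid_rank(sparse, dense, alpha=0.5) == ["p2", "p1", "p3"]
```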
In summary, we make the following contributions:
* We introduce the first cross-lingual question answering dataset with 12,000+ questions across 10 geographically diverse African languages. This dataset directly addresses the deficit of African languages in existing datasets.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & **QA?** & **CLIR?** & **Open Retrieval?** & **\# Languages** & **\# African Languages** \\ \hline XQA (Liu et al., 2019) & ✓ & ✓ & ✓ & 9 & Nil \\ XOR QA (Asai et al., 2021) & ✓ & ✓ & ✓ & 7 & Nil \\ XQuAD (Artetxe et al., 2020) & ✓ & ✗ & ✗ & 11 & Nil \\ MLQA (Lewis et al., 2020) & ✓ & ✗ & ✗ & 7 & Nil \\ MKQA (Longpre et al., 2021) & ✓ & ✗ & ✓ & 26 & Nil \\ TyDi QA (Clark et al., 2020) & ✓ & ✗ & ✓ & 11 & 1 \\ AmQA (Abedissa et al., 2023) & ✓ & ✗ & ✗ & 1 & 1 \\ KenSwQuAD (Wanjawa et al., 2023) & ✓ & ✗ & ✗ & 1 & 1 \\ \hline AfriQA (Ours) & ✓ & ✓ & ✓ & 10 & 10 (see Table 3) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of the Dataset with Other Question Answering Datasets.** This table provides a comparison of the current dataset used in the study with other related datasets. The first, second, and third columns, “QA”, “CLIR”, and “Open Retrieval”, indicate whether the dataset is question answering, cross-lingual or open retrieval, respectively. The fourth column, “# Languages”, shows the total number of languages in the dataset. The final column lists the African languages present in the dataset.
* We conduct a comprehensive analysis of the linguistic properties of the 10 languages, which is crucial to take into account when formulating questions in these languages.
* Finally, we conduct a comprehensive evaluation of the dataset for each part of the open-retrieval QA pipeline using various translation systems, retrieval models, and multilingual reader models.
## 2 AfriQA
The AfriQA dataset was created by researchers from Masakhane4--a not-for-profit community that promotes the representation and coverage of under-resourced African languages in NLP research--in collaboration with Google. We show examples of the data in Table 2. In § 2.1, we provide an overview of the 10 languages, discussing their linguistic properties, while § 2.2 and § 2.3 describe the data collection procedure and quality control measures put in place to ensure the quality of the dataset.
Footnote 4: [https://www.masakhane.io/](https://www.masakhane.io/)
### Discussion of Languages
African languages have unique typologies, grammatical structures, and phonology, many of them being tonal and morphologically rich [1]. We provide an overview of the linguistic properties of the ten languages in AfriQA that are essential to consider when crafting questions for QA systems.
**Bemba** is a morphologically rich language like many Bantu languages, which attaches affixes to the headword to change grammatical forms such as tenses, negation, and plurality when formulating questions. Negation is typically expressed using three morphemes: "ta-" (e.g. **tab**aleelanda - They are not speaking), "-shi-" (e.g. ab**ash**ile**elanda - who is not talking ), and "kaana" (e.g. ukuk**aaana**lya - not eating). The present tense is typically indicated by "ali" (e.g. Ninaani **ali** kateeka wa caalo ca Zambia? - Who is the president of Zambia? ) and past tense by "aali" (e.g. Ninaani **aali** kateeka wa caalo ca Zambia? - Who was the former president of Zambia? ). Plurality is indicated by prefixes attached to the stem of the noun, which vary according to the noun class they belong to. For example, "u-**mu**-ntu" (person), and "a-**ba**-ntu" (people). Typical question wh-words used are "cinshi" (what), "naani" (who), "liaisa" (when), "mulandunshi" (why), "ciisa" (which), "kwi/kwiisa" (where), and "shaani" (how).
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**lang** & **Question \(Q_{L}\)** & **Relevant Passage \(P_{pl}\)** & **Answer \(A_{pl}\)** \\ & (_Translation \(Q_{pl}\)_) & & (_Translation \(A_{L}\)_) \\ \hline \hline \multirow{3}{*}{hau} & Jahohi nawa ne a kasar Malaysia? & The states and federal territories of Malaysia are the principal & \\ & banga? (_How many states are there in Malaysia?_) & administrative divisions of Malaysia. Malaysia is a federation & \\ & _in Malaysia?_) & of **13** states (Negeri) and 3 federal territories. & \\ \hline \multirow{3}{*}{bem} & Bushe Mwanawasa stadium ingisha & The Levy Mwanawasa Stadium is a multi-purpose stadium in & \\ & abantu banga? (_What is the capacity of Mwanawasa Stadium?_) & Ndola, Zambia. It is used mostly for football matches. The & 49,800 people \\ & _Mwanawasa Stadium?_) & stadium has a capacity of 49,800 people. & \\ \hline \multirow{3}{*}{wol} & Man po moo niroo agpowum Softbal? & Ce sport est un descendant direct du baseball (afin de & \\ & (_Quel sport resemble beaucoup au softball?_) & differencier les deux) mais differee de cernier par differents & baseball (_Bas-bal_) \\ \cline{1-1} \cline{2-5} & Kwenzeka namuphi unyaka & & \\ & uMilio Omkhuul waseLondon? & Great Fire of London The Great Fire of London was a major & 1666 \\ \cline{1-1} zul & (_In what year did the Great Fire_ & conflagration that swept through the central parts of London & \\ & _of London occur?_ ) & from Sunday 2 September to Thursday, 6 September 1666 & \\ \cline{1-1} \cline{2-5} & The fire gutted the medieval City of London inside the wall. & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Table showing selected questions, relevant passages, and answers in different languages from the dataset. It also includes the human-translated versions of both questions and answers. For the primary XOR QA task, systems are expected to find the relevant passage among all Wikipedia passages, not simply the gold passage shown above.
**Fon** is an isolating language in terms of morphology typology. For changes in grammatical forms such as tenses, negation, and plurality when formulating questions, a new word is added to express this change. For example, "x" is added for negation, "xoxo" to indicate past tense, and "le"for plurality. Common question wh-words are Et\(\epsilon\)(what), M\(\epsilon\) (who), Hwet\(\epsilon\)nu (when), Aniwu (why), qe t (which) and Fite (where).
**Hausa** is the only Afro-Asiatic language in AfriQA. It typically makes use of indicative words for changes to the grammatical forms within a sentence, such as negation, tenses, and plurality. For negation, the indicative words "ba/ba a" (not) and "banda" (except) are used. For tenses, "tsohon" (was) and "yanzu" (is) are used to indicate past and present tenses. Plurality has complex forms which often require the deletion of the last vowel of the word and the addition of a suffix (like "una", "aye" and "oli"). For example, "hula" (cap) - "huluna" (caps), "mace" (girl) - "mataye" (girls). Typical question wh-words used are "me/ya" (what), "wa"(who), "yaushe" (when), "dan me/akan me" (why), "wanne" (which), "ina/ a ina" (where), and "yaya/qaqa" (how).
**Igbo** is a morphologically rich language and most changes in grammatical forms (negations, questions) can be embedded in a single word or by varying the tone. For example, a suffix "ghi" often signifies a negation. However, there is no affix to indicate plurality, the count is often specified after the word. Question words are often preceded by "kedu" or "gini" like "kedu/gini" (what), "onye/kedu onye" (who), "kedu mgebe" (when), "gini mere/gini kpatara" (why), "kedu nke" (which), "ebee" (where), and "kedu ka" or "kedu eu" (how).
**Kinyarwanda** is a morphologically rich language with several grammatical features such as negation, tenses, and plurals that are expressed as changes to morphemes in a word. For example, the plural of "**umuntu**" (person) is "**abantu**" (people). According to [10], Kinyarwanda lacks an overt question particle or a syntactic movement process to form polar questions (yes/no). Thus, Kinyarwanda makes use of prosodic and toncological processes to differentiate between declarative and polar questions. Question words typically used are "iki" (what), "nde/inde" (who), 'ryari'" (when), "kihie/uwuhe" (which), "hehe" (where), and "gute" (how).
**Swahili** is a morphologically rich language that typically has several morphemes to incorporate changes to grammatical forms such as negation, tenses and plurality. For example, "ni" (is), "alikuwa" (was/former), and "atakuwa" (will be) indicate present, past, and future tenses. Similar to Kinyarwanda, changes to the prefix indicate plurality, for example, "**mtu**" (person) -- "**watu**" (people), "gari" (car) -- "**ma**gari"(cars). A question word can be placed at the beginning or end of the question sentence, for example, "amekuja nani?" (who has come?) and "nani amekuja?" (who has come?). The question word "gani" requires a noun modifier and can be placed at the beginning or end of the sentence. Other question words often used are "nini" (what), "nani" (who), "lini" (when), "kwanini" (why), "wapi", (where), and "vipi" (how).
**Twi** is a dialect of the Akan language and AfriQA includes the Asante variant. Akan makes extensive use of different affixes for different grammatical changes such as negation, plurality, and tenses. For example, the suffix "n" is added to the root word to express negation. Similarly, plurality is indicated by replacing the first two letters of a root word with "mm" or "nn", and in some cases, a suffix "nom" can be used instead of a prefix. A few common question words used are "eden"(what), "hwan" (who), "daben" (when), "aden" (why), "dehen" (which), "chenfa" (where), "sen" (how).
**Wolof** is an agglutinative language; however, it does not use affixes attached to the headword as in Bantu languages. Instead, it makes use of a dependent such as a determiner that is not attached to the headword. Changes to the grammatical form like negation, tenses, and plurality are captured by this dependent word. For incorporating past tense, "oon" is attached to the end of the verb, while for plurality, "yi" or "ay" is attached before or after the word. For example, "xar mi" (a sheep) - "xar yi" (the sheep). For negation, the suffix "ul" is added at the end of the verb, for example "man reew moo **nekk** ci Afrig" (which one of these countries is **located** in Africa?) -- "man reew moo **nekkul** ci Afrig." (which one of these countries is **not located** in Africa?). A few common question words used are "lan" (what), "kan" (who), "kañ" (when), "lu tax" (why), "ban" (which), "fan" (where), and "naka" (how).
**Yoruba** has a derivational morphology that entails affixation, reduplication, and compounding. However, there is no affix to indicate plurality in the language; a number is instead specified with a separate token. Yoruba employs polar question words such as "nje", "se", "abi", "sebi" (for English question words "do" or "is", "are", "was" or "were") and content question markers such as "tani" (who), "kini" (what), "nibo" (where), "elo/meloo" (how many), "bauwo" (how is), "kilode" (why), and "igba/nigba" (when). Negation can be expressed with "ko". Phonological rules must be followed when adapting a loanword to Yoruba. For instance, if the loanword has consonant clusters, a vowel might be added in between the clusters or the phonological structures of the clusters might be modified.
**Zulu** is a very morphologically-rich language where several grammatical features such as tense, negation, and the plurality of words are indicated through prefixes or suffixes. Negation is typically indicated by the prefix "nga-". The present tense is indicated by an affix after the subject concord. For example, "ya" or "sa" indicates present tense (as in "_Ngivadalla_" - I am playing), while past tense is indicated by a suffix, for example "e" or "ile" (as in "_Ngikhathaille_" - I was tired). The most commonly used question words are "yini" (what), "ubani" (who), "nini" (when), "kungani" (why), "yiliphi" (which), "kuphi" (where), "kanjani" (how), and "venza" (do).
### Data Collection Procedure
For each of the 10 languages in AfriQA, a team of 2-6 native speakers was responsible for the data collection and annotation. Each team was led by a coordinator. The annotation pipeline consisted of 4 distinct stages: 1) question elicitation in an African language; 2) translation of questions into a pivot language; 3) answer labeling in the pivot language based on a set of candidate paragraphs; and 4) answer translation back to the source language. All data contributions were compensated financially.
#### 2.2.1 Question Elicitation
The TyDi QA methodology [10] was followed to elicit locally relevant questions. Team members were presented with prompts including the first 250 characters of the most popular Wikipedia5 articles in their languages, and asked to write factual or procedural questions for which the answers were not contained in the prompts. Annotators were encouraged to follow their natural curiosity. This annotation process avoids excessive and artificial overlap between the question and answer passage, which can often arise in data collection efforts for non-information-seeking QA tasks such as reading comprehension.6 For Fon and Bemba where there is no in-language Wikipedia, team members were presented with prompts relevant to Benin and Zambia from the French and English Wikipedia respectively, and asked to generate questions in their native languages. For Swahili, questions elicited in TyDi QA, which remained unanswered in the original dataset were used with light curation from the Swahili team for correctness. These questions remained unanswered because the TyDi QA annotator team was not able to find a candidate paragraph in Swahili to answer them. The question elicitation was carried out via simple spreadsheets.
Footnote 5: [https://www.wikipedia.org/](https://www.wikipedia.org/)
Before moving on to the second stage, team coordinators reviewed elicited questions for grammatical correctness and suitability for the purposes of information-seeking QA.7
Footnote 7: Reading comprehension differs from information-seeking QA as question-writers see the answer prior to writing the question and thus tests understanding of the answer text rather than the general ability to provide a correct answer.
#### 2.2.2 Question Translation
Elicited questions were translated from the original African languages into pivot languages following [1]. English was used as the pivot language for all languages except Fon and Wolof, for which French was used (see Table 3). Where possible, questions elicited by one team member were allocated to a different team member for translation to further ensure that only factual or procedural questions that are grammatically correct make it into the final dataset. This serves as an additional validation layer for the elicited questions.
#### 2.2.3 Answer Retrieval
Using the translated questions as queries, Google Programmable Search Engine9 was used to retrieve Wikipedia paragraphs that are candidates to contain an answer in the corresponding pivot language.
The Mechanical Turk interface10 was used to show candidate paragraphs to team members who were then asked to identify 1) the paragraph that contains an answer and 2) the exact minimal span of the answer. In the case of polar questions, team members had to select "Yes" or "No" instead of the minimal span. In cases where candidate paragraphs did not contain the answer to the corresponding question, team members were instructed to select the "No gold paragraph" option.
Footnote 10: The Mechanical Turk _interface_ was used, but no Mechanical Turk _workers_ were employed—all annotations were carried out by team members.
As with question elicitation, team members went through a phase of training, which included a group meeting where guidelines were shared and annotators were walked through the labeling tool. Two rounds of in-tool labeling training were conducted.
#### 2.2.4 Answer Translation
To obtain answers in the African languages, we translated the answers in the pivot languages to the corresponding African languages. We allocated the task of translating the answers labeled by team members to different team members in order to ensure accuracy. Translators were instructed to minimize the span of the translated answers. In cases where the selected answers were incorrect or annotators failed to select the minimum span, we either removed the question, corrected the answer, or re-annotated the question using the annotation tool.
### Quality Control
To ensure completeness, quality, and suitability of the dataset, we implemented rigorous quality control measures at every stage of the dataset creation process. We recruited only native speakers of the languages as annotators and team coordinators. Prior to eliciting questions in their native languages, annotators underwent three rounds of training in question elicitation using English prompts. Each annotator received personalized feedback during each training round, with a focus on ensuring that the elicited questions were factual and that the answers were not present in the prompts. Only annotators that achieved a minimum accuracy rate of 90% were permitted to proceed with the question elicitation in their native languages. For annotators who were unable to achieve the target percentage, additional training rounds with one-on-one instruction were provided. Both annotators and team coordinators participated in the question elicitation training.
All language teams consisted of at least 3 members, including a coordinator, except for Fon and Kinyarwanda teams, which had 2 members. This was done to ensure that the questions elicited by one team member were translated by another team member for quality control purposes. During the question translation phase, annotators were asked to flag questions that were not factual. These questions were either corrected or removed from the datasets. Similarly, during the answer labeling phase, annotators were provided with comment options to indicate if a question was unsuitable for the datasets, which were then used to filter out questions. Furthermore, language team coordinators reviewed the question-and-answer pairs alongside their translations, while central project managers reviewed the translations for consistency. Common issues were identified, such as answer-span length, accidental selection of Yes/No when the question is not polar or vice versa, and wrong answer selection. Span lengths were fixed in post-production, while wrong answers or polar question misunderstandings resulted in questions being removed from the dataset.
### Final Dataset
The statistics of the dataset are presented in Table 3, which includes information on the languages, their corresponding pivot languages, and the total number of questions collected for each language. The final dataset consists of a total of 12,239 questions across 10 different languages, with 8,892 corresponding question-answer pairs. We observed a high answer coverage rate, with only 27% of the total questions being unanswerable. This can be attributed to the lack of relevant information on Wikipedia, especially for named entities with sparse information. Despite this sparsity, we were able to find answers for over 60% of the questions in most of the languages in our collection.
## 3 Tasks and Baselines
As part of our evaluation for AfriQA, we follow the methodology proposed in Asai et al. (2021) and assess its performance on three different tasks: XOR-Retrieve, XOR-PivotLanguageSpan, and XOR-Full. Each task poses unique challenges for cross-lingual information retrieval and question answering due to the low-resource nature of many African languages.
### XOR-Retrieve
The XOR-Retrieve task focuses on cross-lingual passage retrieval. Specifically, given a question \(q_{x}\) in language \(X\), the goal is to find a set of passages in a pivot language \(Y\) that contains an answer to the question. This task is particularly challenging for African languages due to the limited availability of resources, which makes it difficult to retrieve relevant passages in the source language or pivot language. For our experiments, we measure the retrieval effectiveness using recall@\(k\), as defined in Karpukhin et al. (2020), where \(k\in\{10,20,100\}\). The recall@\(k\) is calculated as the percentage of questions for which the answer span appears in one of the top \(k\) retrieved passages.
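To make the metric concrete, the following is a minimal sketch of how recall@\(k\) can be computed from a ranked run; the data structures (a ranked list of passage texts per question and a single gold answer string, matched by simple string containment) are illustrative assumptions rather than the official evaluation script.

```python
from typing import Dict, List

def recall_at_k(retrieved: Dict[str, List[str]],
                answers: Dict[str, str],
                k: int) -> float:
    """Percentage of questions whose answer appears in one of the
    top-k retrieved passages (string containment as a simple proxy)."""
    hits = 0
    for qid, passages in retrieved.items():
        answer = answers[qid].lower()
        if any(answer in p.lower() for p in passages[:k]):
            hits += 1
    return 100.0 * hits / len(retrieved)

# Toy usage with two questions and k = 2.
retrieved = {"q1": ["Lusaka is the capital of Zambia.", "Zambia is in Africa."],
             "q2": ["The Great Fire of London was in 1666.", "London is large."]}
answers = {"q1": "Lusaka", "q2": "1666"}
print(recall_at_k(retrieved, answers, k=2))  # 100.0
```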
**Retrieval Corpora:** We use Wikipedia as the retrieval corpus for the XOR experiments. Specifically, we use processed Wikipedia dumps in English and French as our retrieval passage corpora, as these are our pivot languages. More information on the processing details can be found in Appendix A.
### XOR-PivotLanguageSpan
This task is designed to address the challenge of answering questions in language \(X\), using passages in a pivot language \(Y\). Specifically, given a question \(q_{x}\) in language \(X\), the goal is to identify a set of passages in language \(Y\) that contain the answer to \(q_{x}\) and extract the answer span \(a_{y}\) from these passages. We also include baselines for extracting the answer span from annotated gold passages for that question. We evaluate the effectiveness of our predictions using the Exact Match (EM) accuracy and F1 metrics, as outlined in Rajpurkar et al. (2016). This evaluation is based on how much the predicted answer spans match the token set of the correct answer.
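For reference, a minimal sketch of the SQuAD-style Exact Match and token-level F1 computation (Rajpurkar et al., 2016) is shown below; the normalization steps (lowercasing, stripping punctuation and English articles) follow the standard SQuAD evaluation and may differ in detail from what is applied to non-English answers.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the 13 states", "13 states"),      # 1.0
      f1_score("13 states of Malaysia", "13 states")) # ~0.67
```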
### XOR-Full
This task is similar to XOR-PivotLanguageSpan, with the difference being that we are trying to find answers to a question in the same language as the question. Specifically, given a question \(q_{x}\) in language \(X\), the goal is to find an answer span \(a_{x}\) in the same language while leveraging passages in a pivot language \(Y\) and translating the answer back to the question language. We evaluate this task using the same metrics (F1 and EM) as the XOR-PivotLanguageSpan task. In addition, we also include BLEU scores to measure the degree of overlap between translated answer spans and ground-truth human translations.
## 4 Experiments
In this section, we describe the different baseline translation, retrieval, and reading comprehension systems.
### Translation Systems
A common approach to cross-lingual question answering is to translate queries from the source language into a target language, which is then used to find an answer in a given passage. This approach requires the use of translation systems that can accurately translate the queries from one language to another. For our experiments, we explore the use of different translation systems as baselines for AfriQA. We consider human translation, Google Translate, and open-source translation models such as NLLB (NLLB Team et al., 2022) and fine-tuned M2M-100 models (Adelani et al., 2022) in zero-shot settings.
\begin{table}
\begin{tabular}{l r r|l r|l r|r r|r} \hline \hline
**Source** & \multirow{2}{*}{**ISO**} & **Pivot** & **African** & \multirow{2}{*}{**Script**} & **\# Native** & \multirow{2}{*}{**Train**} & **Dev** & **Test** & **\% Unanswerable** \\
**Language** & & **Language** & & & & **Speakers** & & & **Questions** \\ \hline Bemba & bem & English & South, East \& Central & Latin & 4M & 502 & 503 & 314 & 0.41 \\ Fon & fon & French & West & Latin & 2M & 427 & 428 & 386 & 0.22 \\ Hausa & hau & English & West & Latin & 63M & 435 & 436 & 300 & 0.36 \\ Igbo & ibo & English & West & Latin & 27M & 417 & 418 & 409 & 0.18 \\ Kinyarwanda & kin & English & Central & Latin & 15M & 407 & 409 & 347 & 0.26 \\ Swahili & swa & English & East \& Central & Latin & 98M & 415 & 417 & 302 & 0.34 \\ Twi & twi & English & West & Latin & 9M & 451 & 452 & 490 & 0.12 \\ Wolof & wol & French & West & Latin & 5M & 503 & 504 & 334 & 0.38 \\ Yorübá & yor & English & West & Latin & 42M & 360 & 361 & 332 & 0.21 \\ Zulu & zul & English & South & Latin & 27M & 387 & 388 & 325 & 0.26 \\ \hline Total & — & — & — & — & 292M & 4333 & 4346 & 3560 & 0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Dataset information:** This table contains key information about the AfriQA Dataset
Below is a breakdown of the different machine translation systems.
**Google Machine Translation.** We use Google Translate because it is readily available and provides out-of-the-box translation for 7 out of 10 languages in our dataset. Although Google Translate provides a strong translation baseline for many of the languages, we cannot guarantee the future reproducibility of these translations as it is a product API and is constantly being updated. For our experiments, we use the translation system as of February 2023. Note that while Google Translate supports 133 languages, it does not include Bemba, Fon, or Wolof; this speaks to the very low-resource nature of the languages included in this work and the difficulty of building systems for them.
**NLLB.** NLLB is an open-source translation system trained on 100+ languages and provides translation for all the languages in AfriQA. At the time of release, NLLB provides state-of-the-art translation in many languages and covers all the languages in our dataset. For our experiments, we use the 1.3B-parameter NLLB model.11
Footnote 11: [https://huggingface.co/facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B)
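As a rough illustration of how queries can be machine-translated in this setup, the sketch below translates a Hausa question into English with the NLLB checkpoint via Hugging Face Transformers; the FLORES-200 language codes, generation settings, and the example question are illustrative choices rather than the exact configuration used for AfriQA.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-1.3B"  # checkpoint referenced in the footnote above
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="hau_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

query = "Jihohi nawa ne a kasar Malaysia?"  # Hausa, similar to the Table 2 example
inputs = tokenizer(query, return_tensors="pt")

# Force the decoder to start generating in English (FLORES-200 code eng_Latn).
eng_id = tokenizer.convert_tokens_to_ids("eng_Latn")
generated = model.generate(**inputs, forced_bos_token_id=eng_id, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```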
**MAFAND M2M-100.** MAFAND M2M-100 is an adaptation of the M2M-100 (Fan et al., 2021) machine translation model to 16 African languages in the news domain (Adelani et al., 2022). Each translation direction (e.g., yor-eng) was fine-tuned on a few thousand (2.5k-30K) parallel sentences in the news domain.
Table 4 shows the BLEU score of the different translation systems on the test set of AfriQA, evaluated against the human-translated queries. Google Translate performs the best on the languages it supports while NLLB 1.3B achieves slightly poorer performance with a broader language coverage.
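The BLEU comparison against human-translated queries can be reproduced with a standard toolkit; below is a minimal sketch using sacreBLEU at the corpus level, where the hypothesis and reference lists are placeholders.

```python
import sacrebleu

# Machine-translated queries (hypotheses) vs. human translations (references).
hypotheses = ["How many states are in Malaysia?",
              "What is the capacity of the Mwanawasa Stadium?"]
references = [["How many states are there in Malaysia?",
               "What is the capacity of Mwanawasa Stadium?"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```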
### Passage Retrieval
We present two baseline retrieval systems: translate-retrieve and cross-lingual baselines. In the translate-retrieve baseline, we first translate the queries using the translation systems described in §4.1. The translated queries are used to retrieve relevant passages using three different retrieval systems: BM25, multilingual Dense Passage Retriever (mDPR), and a hybrid combination of BM25 and mDPR. Alternatively, the cross-lingual baseline directly retrieves passages in the pivot language without the need for translation using a multilingual dense retriever.
**BM25.** BM25 (Robertson and Zaragoza, 2009) is a classic term-frequency-based retrieval model that matches queries to relevant passages using the frequency of word occurrences in both queries and passages. We use the BM25 implementation provided by Pyserini (Lin et al., 2021) with default hyperparameters k1 = 0.9, b = 0.4 for all languages.
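A minimal sketch of this BM25 baseline with Pyserini is shown below; the index path is a placeholder for a locally built index over the pivot-language Wikipedia passages, while `set_bm25(k1=0.9, b=0.4)` matches the hyperparameters stated above (assuming a recent Pyserini release).

```python
from pyserini.search.lucene import LuceneSearcher

# Placeholder path to a Lucene index built over the English Wikipedia passages.
searcher = LuceneSearcher("indexes/enwiki-passages")
searcher.set_bm25(k1=0.9, b=0.4)

query = "How many states are there in Malaysia?"  # translated query
hits = searcher.search(query, k=100)
for rank, hit in enumerate(hits[:3], start=1):
    print(rank, hit.docid, round(hit.score, 2))
```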
**mDPR.** We evaluate the performance of mDPR, a multilingual adaptation of the Dense Passage Retriever (DPR) model (Karpukhin et al., 2020). In mDPR, we replace the BERT model in DPR with multilingual BERT (mBERT) which is fine-tuned on the MS MARCO passage ranking dataset (Bajaj et al., 2018). While this approach has been found effective for monolingual retrieval (Zhang et al., 2022), we also investigate its potential for cross-lingual retrieval by using original language queries for passage retrieval and translated queries for monolingual retrieval. Retrieval is performed using the Faiss flat index implementation provided by Pyserini.
**Sparse-Dense Hybrid.** We also explore sparse-dense hybrid baselines, a combination of sparse (BM25) and dense (mDPR) retrievers. We use a linear combination of both systems to generate a reranked list of passages for each question.
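Since the exact fusion weights are not specified here, the sketch below illustrates one simple way to linearly combine normalized BM25 and mDPR scores into a reranked list; the interpolation weight `alpha` and the min-max normalization are illustrative assumptions.

```python
def hybrid_rerank(bm25_hits, dense_hits, alpha=0.5, k=100):
    """Linearly combine (min-max normalized) sparse and dense scores per docid."""
    def normalize(hits):
        scores = {d: s for d, s in hits}
        lo, hi = min(scores.values()), max(scores.values())
        return {d: (s - lo) / (hi - lo + 1e-9) for d, s in scores.items()}

    sparse, dense = normalize(bm25_hits), normalize(dense_hits)
    fused = {d: alpha * sparse.get(d, 0.0) + (1 - alpha) * dense.get(d, 0.0)
             for d in set(sparse) | set(dense)}
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)[:k]

# Toy usage: (docid, score) pairs from the two retrievers.
bm25 = [("doc1", 12.3), ("doc2", 9.8), ("doc3", 7.5)]
mdpr = [("doc2", 78.1), ("doc4", 70.4), ("doc1", 65.0)]
print(hybrid_rerank(bm25, mdpr, alpha=0.5, k=3))
```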
\begin{table}
\begin{tabular}{l l l c c} \hline \hline
**Source** & **Target** & \multirow{2}{*}{**GMT**} & \multirow{2}{*}{**NLLB**} & \multirow{2}{*}{**M2M-100**} \\
**lang** & & & & \\ \hline \hline bem & eng & — & **24.4** & — \\ fon & fre & — & **16.6** & 8.7 \\ hau & eng & **55.2** & 44.6 & 26.3 \\ ibo & eng & **48.3** & 46.3 & 34.1 \\ kin & eng & **44.9** & 43.1 & — \\ swa & eng & **54.0** & 53.2 & 34.7 \\ twi & eng & **33.0** & 30.1 & 15.7 \\ wol & fre & — & **16.6** & 12.7 \\ yor & eng & **32.7** & 30.6 & 10.6 \\ zul & eng & **50.2** & 45.4 & 33.3 \\ \hline avg & — & **45.5** & 35.1 & 22.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Translation BLEU Scores: BLEU score of some translation systems on the test set for the answer translation task. Note that Google Translate is not yet available in all languages, due to their very low-resource nature.**
### Answer Span Prediction
To benchmark models' answer selection capabilities on AfriQA, we combine different translation, extractive, and generative QA approaches.
**Extractive QA on Gold Passages.** In this approach, we extract the answer span from passages that have been manually annotated in both French and English, using both original and translated queries. We used AfroXLMR Alabi et al. (2022) as a backbone to train our extractive QA models. The models were trained on SQuAD 2.0 Rajpurkar et al. (2016) and FQuAD d'Hoffschmidt et al. (2020) separately.
**Generative QA on Gold Passages.** To evaluate the performance of generative question answering, we utilize mT5-base Xue et al. (2021) fine-tuned on SQuAD 2.0 Rajpurkar et al. (2016) and evaluate it using both translated and original queries. The model was provided with the queries and the gold passages that were annotated using a template prompt and generates the answers to the questions.
**Extractive QA on Retrieved Passages.** For XOR-PivotLanguageSpan baselines, we employed an extractive question-answering model that extracts the answer span from the output passages produced by the various retrieval baselines outlined in §4.2. The model is trained to extract answer spans from each passage, along with the probability indicating the likelihood of each answer. The answer span with the highest probability is selected as the correct answer. We trained a multilingual DPR reader model, which was initialized from mBERT and trained on Natural Questions Kwiatkowski et al. (2019).
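The span-selection step can be sketched with a generic extractive reader from Hugging Face Transformers: score a span in each of the top retrieved passages and keep the most confident one. The checkpoint below is a publicly available multilingual QA model used purely as a stand-in for the mBERT-based DPR reader described above.

```python
from transformers import pipeline

# Placeholder multilingual extractive reader (the paper's reader is an
# mBERT-based DPR reader trained on Natural Questions).
reader = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

question = "How many states are there in Malaysia?"
passages = ["Malaysia is a federation of 13 states and 3 federal territories.",
            "The states and federal territories are the principal divisions of Malaysia."]

# Score a candidate span in every passage and keep the most confident one.
candidates = [reader(question=question, context=p) for p in passages]
best = max(candidates, key=lambda c: c["score"])
print(best["answer"], round(best["score"], 3))
```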
## 5 Results and Analysis
### XOR-Retrieve Results
We present the retrieval results for recall@10 and recall@100 in Table 5.12 The table includes retriever results using different question translations and retrieval systems. We also report the performance with both original and human-translated queries. The table shows that hybrid retrieval using human translation yields the best results for all languages, with an average recall@10 of 73.9 and recall@100 of 86.2. In isolation, mDPR retrieval outperforms BM25 for all translation types. This table also enables us to compare the effectiveness of different translation systems in locating relevant passages for cross-lingual question answering in African languages. This is illustrated in Figure 1, showing retriever recall rates for different translation types at various cutoffs using mDPR.
Footnote 12: For recall@k retrieval results, we assume that there is only one gold passage despite the possibility of other retrieved passages containing the answer.
We observe that human translation yields better accuracy than all other translation types, indicating that the current state-of-the-art machine translation systems still have a long way to go in accurately translating African languages. Google Translate shows better results for the languages where it is available, while the NLLB model provides better coverage. The cross-lingual retrieval model that retrieves passages using questions in their original language is the least effective of all the model types. This illustrates that the cross-lingual representations learned by current retrieval methods are not yet of sufficient quality to enable accurate retrieval across different languages.
### XOR-PivotLanguageSpan Results
**Gold Passage Answer Prediction.** We first evaluate the extractive and generative QA setting using gold passages. We present F1 and Exact Match results using different methods to translate the query in Table 6 and Table 7. On both approaches, human translation of the queries consistently outperforms using machine-translated queries, which outperforms using queries in their original language. The generative setting using mT5 yields slightly better results on average compared to the extractive setting across different translation systems.
Figure 1: Graph of retriever recall@k for different translation systems. The scores shown in this graph are from mDPR retrieval.
**Retrieved Passages Answer Prediction.** We now evaluate performance using retrieved passages. We present F1 and Exact Match results with different translation-retriever combinations in Table 8. We extract the answer spans from only the top-10 retrieved passages for each question using an extractive multilingual reader model (see §4.3).
\begin{table}
\begin{tabular}{l c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{HT} & \multicolumn{2}{c|}{GMT} & \multicolumn{2}{c|}{NLLB} & \multicolumn{2}{c}{Crosslingual} \\ & F1 & EM & F1 & EM & F1 & EM & F1 & EM \\ \hline bem & **48.8** & **41.7** & — & — & 38.5 & 32.0 & 2.9 & 1.1 \\ fon & **41.4** & **28.5** & — & — & 23.4 & 15.3 & 5.1 & 2.3 \\ hau & **58.5** & **49.0** & 53.5 & 45.7 & 50.9 & 42.7 & 25.8 & 22.3 \\ ibo & **66.6** & **59.2** & 59.8 & 53.3 & 60.2 & 53.3 & 41.7 & 34.7 \\ kin & **60.8** & **43.8** & 57.3 & 40.9 & 58.8 & 42.9 & 25.5 & 20.2 \\ swa & **52.3** & **42.6** & 48.9 & 40.8 & 49.2 & 41.2 & 29.4 & 23.5 \\ twi & **55.4** & **45.3** & 42.0 & 33.7 & 40.1 & 33.1 & 5.3 & 3.5 \\ wol & **44.6** & **36.1** & — & — & 21.8 & 16.9 & 3.9 & 2.8 \\ yor & **54.9** & **49.8** & 48.9 & 45.1 & 47.9 & 43.0 & 11.9 & 7.8 \\ zul & **60.2** & **50.8** & 57.4 & 48.9 & 55.6 & 46.5 & 24.7 & 20.9 \\ \hline avg & **54.5** & **44.7** & 46.0 & 38.6 & 44.6 & 36.7 & 17.6 & 13.9 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Generative Gold Passages Answer Prediction:** Comparison of F1 and Exact Match Accuracy scores for generative answer span prediction on the test set using mT5-base (Xue et al., 2020) as the backbone.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{HT} & \multicolumn{2}{c|}{GMT} & \multicolumn{2}{c|}{NLLB} & \multicolumn{2}{c}{Crosslingual} \\ & F1 & EM & F1 & EM & F1 & EM & F1 & EM \\ \hline bem & **38.2** & **29.5** & — & — & 30.0 & 21.9 & 0.4 & 0.4 \\ fon & **53.8** & **40.4** & — & — & 37.5 & 26.7 & 13.4 & 6.0 \\ hau & **60.9** & **52.7** & 54.4 & 47.7 & 50.9 & 43.7 & 27.7 & 23.7 \\ ibo & **68.2** & **60.6** & 62.1 & 55.0 & 62.8 & 56.2 & 29.2 & 24.7 \\ kin & **56.8** & **38.9** & 50.8 & 36.0 & 51.3 & 36.6 & 22.7 & 17.9 \\ swa & **45.2** & **37.9** & 44.6 & 37.9 & 45.2 & 38.1 & 31.6 & 24.6 \\ twi & **51.2** & **41.8** & 39.2 & 31.1 & 34.3 & 30.0 & 3.4 & 2.5 \\ wol & **45.2** & **33.9** & — & — & 33.2 & 26.0 & 1.8 & 0.9 \\ yor & **45.1** & **38.6** & 36.0 & 31.7 & 32.3 & 28.0 & 6.0 & 3.8 \\ zul & **59.1** & **49.2** & 56.0 & 48.6 & 53.6 & 45.8 & 17.0 & 13.5 \\ \hline avg & **52.4** & **42.4** & 42.9 & 36.0 & 43.1 & 35.3 & 15.3 & 11.8 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Extractive Gold Passages Answer Prediction:** Comparison of F1 and Exact Match Accuracy scores for extractive answer span prediction on the test set using AfroXLMR-base (Alabi et al., 2022) as the backbone.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Human Translation**} & \multicolumn{2}{c|}{**GMT**} & \multicolumn{2}{c|}{**NLLB**} & \multicolumn{2}{c|}{**M2M-100**} & \multicolumn{2}{c}{**Crosslingual**} \\
**lang** & **BM25** & **mDPR** & **Hybrid** & **BM25** & **mDPR** & **BM25** & **mDPR** & **BM25** & **mDPR** & **mDPR** \\ \hline & \multicolumn{8}{c}{Recall@10} \\ \hline bem & 55.7 & 67.5 & **72.3** & — & — & 52.2 & **59.8** & — & — & 14.7 \\ fon & 66.3 & 69.4 & **70.7** & — & — & 43.9 & **48.7** & 39.9 & 43.3 & 28.5 \\ hau & 58.0 & 65.7 & **72.7** & 53.3 & **60.3** & 52.0 & 59.7 & 36.7 & 44.3 & 13.7 \\ igb & 70.4 & 74.3 & **82.9** & 65.5 & **71.2** & 64.8 & 68.0 & 62.1 & 67.5 & 25.4 \\ kin & 59.1 & 66.3 & **75.5** & 53.6 & **61.1** & 53.0 & 58.8 & — & — & 15.6 \\ swa & 46.0 & 61.9 & **67.6** & 45.0 & **60.9** & 43.1 & 58.3 & 39.1 & 54.6 & 20.9 \\ twi & 61.8 & 66.7 & **75.3** & 56.1 & **58.0** & 50.4 & 54.1 & 45.7 & 49.4 & 21.4 \\ wol & 61.4 & 67.7 & **68.6** & — & — & 35.0 & **36.5** & 34.4 & 35.0 & 13.8 \\ yor & 55.1 & 66.6 & **71.7** & 52.1 & **59.0** & 50.9 & 57.5 & 36.8 & 35.5 & 21.4 \\ zul & 59.7 & 70.2 & **76.3** & 57.2 & **66.2** & 51.5 & 64.6 & 45.5 & 60.0 & 14.2 \\ \hline avg & 59.4 & 67.6 & **73.4** & 54.7 & **62.4** & 49.7 & 56.6 & 42.5 & 48.7 & 19.0 \\ \hline \hline \end{tabular}
\begin{tabular}{l|c c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{Recall@100} \\ \hline bem & 76.8 & 81.9 & **84.7** & — & — & 70.4 & **74.2** & — & — & 37.3 \\ fon & 78.8 & 79.3 & **80.1** & — & — & **60.3** & 59.3 & 59.6 & 59.3 & 46.9 \\ hau & 77.7 & 83.3 & **84.7** & 77.7 & **79.3** & 75.0 & 77.7 & 58.3 & 64.3 & 34.3 \\ igb & 87.0 & 89.7 & **94.6** & 85.6 & **87.5** & 84.8 & 83.9 & 82.4 & 83.4 & 50.1 \\ kin & 78.1 & 81.3 & **87.0** & 75.2 & **78.1** & 74.1 & 77.0 & — & — & 30.3 \\ swa & 70.9 & 80.5 & **82.1** & 68.1 & **79.8** & 68.2 & 77.2 & 64.2 & 76.2 & 40.1 \\ twi & 78.4 & 82.9 & **85.7** & 71.6 & **83.7** & 70.0 & 72.5 & 61.8 & 63.1 & 38.4 \\ wol & 82.6 & 82.6 & **84.7** & — & — & **56.0** & 55.1 & 57.2 & 53.
The model assigns a probability to each answer span, and we select the answer with the highest probability as the final answer.
Our results show that hybrid retrieval using human-translated queries achieves the best performance across all languages on average. Using human-translated queries generally outperforms using translations by both Google Translate and NLLB, regardless of the retriever system used. In terms of retrieval methods, mDPR generally performs better than BM25, with an average gain of 3 F1 points across different translation types. These results highlight the importance of carefully selecting translation-retriever combinations to achieve the best answer span prediction results in cross-lingual question answering.
### XOR-Full Results
Each pipeline consists of components for question translation, passage retrieval, answer extraction, and answer translation. From Table 9, we observe that Google machine translation combined with mDPR is the most effective. This is followed by a pipeline combining NLLB translation with mDPR.
## 6 Related Work
**Africa NLP.** In parallel with efforts to include more low-resource languages in NLP research Costa-jussa et al. (2022); Ruder (2020), demand for NLP that targets African languages, which represent more than 30% of the world's spoken languages Ogueji et al. (2021), is growing. This has resulted in the creation of publicly available multilingual datasets targeting African languages for a variety of NLP tasks such as sentiment analysis Muhammad et al. (2023); Shode et al. (2022), language identification Adebara et al. (2022), data-to-text generation Gehrmann et al. (2022), topic classification Adelani et al. (2023); Hedderich et al. (2020), machine translation Adelani et al. (2022); Nekoto et al. (2020), and NER Eiselen (2016); Adelani et al. (2021, 2022).
Datasets for QA and Information Retrieval tasks have also been created. They are, however, very few and cater to individual languages Abedissa et al. (2023); Wanjawa et al. (2023) or a small subset of languages spoken in individual countries Daniel et al. (2019); Zhang et al. (2022). Given the region's large number of linguistically diverse and information-scarce languages, multilingual and cross-lingual datasets are encouraged to catalyze research efforts. To the best of our knowledge, there are no publicly available cross-lingual open-retrieval African language QA datasets.
**Comparison to Other Resources.** Multilingual QA datasets have paved the way for language models to simultaneously learn across multiple languages, with both reading comprehension Lewis et al. (2020) and other QA datasets Longpre et al. (2021); Clark et al. (2020) predominantly utilizing publicly available data sources such as Wikipedia, SQuAD, and the Natural Questions dataset.
\begin{table}
\begin{tabular}{r|c|c c c c c c c c c|c} \hline \hline & & \multicolumn{8}{c}{Pivot Language Span F1} \\ \hline
**Query Translation** & **Retrieval** & **bem** & **fon** & **hau** & **ibo** & **kin** & **swa** & **twi** & **wol** & **yor** & **zul** & avg \\ \hline HT & BM25 & 29.2 & **11.4** & 31.4 & 43.0 & 33.8 & 24.3 & 38.4 & 15.4 & 28.9 & 32.8 & 28.9 \\ HT & mDPR & 32.5 & 11.0 & **35.8** & 44.8 & 35.4 & 28.2 & 40.7 & 14.7 & 31.7 & **36.5** & 31.1 \\ HT & Hybrid & **34.7** & 11.3 & 35.5 & **46.1** & **39.2** & 27.5 & **41.8** & **16.2** & **32.4** & 34.6 & **32.0** \\ GMT & BM25 & — & — & 21.0 & 38.6 & 28.3 & 24.7 & 27.7 & — & 21.7 & 31.6 & 27.7 \\ GMT & mDPR & — & — & 31.5 & 39.3 & 35.3 & **29.1** & 31.1 & — & 22.9 & 36.0 & 32.2 \\ NLLB & BM25 & 23.8 & 3.6 & 24.6 & 37.6 & 29.3 & 25.2 & 25.7 & 4.4 & 17.3 & 26.8 & 19.8 \\ NLLB & mDPR & 24.1 & 5.1 & 27.2 & 39.6 & 33.3 & 25.9 & 28.2 & 5.2 & 21.4 & 30.4 & 24.0 \\ \hline & & \multicolumn{8}{c}{Pivot Language Span EM} \\ \hline HT & BM25 & 21.4 & **8.0** & 24.0 & 31.1 & 17.3 & 17.5 & 25.3 & 10.2 & 21.4 & 23.1 & 19.9 \\ HT & mDPR & 23.2 & 7.0 & 26.7 & 32.5 & 19.3 & **20.9** & 27.6 & 10.8 & 22.9 & 24.0 & 21.5 \\ HT & Hybrid & **25.2** & 7.3 & 26.3 & **33.3** & **22.2** & 19.5 & **28.2** & **11.1** & **23.2** & 23.1 & **21.9** \\ GMT & BM25 & — & — & **27.8** & 30.3 & 16.1 & 18.2 & 17.8 & — & 16.6 & 21.5 & 21.2 \\ GMT & mDPR & — & — & 22.7 & 30.1 & 20.7 & 20.5 & 20.4 & — & 16.6 & **24.9** & 22.3 \\ NLLB & BM25 & 14.6 & 0.8 & 19.0 & 28.9 & 15.6 & 18.5 & 15.9 & 3.3 & 12.7 & 19.1 & 13.8 \\ NLLB & mDPR & 14.3 & 2.1 & 20.7 & 29.1 & 18.7 & 18.9 & 18.2 & 2.7 & 14.5 & 20.6 & 16.0 \\ \hline \hline \end{tabular}
\end{table}
Table 8: F1 and EM scores on pivot language answer generation using an extractive multilingual reader model with different query translation and retrieval methods.
To address the information scarcity of the typically used data sources for low-resource languages, cross-lingual datasets Liu et al. (2019); Asai et al. (2021) emerged that translate between low-resource and high-resource languages, thus providing access to a larger information retrieval pool which decreases the fraction of unanswerable questions. Despite these efforts, however, the inclusion of African languages remains extremely rare, as shown in Table 1, which compares our dataset to other closely related QA datasets. TyDi QA features Swahili as the sole African language out of the 11 languages it covers.
In recent years, efforts to create cross-lingual information retrieval datasets that include African languages have resulted in the creation of datasets such as AfriCLIRMatrix Ogundepo et al. (2022) and CLIRMatrix Sun and Duh (2020) which feature 15 and 5 African languages respectively. These CLIR datasets however are not specific to QA and are synthetically generated from Wikipedia.
## 7 Conclusion
In this work, we take a step toward bridging the information gap between native speakers of many African languages and the vast amount of digital information available on the web by creating AfriQA, the first cross-lingual question-answering dataset focused on African languages. AfriQA is an open-retrieval question answering dataset with 12,000+ questions across 10 African languages. We evaluate our dataset on cross-lingual retrieval and reading comprehension tasks.
We anticipate that AfriQA will help improve access to relevant information for speakers of African languages. By leveraging the power of cross-lingual question answering, we hope to bridge the information gap and promote linguistic diversity and inclusivity in digital information access. Overall, this work represents a crucial step towards democratizing access to information and empowering underrepresented African communities by providing tools to engage with digital content in their native languages.
## Acknowledgements
We would like to thank Google Cloud for providing us access to computational resources through free cloud credits. We are grateful to Google Research for funding the dataset creation. Finally, we thank Knowledge4All for their administrative support throughout the project.
## Contributions
In this section, we provide more details about the contributions of each author.
**Data Annotation**: Andre Niyongabo Rubungo, Boyd Sinkala, Daniel Abidemi Aijsafe, Emeka Felix Onwuegbuzia, Emile Niyomutabazi, Eunice Mukonde, Falalu Ibrahim LAWAN, Habib MBOW, Ibrahim Said Ahmad, Jesujoba O. Alabi, Martin Namukombo, Mbonu Chinedu Emmanuel, Mofelouwa Adeyemi, Mofya Phiri, Ndumiso Mngoma, Neo Putini, Orevaoghene Ahia, Priscilla Amondi Amuok, Ruqayya Nasir Iro,Sonia Adhiambo, Albert Njoroge Kahira, Aremu Anuuluwapo, Ayodee Awokoya, Bernard Opoku, Chiamaka Chukwuneke, Christine Mwase, Clemencia Siro, Oyinkansola Fiyinfoluwa Awosan, Steven Arthur, Shamsuddeen Hassan Muhammad, Tunde Oluwaseyi Ajayi, Verrah Otiende, Chris Emezue, Claytone Sikasote, David Adelani, Happy Buzaaba, Ignatius Ezeani, Rooweither Mabuya, Salomey Osei, Abdou Aziz DIOP, Bonaventure F. P. Dossou, Gilles Hacheme
**Language Team Coordination**: Chris Emezue, Claytone Sikasote, David Adelani, Happy Buzaaba, Ignatius Ezeani, Rooweither Mabuya, Salomey Osei, Abdou Aziz DIOP, Albert Njoroge Kahira, Shamsuddeen Hassan Muhammad, Bonaventure F. P. Dossou, Gilles Hacheme
**Paper Writing**: Ogundepo Odunayo, David Adelani, Jonathan H. Clark, Sebastian Ruder, Clara E. Rivera, Tajuddeen R. Gowadabe, Tunde Oluwaseyi Ajayi, Chris Emezue, Claytone Sikasote, Happy Buzaaba, Ignatius Ezeani, Rooweither Mabuya, Salomey Osei, Abdou Aziz Diop, Abraham
\begin{table}
\begin{tabular}{c c|c|c c c c c c c c c c|c c c} \hline \multicolumn{3}{c|}{Translation} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{XOR-Full F1} & \multicolumn{3}{c|}{} & \multicolumn{3}{c}{Average} \\ Query & Answer & Retrieval & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{XOR-Full F1} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{Average} \\ \hline & & & **bem** & **fon** & **hau** & **ibo** & **kin** & **swa** & **twi** & **wol** & **vol** & **yor** & **zul** & **F1** & **EM** & **BLEU** \\ \hline GMT & GMT & BM25 & — & — & 20.4 & 30.4 & 24.2 & 18.1 & 14.9 & — & 16.1 & 19.7 & 20.5 & 12.1 & 18.3 \\ GMT & GMT & mDPR & — & — & **21.7** & **33.0** & **26.5** & **21.9** & 16.5 & 14.2 & **20.4** & **21.1** & **23.0** & **14.2** & **20.7** \\ NLLB & NLLB & BM25 & **13.6** & 2.6 & 17.5 & 26.5 & 19.9 & 19.2 & 18.4 & 3.2 & 12.7 & 12.5 & 14.6 & 7.5 & 12.9 \\ NLLB & NLLB & mDPR & 13.3 & 4.3 & 19.3 & 29.9 & 22.4 & 20.3 & **19.5** & **3.5** & 17.6 & 13.1 & 16.3 & 8.3 & 14.3 \\ \hline \end{tabular}
\end{table}
Table 9: XOR-Full F1 results combining different translation and retriever systems.
Toluwase Owodunni, Atnafu Lambebo Tonja, Ivanoulwa Shode, Bernard Opoku, Chiamaka Chukwuneke, Christine Mwasse, Clemencia Siro, Aremu Anouluwapo, Ayodele Awokoya, Oyinkansola Fiyinfoluwa AWOSAN, Steven Arthur, Verrah Otiende
**Output Data Preparation and Experimentation**: Ogundepo Odunayo, Abraham Toluwase Owodunni, Atnafu Lambebo Tonja, Iyanouluwa Shode, Abdou Aziz DIOP
**Annotator Training**: Jonathan H. Clark, Sebastian Ruder, Clara E. Rivera, Tajuddeen R. Gwadabe
**Task Framing**: Jonathan H. Clark, Sebastian Ruder, Clara E. Rivera, Tajuddeen R. Gwadabe, Akari Asai, Ogundepo Odunayo
**Project Coordination**: Clara E. Rivera, Tajuddeen R. Gwadabe, Ogundepo Odunayo
**Documentation**: Clara E. Rivera, Tajuddeen R. Gwadabe, Ogundepo Odunayo
**Input Data Preparation and Annotation Tool Management**: Bonaventure F. P. Dossou, Gilles Hacheme
**Quality Control**: Clara E. Rivera Tajuddeen R. Gwadabe, Ogundepo Odunayo, Jonathan H. Clark, Sebastian Ruder
|
2305.18471 | Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and
Relaxed Assumptions | We provide a simple convergence proof for AdaGrad optimizing non-convex
objectives under only affine noise variance and bounded smoothness assumptions.
The proof is essentially based on a novel auxiliary function $\xi$ that helps
eliminate the complexity of handling the correlation between the numerator and
denominator of AdaGrad's update. Leveraging simple proofs, we are able to
obtain tighter results than existing results \citep{faw2022power} and extend
the analysis to several new and important cases. Specifically, for the
over-parameterized regime, we show that AdaGrad needs only
$\mathcal{O}(\frac{1}{\varepsilon^2})$ iterations to ensure the gradient norm
smaller than $\varepsilon$, which matches the rate of SGD and significantly
tighter than existing rates $\mathcal{O}(\frac{1}{\varepsilon^4})$ for AdaGrad.
We then discard the bounded smoothness assumption and consider a realistic
assumption on smoothness called $(L_0,L_1)$-smooth condition, which allows
local smoothness to grow with the gradient norm. Again based on the auxiliary
function $\xi$, we prove that AdaGrad succeeds in converging under
$(L_0,L_1)$-smooth condition as long as the learning rate is lower than a
threshold. Interestingly, we further show that the requirement on learning rate
under the $(L_0,L_1)$-smooth condition is necessary via proof by contradiction,
in contrast with the case of uniform smoothness conditions where convergence is
guaranteed regardless of learning rate choices. Together, our analyses broaden
the understanding of AdaGrad and demonstrate the power of the new auxiliary
function in the investigations of AdaGrad. | Bohan Wang, Huishuai Zhang, Zhi-Ming Ma, Wei Chen | 2023-05-29T09:33:04Z | http://arxiv.org/abs/2305.18471v2 | # Convergence of AdaGrad for Non-convex Objectives:
###### Abstract
We provide a simple convergence proof for AdaGrad optimizing non-convex objectives under only affine noise variance and bounded smoothness assumptions. The proof is essentially based on a novel auxiliary function \(\xi\) that helps eliminate the complexity of handling the correlation between the numerator and denominator of AdaGrad's update. Leveraging simple proofs, we are able to obtain tighter results than existing results (Faw et al., 2022) and extend the analysis to several new and important cases. Specifically, for the over-parameterized regime, we show that AdaGrad needs only \(\mathcal{O}(\frac{1}{\varepsilon^{2}})\) iterations to ensure the gradient norm smaller than \(\varepsilon\), which matches the rate of SGD and significantly tighter than existing rates \(\mathcal{O}(\frac{1}{\varepsilon^{4}})\) for AdaGrad. We then discard the bounded smoothness assumption, and consider a realistic assumption on smoothness called \((L_{0},L_{1})\)-smooth condition, which allows local smoothness to grow with the gradient norm. Again based on the auxiliary function \(\xi\), we prove that AdaGrad succeeds in converging under \((L_{0},L_{1})\)-smooth condition as long as the learning rate is lower than a threshold. Interestingly, we further show that the requirement on learning rate under the \((L_{0},L_{1})\)-smooth condition is necessary via proof by contradiction, in contrast with the case of uniform smoothness conditions where convergence is guaranteed regardless of learning rate choices. Together, our analyses broaden the understanding of AdaGrad, and demonstrate the power of the new auxiliary function in the investigations of AdaGrad.
AdaGrad, Convergence Analysis
## 1 Introduction
Adaptive optimizers have been a great success in deep learning. Compared to stochastic gradient descent (SGD), adaptive optimizers use the gradient information of iterations to dynamically adjust the learning rate, which is observed to converge much faster than SGD in a wide range of deep learning tasks (Vaswani et al., 2017; Dosovitskiy et al., 2020; Yun et al., 2019). Such a superiority has attracted numerous researchers to analyze the behavior of adaptive optimizers.
AdaGrad (Duchi et al., 2011) is among the earliest adaptive optimizers and enjoys a favorable convergence rate for online convex optimization. Specifically, the design of AdaGrad is quite simple: it tracks the gradient magnitudes of the past iterations and uses their reciprocal to scale the current gradient. The pseudo-codes of the norm version of AdaGrad (i.e., AdaGrad-Norm) and AdaGrad are presented in Algorithm 1 and Algorithm 2, respectively.
Despite the popularity and the simplicity of AdaGrad, its theoretical analysis is not satisfactory when optimizing non-convex objectives. Specifically, only recently, Ward et al. (2020) analyze the convergence of AdaGrad-Norm and achieve an \(\mathcal{O}(\log T/\sqrt{T})\) rate. However, their result is based on the assumption that the stochastic gradient \(g_{t}\) is uniformly bounded across the iterations, which does not hold even for quadratic functions, let alone deep neural networks. In comparison, the analysis of SGD does not require such an assumption.
```
Input: Objective function \(f(\mathbf{w})\), learning rate \(\eta>0\), initial point \(\mathbf{w}_{1}\in\mathbb{R}^{d}\), initial conditioner \(\mathbf{\nu}_{1}\in\mathbb{R}^{+}\)
1:For\(t=1\to\infty\):
2: Generate stochastic gradient \(g_{t}\)
3: Calculate \(\mathbf{\nu}_{t}=\mathbf{\nu}_{t-1}+\left\|g_{t}\right\|^{2}\)
4: Update \(\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\frac{1}{\sqrt{\mathbf{\nu}_{t}}}g_{t}\)
5:EndFor
```
**Algorithm 1** AdaGrad-Norm
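For concreteness, a minimal NumPy sketch of Algorithm 1 on a toy stochastic least-squares problem is given below; the objective, noise model, and hyperparameters are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ rng.normal(size=5) + 0.1 * rng.normal(size=100)  # noisy linear targets

def stochastic_grad(w, batch=10):
    """Mini-batch gradient of f(w) = (1/2n) * ||A w - b||^2."""
    idx = rng.integers(0, A.shape[0], size=batch)
    return A[idx].T @ (A[idx] @ w - b[idx]) / batch

def adagrad_norm(eta=1.0, nu0=1e-4, steps=2000):
    w, nu = np.zeros(5), nu0
    for _ in range(steps):
        g = stochastic_grad(w)
        nu += np.dot(g, g)              # nu_t = nu_{t-1} + ||g_t||^2
        w -= eta / np.sqrt(nu) * g      # w_{t+1} = w_t - eta * g_t / sqrt(nu_t)
    return w

w_hat = adagrad_norm()
print("final full-batch gradient norm:",
      np.linalg.norm(A.T @ (A @ w_hat - b) / len(b)))
```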
A very recent exception (Faw et al., 2022) relaxes the assumptions and proves that AdaGrad-Norm converges by only assuming uniformly bounded smoothness (c.f. our Assumption 1) and affine noise variance (c.f. our Assumption 2), which matches the conditions of SGD. However, the proof in (Faw et al., 2022) is rather complicated (around 30 pages), which is hard to understand the intuition behind and to extend to the analysis of other cases. Moreover, the convergence rate in (Faw et al., 2022) does not get better when strong growth condition holds (i.e., our Assumption 2 with \(D_{0}=0\)) while SGD does. We believe such a gap is vital as strong growth condition holds in over-parameterized models (Vaswani et al., 2019), which are widely adopted in deep learning.
We know that the convergence analysis of SGD under the same set of assumptions is quite simple. **What makes the analysis of AdaGrad so complicated?** We can understand the difficulty from the classical descent lemma
\[\mathbb{E}[f(\mathbf{w}_{t+1})|\mathcal{F}_{t}]\leq f(\mathbf{w}_{t})+ \underbrace{\mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),\mathbf{w}_{t+1}-\bm {w}_{t}\right\rangle|\mathcal{F}_{t}\right]}_{\text{First Order}}+\underbrace{ \frac{L}{2}\mathbb{E}\left[\left\|\mathbf{w}_{t+1}-\mathbf{w}_{t}\right\|^{2}| \mathcal{F}_{t}\right]}_{\text{Second Order}}, \tag{1}\]
where \(\mathcal{F}_{t}:=\sigma(g_{1},\cdots,g_{t-1})\) denotes the sigma field of the stochastic gradients up to \(t-1\). Then
* for SGD, \(\mathbf{w}_{t+1}-\mathbf{w}_{t}=-\eta g_{t}\) and hence the "First Order" term is \(-\eta\|\nabla f(\mathbf{w}_{t})\|^{2}\), which is negative and able to decrease the objective sufficiently,
* for AdaGrad(-Norm), \(\mathbf{w}_{t+1}-\mathbf{w}_{t}=-\eta\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\). As \(\mathbf{\nu}_{t}\) correlates with \(g_{t}\), the "First Order" term does not admit a clear form.
To deal with the correlation in AdaGrad(-Norm), a common practice is to use a surrogate \(\tilde{\mathbf{\nu}}_{t}\) of \(\mathbf{\nu}_{t}\)(Ward et al., 2020; Defossez et al., 2020; Faw et al., 2022), which is measurable with respect to \(\mathcal{F}_{t}\), to decompose the "First Order" term as follows,
\[\mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),\mathbf{w}_{t+1}-\mathbf{w}_{t}\right\rangle|\mathcal{F}_{t}\right]=\mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),-\eta\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\rangle|\mathcal{F}_{t}\right]=\mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),-\eta\frac{g_{t}}{\sqrt{\mathbf{\tilde{\nu}}_{t}}}\right\rangle|\mathcal{F}_{t}\right]\] \[+\mathbb{E}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta g_{t}\left(\frac{1}{\sqrt{\mathbf{\tilde{\nu}}_{t}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)\right\rangle|\mathcal{F}_{t}\right].\]
The first term equals \(-\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\tilde{\nu}}_{t}}}\), which is negative and desired. However, the last term is an additional error term, which is very challenging to deal with. Existing results either assume a bounded stochastic gradient to work around it (Ward et al., 2020), or resolve it through complicated analysis (Faw et al., 2022) (c.f. Section 3).
In this paper, we propose a novel auxiliary function \(\xi(t)=\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\) for the convergence analysis of AdaGrad(-Norm), and show the error term can be bounded by \(\mathbb{E}^{|\mathcal{F}_{t}}[\xi(t-1)-\xi(t)]\) (c.f. Lemma 4), which can be reduced by telescoping. As explained in Section 3, such an auxiliary function is rooted in the non-increasing nature of the adaptive learning rate \(\frac{\eta}{\sqrt{\mathbf{\nu}_{t}}}\).
With the new and simplified proof, we are able to obtain stronger results for AdaGrad-Norm and extend the analysis to other important scenarios.
* Under strong growth condition (or the so-called over-parameterized regime), our convergence rate for AdaGrad-Norm is \(\mathcal{O}(\frac{1}{T})\), which matches that of SGD and stronger than existing results (Faw et al., 2022). This demonstrates that AdaGrad-Norm converges faster in the over-parameterized regime than in the under-parameterized regime.
* We extend the analysis to AdaGrad by utilizing a coordinate version \(\tilde{\xi}(t)=\sum_{l=1}^{d}\frac{\partial_{l}f(\mathbf{w}_{t})^{2}}{\sqrt{\mathbf{ \nu}_{t,l}}}\) of \(\xi(t)\) and obtain similar convergence. To the best of our knowledge, this is the first convergence result of AdaGrad without the requirement of bounded gradient norm. We also prove the convergence for randomly-reshuffled AdaGrad, which is the version of AdaGrad used in deep learning practice.
* We go beyond the uniform smoothness and consider a realistic non-uniformly smooth condition called \((L_{0},L_{1})\)-smooth condition (Assumption 6). We prove that AdaGrad(-Norm) still converges under \((L_{0},L_{1})\)-smooth condition, but requires the learning rate smaller than a threshold, whose necessity is conversely verified with a counterexample. Together, AdaGrad can converge under the non-uniform smoothness but may not be exactly tuning-free.
The rest of this paper is organized as follows. In Section 2, we define notations and introduce assumptions; in Section 3, we describe the motivation to use the auxiliary function; in Section 4, we derive the convergence result of AdaGrad-Norm under \(L\)-smooth condition; in Section 5, we extend the result to AdaGrad; in Section 6, we analyze the convergence of AdaGrad(-Norm) under \((L_{0},L_{1})\)-smooth condition; Section 7 presents the related works.
## 2 Preliminary
**Notations.** The following notations are used throughout this paper.
* (Vector operators) \(\odot\) stands for the Hadamard product between vectors, and \(g^{\odot 2}\triangleq g\odot g\). \(\langle\mathbf{w},\mathbf{v}\rangle\) stands for the \(L^{2}\) inner product between \(\mathbf{w}\) and \(\mathbf{v}\), and \(\|\mathbf{w}\|\triangleq\sqrt{\langle\mathbf{w},\mathbf{w}\rangle}\).
* (Stochastic operators) \(\mathcal{F}_{t}=\sigma(g_{t-1},\cdots,g_{1})\) stands for the sigma field of historical gradients up to time \(t-1\) and thus \(\{\mathbf{w}_{t}\}_{t=1}^{\infty}\) is an adapted random process with respect to \(\{\mathcal{F}_{t}\}_{t=1}^{\infty}\). For brevity, we abbreviate the expectation conditional on \(\mathcal{F}_{t}\) as \(\mathbb{E}^{|\mathcal{F}_{t}|}[*]\triangleq\mathbb{E}[*|\mathcal{F}_{t}]\).
**Assumptions.** Throughout this paper, we assume that \(f\) is lower bounded. We also need the following assumptions:
**Assumption 1** (\(L\)-smooth condition): _We assume that \(f\) is differentiable and its gradient satisfies that \(\forall\mathbf{w}_{1},\mathbf{w}_{2}\in\mathbb{R}^{d}\), we have \(\|\nabla f(\mathbf{w}_{1})-\nabla f(\mathbf{w}_{2})\|\leq L\|\mathbf{w}_{1}-\mathbf{w}_{2}\|\)._
**Assumption 2** (Affine noise variance): _We assume that there exist positive constants \(D_{0}\) and \(D_{1}\) such that \(\forall t\geq 1\), \(\mathbb{E}^{|\mathcal{F}_{t}|}[\|g_{t}\|^{2}]\leq D_{0}+D_{1}\|\nabla f(\mathbf{w}_ {t})\|^{2}\)._
To the best of our knowledge, the above two assumptions are the weakest requirements for the convergence of AdaGrad(-Norm) among the existing literature.
## 3 Motivation of the auxiliary function
As mentioned in the Introduction, the main obstacle in the analysis of AdaGrad(-Norm) is to bound the error term \(\mathbb{E}^{|\mathcal{F}_{t}}\langle\nabla f(\mathbf{w}_{t}),\eta g_{t}(\frac{1}{\sqrt{\tilde{\mathbf{\nu}}_{t}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}})\rangle\). Most of the existing works assume that \(\|g_{t}\|\) is uniformly bounded, and choose \(\tilde{\mathbf{\nu}}_{t}=\mathbf{\nu}_{t-1}\). In this case, the error term can be shown to be as small as the "Second Order" term in Eq. (1) and can be further bounded. If the bounded gradient assumption is removed, Faw et al. (2022) show that _most of the iterations are "good"_, in the sense that the error term is smaller than \(\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\), which hence does not affect the negativity of the "First Order" term in Eq. (1) after decomposition. However, it is complicated to deal with the "bad" iterations, which occupies most of the proof in (Faw et al., 2022).
Instead, to deal with the error term, we propose a simple auxiliary function \(\xi(t)\) that can be canceled out during telescoping. The choice of \(\xi(t)\) is motivated as follows. By choosing \(\tilde{\mathbf{\nu}}_{t}=\mathbf{\nu}_{t-1}\), we find that the error term can be rewritten as
\[\mathbb{E}^{|\mathcal{F}_{t}}\left\langle\nabla f(\mathbf{w}_{t}),\eta g_{t}\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)\right\rangle\leq \eta\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\nabla f(\mathbf{w}_{t})\|\|g_{t}\|\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)\right]= \eta\mathbb{E}^{|\mathcal{F}_{t}}\left[\left(\frac{\|\nabla f(\mathbf{w}_{t})\|\|g_{t}\|}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{\|\nabla f(\mathbf{w}_{t})\|\|g_{t}\|}{\sqrt{\mathbf{\nu}_{t}}}\right)\right], \tag{2}\]
where the inequality is due to the Cauchy-Schwarz inequality and \(\mathbf{\nu}_{t}\) is non-decreasing. Note that if we have both \(\|\nabla f(\mathbf{w}_{t})\|\approx\|\nabla f(\mathbf{w}_{t-1})\|\) and \(\|g_{t}\|\approx\|g_{t-1}\|\), the term (2) approximately equals \(\eta\mathbb{E}^{|\mathcal{F}_{t}}\left[\left(\frac{\|\nabla f(\mathbf{w}_{t-1})\|\|g_{t-1}\|}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{\|\nabla f(\mathbf{w}_{t})\|\|g_{t}\|}{\sqrt{\mathbf{\nu}_{t}}}\right)\right]\). In this case, we can use \(\hat{\xi}(t)=\frac{\|\nabla f(\mathbf{w}_{t})\|\|g_{t}\|}{\sqrt{\mathbf{\nu}_{t}}}\) as an auxiliary function, and the sum of the expected error term satisfies
\[\sum_{t=1}^{T}\mathbb{E}\left\langle\nabla f(\mathbf{w}_{t}),\eta g_{t}\left( \frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)\right\rangle \lesssim\sum_{t=1}^{T}\mathbb{E}\left[\hat{\xi}(t-1)-\hat{\xi}(t)\right]=\hat {\xi}(0)-\mathbb{E}[\hat{\xi}(T)].\]
The RHS of the above inequality is bounded regardless of \(T\). This is the motivation to use the auxiliary function. However, we do not have \(\|g_{t}\|\approx\|g_{t-1}\|\) but only have \(\|\nabla f(\mathbf{w}_{t})\|\approx\|\nabla f(\mathbf{w}_{t-1})\|\) (due to bounded smoothness, i.e., Assumption 1). To resolve this challenge, we convert \(\|g_{t}\|\) to \(\|\nabla f(\mathbf{w}_{t})\|\) by Assumption 2 in the above inequality, and use \(\xi(t)\triangleq\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\) instead of \(\frac{\|\nabla f(\mathbf{w}_{t})\|\|g_{t}\|}{\sqrt{\mathbf{\nu}_{t}}}\) as the auxiliary function. A formal statement of the above methodology can be seen in Lemma 4.
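To see the telescoping mechanism at work numerically, the sketch below tracks \(\xi(t)=\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\) along an AdaGrad-Norm run on a toy problem and checks that \(\sum_{t=1}^{T}(\xi(t-1)-\xi(t))=\xi(0)-\xi(T)\leq\xi(0)\) holds regardless of the horizon; the objective, noise, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
f_grad = lambda w: w                               # illustrative objective f(w) = 0.5 ||w||^2
noisy_grad = lambda w: w + 0.5 * rng.standard_normal(w.shape)

w = np.array([10.0, -4.0])
eta, nu = 0.5, 1e-2
xi_prev = np.dot(f_grad(w), f_grad(w)) / np.sqrt(nu)   # xi(0)
xi0, telescoped = xi_prev, 0.0

for t in range(1, 5001):
    g = noisy_grad(w)
    nu += np.dot(g, g)
    w -= eta * g / np.sqrt(nu)
    xi = np.dot(f_grad(w), f_grad(w)) / np.sqrt(nu)     # xi(t)
    telescoped += xi_prev - xi                          # accumulate xi(t-1) - xi(t)
    xi_prev = xi

# The telescoped sum collapses to xi(0) - xi(T), hence it is bounded by xi(0).
print(f"sum of differences = {telescoped:.4f}, xi(0) - xi(T) = {xi0 - xi_prev:.4f}")
assert telescoped <= xi0 + 1e-9
```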
**Remark 1**: _Note that the above methodology is mainly based on the fact that the adaptive learning rate is non-increasing. Therefore, we believe that a similar approach can be applied to the analysis of other adaptive optimizers with non-increasing adaptive learning rates, such as AMSGrad._
## 4 A refined convergence analysis of AdaGrad-Norm
In this section, we present our refined analysis of AdaGrad-Norm based on the auxiliary function \(\xi(t)\). The refined convergence rate is given by the following theorem.
**Theorem 2**: _Let Assumptions 1 and 2 hold. Then, for AdaGrad-Norm with any learning rate \(\eta>0\), we have that with probability at least \(1-\delta\),_
\[\min_{t\in[T]}\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{2\sqrt{2D_{0}}(2C_{2}\ln(2 \sqrt{2D_{0}T}+C_{3})+C_{1})}{\sqrt{T}\delta^{2}}+\frac{C_{3}(C_{1}+2C_{2}\ln( 2\sqrt{2D_{0}T}+C_{3}))}{T\delta^{2}},\]
_where \(C_{1}\), \(C_{2}\), and \(C_{3}\) are constants defined as \(C_{1}:=4(f(\mathbf{w}_{1})-f^{*}+\frac{\eta D_{1}}{2}\frac{\|\nabla f(\mathbf{w}_{0}) \|^{2}}{\sqrt{\mathbf{\nu}_{0}}}+(2\eta(L\eta D_{1})^{2}+\eta D_{1}(L\eta)^{2}+ \frac{\eta}{2}D_{0})\frac{1}{\sqrt{\mathbf{\nu}_{0}}}-\frac{L}{2}\eta^{2}\ln\mathbf{ \nu}_{0})/\eta\), \(C_{2}:=2L\eta\), and \(C_{3}:=4D_{1}C_{1}+48C_{2}D_{1}\ln(4C_{2}D_{1}+e)+2\sqrt{\mathbf{\nu}_{0}}\)._
**Remark 3**: _Faw et al. (2022) also prove that AdaGrad-Norm converges under Assumptions 1 and 2. Their rate is \(\frac{1}{T}\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|^{2}=\mathcal{O}(\frac{\ln T}{\sqrt{T}})\). Compared to their result, our result has a tighter dependence over \(T\). Moreover, when restricted to the strong growth condition, i.e., \(D_{0}=0\), our result gives a rate \(\mathcal{O}(\frac{1}{T})\), much faster than that in (Faw et al., 2022) and matching that of SGD. Such an improvement matters because the strong growth condition characterizes the landscapes of over-parameterized models (Vaswani et al., 2019). Theorem 2 also shows that AdaGrad-Norm enjoys the tuning-free ability under the \(L\)-smooth condition, i.e., it converges without tuning the learning rate._
**Proof of Theorem 2** The proof starts with the so-called expected descent lemma:
\[\mathbb{E}^{|\mathcal{F}_{t}}f(\mathbf{w}_{t+1})\leq f(\mathbf{w}_{t})+\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle \nabla f(\mathbf{w}_{t}),\mathbf{w}_{t+1}-\mathbf{w}_{t}\right\rangle+\frac{L}{2}\|\mathbf{w}_ {t+1}-\mathbf{w}_{t}\|^{2}\right]\] \[= f(\mathbf{w}_{t})+\underbrace{\mathbb{E}^{|\mathcal{F}_{t}}\left\langle \nabla f(\mathbf{w}_{t}),-\eta\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\rangle}_{ \text{First Order}}+\underbrace{\frac{L}{2}\eta^{2}\mathbb{E}^{|\mathcal{F}_{t} }\left\|\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\|^{2}}_{\text{Second Order}}. \tag{3}\]
As discussed in Section 1, the "First Order" term does not have a simple form due to the correlation between \(g_{t}\) and \(\mathbf{\nu}_{t}\). We follow the standard approach in existing literature to approximate \(\mathbf{\nu}_{t}\) with the surrogate \(\mathbf{\nu}_{t-1}\), which is measurable with respect to \(\mathcal{F}_{t}\). The first-order term can then be decomposed into
\[\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t }),-\eta\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\rangle\right]= \mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t }),-\eta\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t-1}}}\right\rangle\right]+\mathbb{E}^{| \mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta(\frac{1}{\sqrt{ \mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}})g_{t}\right\rangle\right]\] \[= -\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+ \mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta\left( \frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)g_{t} \right\rangle\right]. \tag{4}\]
The last term is an error term, coming from the gap between \(\mathbf{\nu}_{t-1}\) and \(\mathbf{\nu}_{t}\). Plugging Eq. (4) back into Eq. (3), we obtain
\[\mathbb{E}^{|\mathcal{F}_{t}}f(\mathbf{w}_{t+1})\leq f(\mathbf{w}_{t})+\underbrace{ -\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}}_{\text{First Order Main}}+\underbrace{\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t }),\eta\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}} \right)g_{t}\right\rangle\right]}_{\text{Error}}+\underbrace{\frac{L}{2}\eta^{2 }\mathbb{E}^{|\mathcal{F}_{t}}\left\|\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\|^ {2}}_{\text{Second Order}}.\]
The rest of the proof can be divided into two stages: in Stage I, we bound the "Error" term through the auxiliary function \(\xi(t)\triangleq\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\) and derive a bound on \(\sum_{t=1}^{T}\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\). In Stage II, we convert the bound on \(\sum_{t=1}^{T}\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\) into a bound on \(\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|^{2}\).
**Stage I: Bounding the "Error" term.** The following lemma summarizes the intuition in Section 3.
**Lemma 4**: _Define an auxiliary function \(\xi(t)\triangleq\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\), \(t\geq 1\). Then, the "Error" term can be bounded as_
\[\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t }),\eta\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}} \right)g_{t}\right\rangle\right]\leq\frac{3}{4}\eta\frac{\|\nabla f(\mathbf{w}_{t })\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\frac{1}{2}\frac{\eta}{\sqrt{\mathbf{\nu}_{t-1}}} D_{0}\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+ \sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]\] \[+\frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\left[\xi(t-1)- \xi(t)\right]+\left(\eta(L\eta D_{1})^{2}+\frac{\eta}{2}D_{1}(L\eta)^{2} \right)\frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{3}}.\]
* By a simple calculation, we have \[\left|\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)g_{t}\right\rangle\right]\right|=\left|\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}\sqrt{\mathbf{\nu}_{t}}(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})}g_{t}\right\rangle\right]\right|\] \[\leq \eta\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\nabla f(\mathbf{w}_{t})\|\frac{\|g_{t}\|^{3}}{\sqrt{\mathbf{\nu}_{t-1}}\sqrt{\mathbf{\nu}_{t}}(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})}\right]\leq\eta\frac{\|\nabla f(\mathbf{w}_{t})\|}{\sqrt{\mathbf{\nu}_{t-1}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}}}\right],\] where the first inequality is due to the Cauchy-Schwarz inequality, and the second inequality is because \(\mathbf{\nu}_{t}\geq\|g_{t}\|^{2}\). By the mean-value inequality (\(2ab\leq a^{2}+b^{2}\)), \[\eta\frac{\|\nabla f(\mathbf{w}_{t})\|}{\sqrt{\mathbf{\nu}_{t-1}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}}}\right]\leq\frac{1}{2}\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\frac{1}{2}\frac{\eta}{\sqrt{\mathbf{\nu}_{t-1}}}\left(\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}}}\right]\right)^{2}.\] (5) We focus on the last quantity \[\left(\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}}}\right]\right)^{2}.\] By further applying Holder's inequality, \[\left(\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}}}\right]\right)^{2}\leq\mathbb{E}^{|\mathcal{F}_{t}}\|g_{t}\|^{2}\cdot\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]\leq(D_{0}+D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2})\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right],\] where in the last inequality we use Assumption 2. Plugging the above inequality back into Eq. (5), the "Error" term can be bounded as \[\left|\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)g_{t}\right\rangle\right]\right|\leq\frac{1}{2}\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\frac{1}{2}\frac{\eta}{\sqrt{\mathbf{\nu}_{t-1}}}D_{0}\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]+\frac{1}{2}\frac{\eta}{\sqrt{\mathbf{\nu}_{t-1}}}D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2}\,\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right].\] (6) In the RHS of the above inequality, the first term is half of the magnitude of the "First Order Main" term, and the second term is \(\frac{D_{0}}{L\eta\sqrt{\mathbf{\nu}_{t-1}}}\) times the "Second Order" term, and thus is of the same order as the "Second Order" term because \(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}\) is upper bounded by \(\frac{1}{\sqrt{\mathbf{\nu}_{0}}}\). We focus on the last term and utilize the observation that
\[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t- 1}})^{2}}\leq\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}\sqrt{\mathbf{\nu}_{t}}(\sqrt {\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})}=\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1 }{\sqrt{\mathbf{\nu}_{t}}}.\]
Thus, the last term can be bounded as
\[\frac{1}{2}\frac{\eta D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}} }\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+ \sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]\leq\frac{1}{2}\eta D_{1}\|\nabla f(\mathbf{w}_ {t})\|^{2}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}- \frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right),\]
which can be further decomposed into
\[\frac{1}{2}\eta D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2}\mathbb{E}^{| \mathcal{F}_{t}}\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_ {t}}}\right)\] \[= \frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{\| \nabla f(\mathbf{w}_{t-1})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{\|\nabla f(\mathbf{w}_ {t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\right)+\frac{\eta}{2}D_{1}\frac{\|\nabla f(\bm {w}_{t})\|^{2}-\|\nabla f(\mathbf{w}_{t-1})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}.\]
By Assumption 1, \(\|\nabla f(\mathbf{w}_{t})\|-\|\nabla f(\mathbf{w}_{t-1})\|\leq\|\nabla f(\mathbf{w}_{t})- \nabla f(\mathbf{w}_{t-1})\|\leq L\|\mathbf{w}_{t}-\mathbf{w}_{t-1}\|\). Therefore,
\[\frac{1}{2}\eta D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)\] \[\leq \frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{\|\nabla f(\mathbf{w}_{t-1})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\right)+\frac{\eta}{2}D_{1}\frac{2L\|\mathbf{w}_{t}-\mathbf{w}_{t-1}\|\|\nabla f(\mathbf{w}_{t})\|+L^{2}\|\mathbf{w}_{t}-\mathbf{w}_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\] \[= \frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}(\xi(t-1)-\xi(t))+\frac{\eta}{2}D_{1}\frac{2L\eta\frac{\|g_{t-1}\|\|\nabla f(\mathbf{w}_{t})\|}{\sqrt{\mathbf{\nu}_{t-1}}}+\left(L\eta\frac{\|g_{t-1}\|}{\sqrt{\mathbf{\nu}_{t-1}}}\right)^{2}}{\sqrt{\mathbf{\nu}_{t-1}}},\]
where in the last step we use \(\xi(t)\triangleq\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\) and \(\|\mathbf{w}_{t}-\mathbf{w}_{t-1}\|=\eta\frac{\|g_{t-1}\|}{\sqrt{\mathbf{\nu}_{t-1}}}\). Applying again the mean-value inequality, we obtain
\[\frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\|\nabla f(\mathbf{w} _{t})\|^{2}\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)\] \[\leq \frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\left(\xi(t-1)- \xi(t)\right)+\frac{1}{4}\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{ \nu}_{t-1}}}+\eta(L\eta D_{1})^{2}\frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}} ^{3}}+\frac{\eta}{2}D_{1}(L\eta)^{2}\frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1 }}^{3}}. \tag{7}\]
Applying the above inequality back into Eq. (6), the "Error" term can be bounded as
\[\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\nabla f(\mathbf{w}_{t}),\eta\left(\frac{1}{\sqrt{\mathbf{\nu}_{t-1}}}-\frac{1}{\sqrt{\mathbf{\nu}_{t}}}\right)g_{t}\right\rangle\right]\leq\frac{1}{2}\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\frac{1}{2}\frac{\eta}{\sqrt{\mathbf{\nu}_{t-1}}}D_{0}\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]\] \[+\frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\left(\xi(t-1)-\xi(t)\right)+\frac{1}{4}\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\eta(L\eta D_{1})^{2}\frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{3}}+\frac{\eta}{2}D_{1}(L\eta)^{2}\frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{3}}.\]
Rearranging the RHS of the above inequality leads to the claim.
Applying Lemma 4 back to the descent lemma, we then have
\[\mathbb{E}^{|\mathcal{F}_{t}}[f(\mathbf{w}_{t+1})]\leq f(\mathbf{w}_{t})-\eta \frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\frac{3}{4}\eta\frac{ \|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}+\frac{1}{2}\frac{\eta}{ \sqrt{\mathbf{\nu}_{t-1}}}D_{0}\mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^ {2}}{(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]\] \[+\frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F}_{t}}\left(\xi(t-1)- \xi(t)\right)+\left(\eta(L\eta D_{1})^{2}+\frac{\eta}{2}D_{1}(L\eta)^{2}\right) \frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{4}}+\frac{L}{2}\eta^{2}\mathbb{ E}^{|\mathcal{F}_{t}}\left\|\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\|^{2}\] \[= f(\mathbf{w}_{t})-\frac{1}{4}\eta\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}} {\sqrt{\mathbf{\nu}_{t-1}}}+\frac{1}{2}\frac{\eta}{\sqrt{\mathbf{\nu}_{t-1}}}D_{0} \mathbb{E}^{|\mathcal{F}_{t}}\left[\frac{\|g_{t}\|^{2}}{(\sqrt{\mathbf{\nu}_{t}}+ \sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]+\frac{\eta}{2}D_{1}\mathbb{E}^{|\mathcal{F} _{t}}\left(\xi(t-1)-\xi(t)\right)\] \[+\left(\eta(L\eta D_{1})^{2}+\frac{\eta}{2}D_{1}(L\eta)^{2}\right) \frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{4}}+\frac{L}{2}\eta^{2}\mathbb{ E}^{|\mathcal{F}_{t}}\left\|\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\|^{2}. \tag{8}\]
Taking the total expectation of the above inequality then leads to
\[\mathbb{E}[f(\mathbf{w}_{t+1})]\leq \mathbb{E}[f(\mathbf{w}_{t})]-\frac{1}{4}\eta\mathbb{E}\left[\frac{\| \nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\right]+\frac{\eta}{2}D_{1} \mathbb{E}\left(\xi(t-1)-\xi(t)\right)\] \[+ \frac{\eta D_{0}}{2}\mathbb{E}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\bm {\nu}_{t-1}}(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]+\frac{L} {2}\eta^{2}\mathbb{E}\left\|\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\|^{2}+( \eta(L\eta D_{1})^{2}+\frac{\eta}{2}D_{1}(L\eta)^{2})\mathbb{E}\frac{\|g_{t-1 }\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{3}}.\]
The sum over \(t\) from \(1\) to \(T\) of the last three terms above can be bounded by
\[\frac{\eta D_{0}}{2}\sum_{t=1}^{T}\mathbb{E}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}(\sqrt{\mathbf{\nu}_{t}}+\sqrt{\mathbf{\nu}_{t-1}})^{2}}\right]+\sum_{t=1}^{T}\frac{L}{2}\eta^{2}\mathbb{E}\left\|\frac{g_{t}}{\sqrt{\mathbf{\nu}_{t}}}\right\|^{2}+\sum_{t=1}^{T}\left(\eta(L\eta D_{1})^{2}+\frac{\eta}{2}D_{1}(L\eta)^{2}\right)\mathbb{E}\frac{\|g_{t-1}\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}^{3}}\] \[\leq \left(2\eta(L\eta D_{1})^{2}+\eta D_{1}(L\eta)^{2}+\frac{\eta}{2}D_{0}\right)\frac{1}{\sqrt{\mathbf{\nu}_{0}}}+\frac{L}{2}\eta^{2}(\mathbb{E}\ln\mathbf{\nu}_{T}-\ln\mathbf{\nu}_{0}),\]
where the inequality is due to the following fact: if \(\{a_{i}\}_{i=0}^{\infty}\) is a sequence of non-negative real numbers with \(a_{0}>0\), then \(\sum_{t=1}^{T}\frac{a_{t}}{\sqrt{(\sum_{s=0}^{t}a_{s})^{3}}}\leq 2\frac{1}{\sqrt{a_{0}}}\), \(\sum_{t=1}^{T}\frac{a_{t}}{\sum_{s=0}^{t}a_{s}}\leq\ln\sum_{t=0}^{T}a_{t}-\ln a_{0}\), and \(\sum_{t=1}^{T}\frac{a_{t}}{\sqrt{\sum_{s=0}^{t}a_{s}}(\sqrt{\sum_{s=0}^{t-1}a_{s}}+\sqrt{\sum_{s=0}^{t}a_{s}})^{2}}\leq\frac{1}{\sqrt{a_{0}}}\). Therefore, summing Eq. (8) over \(t\) from \(1\) to \(T\) leads to
\[\frac{1}{4}\eta\sum_{t=1}^{T}\mathbb{E}\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\leq f(\mathbf{w}_{1})-\mathbb{E}[f(\mathbf{w}_{T+1})]+\frac{\eta D_{1}}{2}\mathbb{E}[\xi(0)-\xi(T)]\] \[+\left(2\eta(L\eta D_{1})^{2}+\eta D_{1}(L\eta)^{2}+\frac{\eta}{2}D_{0}\right)\frac{1}{\sqrt{\mathbf{\nu}_{0}}}+\frac{L}{2}\eta^{2}(\mathbb{E}\ln\mathbf{\nu}_{T}-\ln\mathbf{\nu}_{0}). \tag{9}\]
Applying the definition of \(C_{1}\) and \(C_{2}\), we have \(\sum_{t=1}^{T}\mathbb{E}\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}} \leq C_{1}+C_{2}\mathbb{E}\ln\mathbf{\nu}_{T}\). In Stage II, we translate such an inequality to the bound of \(\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|^{2}\).
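The three elementary series facts invoked in the bound above are easy to sanity-check numerically; the sketch below verifies them on randomly drawn non-negative sequences (the sequences and tolerances are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(2)
for trial in range(100):
    a = rng.exponential(scale=2.0, size=201)
    a[0] += 1e-3                      # ensure a_0 > 0
    prefix = np.cumsum(a)             # prefix[t] = sum_{s=0}^{t} a_s

    # sum_t a_t / (sum_{s<=t} a_s)^{3/2} <= 2 / sqrt(a_0)
    lhs1 = np.sum(a[1:] / prefix[1:] ** 1.5)
    assert lhs1 <= 2.0 / np.sqrt(a[0]) + 1e-9

    # sum_t a_t / sum_{s<=t} a_s <= ln(sum_s a_s) - ln(a_0)
    lhs2 = np.sum(a[1:] / prefix[1:])
    assert lhs2 <= np.log(prefix[-1]) - np.log(a[0]) + 1e-9

    # sum_t a_t / (sqrt(P_t) (sqrt(P_{t-1}) + sqrt(P_t))^2) <= 1 / sqrt(a_0)
    lhs3 = np.sum(a[1:] / (np.sqrt(prefix[1:]) * (np.sqrt(prefix[:-1]) + np.sqrt(prefix[1:])) ** 2))
    assert lhs3 <= 1.0 / np.sqrt(a[0]) + 1e-9

print("all three series bounds hold on 100 random sequences")
```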
**Stage II: Bound \(\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|^{2}\).** We bound \(\mathbb{E}[\sqrt{\mathbf{\nu}_{T}}]\) by divide-and-conquer. We first consider the iterations satisfying \(\|\nabla f(\mathbf{w}_{t})\|^{2}>\frac{D_{0}}{D_{1}}\):
\[C_{1}+C_{2}\mathbb{E}\ln\mathbf{\nu}_{T}\geq \sum_{t=1}^{T}\mathbb{E}\left[\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{ \sqrt{\mathbf{\nu}_{t-1}}}\right]\geq\sum_{t=1}^{T}\mathbb{E}\left[\frac{\|\nabla f( \mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2} >\frac{D_{0}}{D_{1}}}\right]\] \[\geq \frac{1}{2D_{1}}\sum_{t=1}^{T}\mathbb{E}\left[\frac{\|g_{t}\|^{2} }{\sqrt{\mathbf{\nu}_{t-1}}}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}>\frac{D_{0}}{D_ {1}}}\right]\geq\frac{1}{2D_{1}}\mathbb{E}\left[\frac{\sum_{t=1}^{T}\|g_{t}\|^ {2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}>\frac{D_{0}}{D_{1}}}}{\sqrt{\mathbf{\nu}_{ T}}}\right], \tag{10}\]
where in the third inequality, we use the following fact,
\[2D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}>\frac{ D_{0}}{D_{1}}}\geq(D_{0}+D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2})\mathds{1}_{\|\nabla f (\mathbf{w}_{t})\|^{2}>\frac{D_{0}}{D_{1}}}\geq\mathbb{E}^{|\mathcal{F}_{t}}\|g_{t} \|^{2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}>\frac{D_{0}}{D_{1}}}.\]
We then consider the iterations satisfying \(\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}\),
\[\frac{1}{2D_{1}}\sum_{t=1}^{T}\mathbb{E}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{T}}}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}\right]+\frac{1}{2D_{1}}\mathbb{E}\frac{\mathbf{\nu}_{0}}{\sqrt{\mathbf{\nu}_{T}}}\leq\frac{1}{2D_{1}}\mathbb{E}\left[\frac{\sum_{t=1}^{T}\|g_{t}\|^{2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}+\mathbf{\nu}_{0}}{\sqrt{\sum_{t=1}^{T}\|g_{t}\|^{2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}+\mathbf{\nu}_{0}}}\right]\] \[= \frac{1}{2D_{1}}\mathbb{E}\sqrt{\sum_{t=1}^{T}\|g_{t}\|^{2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}+\mathbf{\nu}_{0}}\leq\frac{1}{2D_{1}}\sqrt{\mathbb{E}\left[\sum_{t=1}^{T}\|g_{t}\|^{2}\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}\right]+\mathbf{\nu}_{0}}\] \[\leq \frac{1}{2D_{1}}\sqrt{\mathbb{E}\left[\sum_{t=1}^{T}(D_{1}\|\nabla f(\mathbf{w}_{t})\|^{2}+D_{0})\mathds{1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}\right]+\mathbf{\nu}_{0}}\leq\frac{1}{2D_{1}}\sqrt{2D_{0}T+\mathbf{\nu}_{0}}. \tag{11}\]
Here in the second inequality we use Jensen's inequality, and in the third we use Assumption 2. Putting Eq. (10) and Eq. (11) together, we then have
\[\frac{1}{2D_{1}}\mathbb{E}[\sqrt{\mathbf{\nu}_{T}}]=\frac{1}{2D_{1}} \mathbb{E}\left[\frac{\sum_{t=1}^{T}\|g_{t}\|^{2}\mathds{1}_{\|\nabla f(\mathbf{ w}_{t})\|^{2}>\frac{D_{0}}{D_{1}}}}{\sqrt{\mathbf{\nu}_{T}}}\right]+\frac{1}{2D_{1}} \sum_{t=1}^{T}\mathbb{E}\left[\frac{\|g_{t}\|^{2}}{\sqrt{\mathbf{\nu}_{T}}}\mathds {1}_{\|\nabla f(\mathbf{w}_{t})\|^{2}\leq\frac{D_{0}}{D_{1}}}\right]+\frac{1}{2D_ {1}}\mathbb{E}\frac{\mathbf{\nu}_{0}}{\sqrt{\mathbf{\nu}_{T}}}\] \[\leq \frac{1}{2D_{1}}\sqrt{2D_{0}T+\mathbf{\nu}_{0}}+C_{1}+C_{2}\mathbb{E }\ln\mathbf{\nu}_{T}\leq\frac{1}{2D_{1}}\sqrt{2D_{0}T+\mathbf{\nu}_{0}}+C_{1}+2C_{2} \ln\mathbb{E}\sqrt{\mathbf{\nu}_{T}}.\]
Here in the last inequality we use Jensen's inequality. Solving the above inequality with respect to \(\mathbb{E}[\sqrt{\mathbf{\nu}_{T}}]\), we have \(\mathbb{E}[\sqrt{\mathbf{\nu}_{T}}]\leq 2\sqrt{2D_{0}T+\mathbf{\nu}_{0}}+4D_{1}C_{1}+48C_{2 }D_{1}\ln(4C_{2}D_{1}+e)\). As
\[C_{1}+2C_{2}\ln\mathbb{E}\sqrt{\mathbf{\nu}_{T}}\geq\sum_{t=1}^{T} \mathbb{E}\left[\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}} \right]\geq\mathbb{E}\left[\frac{\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|^{2}}{ \sqrt{\mathbf{\nu}_{T}}}\right]\geq\frac{\mathbb{E}\left[\sqrt{\sum_{t=1}^{T}\| \nabla f(\mathbf{w}_{t})\|^{2}}\right]^{2}}{\mathbb{E}[\sqrt{\mathbf{\nu}_{T}}]},\]
where the last inequality is due to Holder's inequality. Applying the estimation of \(\mathbb{E}\sqrt{\mathbf{\nu}_{T}}\), we obtain that \(\mathbb{E}\left[\sqrt{\sum_{t=1}^{T}\|\nabla f(\mathbf{w}_{t})\|^{2}}\right]^{2} \leq(2\sqrt{2D_{0}T+\mathbf{\nu}_{0}}+C_{3})(C_{1}+2C_{2}\ln(2\sqrt{2D_{0}T+\mathbf{ \nu}_{0}}+C_{3}))\).
By further applying Markov's inequality, we conclude the proof.
**Remark 5**: _The proof in (Faw et al., 2022) can also be divided into two stages with similar goals. Our proof is simpler in both stages, and we discuss the reason here. As pointed out in Section 3, our proof in Stage I is simpler due to the novel auxiliary function \(\xi\). Moreover, our conclusion in Stage I is also stronger, which lays a better foundation for Stage II: Faw et al. (2022) can only derive a bound on \(\mathbb{E}\sum_{t\in\tilde{S}}\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t}}}\), where \(\tilde{S}\) is a subset of \([T]\). This raises additional challenges for Stage II in (Faw et al., 2022), as our divide-and-conquer technique can no longer be applied. Faw et al. (2022) resolve this through a recursively-improving technique, which not only requires a complicated proof, but also entangles \(D_{0}\) and \(D_{1}\) and leads to a sub-optimal rate under the strong growth condition._
## 5 Extending the analysis to AdaGrad
In this section, we extend the convergence analysis of AdaGrad-Norm to AdaGrad. Such a result is attractive since AdaGrad is more commonly used in practice than AdaGrad-Norm. A natural hope is to prove the convergence of AdaGrad under the same set of assumptions as in Theorem 2. However, a challenge arises when we try to derive an AdaGrad version of Lemma 4. Concretely, the "First Order Main" term becomes \(\eta\sum_{l=1}^{d}\frac{\partial_{l}f(\mathbf{w}_{t})^{2}}{\sqrt{\mathbf{\nu}_{t-1,l}}}\) (we use \(\mathbf{\nu}_{t,l}\) to denote the \(l\)-th coordinate of \(\mathbf{\nu}_{t}\), and similarly for \(g_{t,l}\)), while the bound on the "Error" term includes a term \(\frac{\eta}{4}\sum_{l=1}^{d}\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1,l}}}\) and thus cannot be controlled. Such a mismatch is due to the fact that \(\mathbb{E}[g_{t,i}^{2}]\) can only be bounded by the full gradient \(\|\nabla f(\mathbf{w}_{t})\|\) instead of the partial derivative of the corresponding coordinate \(|\partial_{i}f(\mathbf{w}_{t})|\) (see Appendix C.2 for details). Therefore, to derive the convergence of AdaGrad, we strengthen Assumption 2 so that the affine noise variance holds coordinate-wisely.
**Assumption 3** (Coordinate-wise affine noise variance assumption): _We assume that there exist positive constants \(D_{0}\) and \(D_{1}\) such that \(\forall t\geq 1\) and \(\forall i\in[d]\), \(\mathbb{E}[|g_{t,i}|^{2}|\mathcal{F}_{t}]\leq D_{0}+D_{1}\partial_{i}f(\mathbf{w}_ {t})^{2}\)._
Note that Assumption 3 is _still more general than most of the assumptions in existing works_. As an example, the bounded noise variance assumption is a special case of it. Next, we obtain the convergence result for AdaGrad as follows.
**Theorem 6** (AdaGrad): _Let Assumptions 1 and 3 hold. Then, for AdaGrad with any learning rate \(\eta>0\), we have that with probability at least \(1-\delta\), \(\min_{t\in[T]}\|\nabla f(\mathbf{w}_{t})\|^{2}=\mathcal{O}(\frac{1+\ln(1+\sqrt{D_{ 0}T})}{T\delta^{2}})+\mathcal{O}(\frac{\sqrt{D_{0}}(1+\ln(1+\sqrt{D_{0}T}))}{ \sqrt{T}\delta^{2}})\)._
The proof is a coordinate-wise version of the proof of Theorem 2 with some modifications, where we leverage a coordinate-wise version of \(\xi(t)\), i.e., \(\tilde{\xi}(t)=\sum_{l=1}^{d}\frac{\partial_{l}f(\mathbf{w}_{t})^{2}}{\sqrt{\mathbf{\nu}_{t,l}}}\) (please refer to Appendix C.1 for details).
We still seek to relax Assumption 3 back to Assumption 2. This is because Assumption 3 may preclude some basic objectives. We demonstrate this idea through the following example.
**Example 1**: _Consider the following linear regression problem: \(f(\mathbf{w})=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}(\langle\mathbf{w},\mathbf{x}\rangle)^{2}=\|\mathbf{w}\|^{2}\), where \(\mathcal{D}\) is a standard Gaussian distribution over \(\mathbb{R}^{d}\) with the absolute value of each coordinate truncated by \(1\). At point \(\mathbf{w}\), define the stochastic gradient \(g(\mathbf{w})\) as \(2\mathbf{x}\mathbf{x}^{\top}\mathbf{w}\), where \(\mathbf{x}\) is sampled according to \(\mathcal{D}\). One can easily verify that \(g(\mathbf{w})\) is an unbiased estimate of \(\nabla f(\mathbf{w})\). For this example, \(\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\|g(\mathbf{w})\|^{2}=\Theta(\|\mathbf{w}\|^{2})\) and \(\|\nabla f(\mathbf{w})\|^{2}=\Theta(\|\mathbf{w}\|^{2})\). Therefore, Assumption 2 holds with \(D_{0}=0\). However, since \(\partial_{1}f(\mathbf{w})^{2}=4(\mathbf{w})_{1}^{2}\) and \(\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}(g(\mathbf{w}))_{1}^{2}=\Theta(\|\mathbf{w}\|^{2})\), we see that, with \((\mathbf{w})_{1}\) held fixed, \(\lim_{|(\mathbf{w})_{2}|\rightarrow\infty}\frac{\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}(g(\mathbf{w}))_{1}^{2}}{\partial_{1}f(\mathbf{w})^{2}}=\infty\), which violates Assumption 3._
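The behavior described in Example 1 can be checked by Monte Carlo simulation: holding \((\mathbf{w})_{1}\) fixed and increasing \((\mathbf{w})_{2}\), the coordinate-wise ratio \(\mathbb{E}[(g(\mathbf{w}))_{1}^{2}]/\partial_{1}f(\mathbf{w})^{2}\) grows without bound while the full-norm ratio \(\mathbb{E}\|g(\mathbf{w})\|^{2}/\|\nabla f(\mathbf{w})\|^{2}\) stays bounded. The sketch below implements truncation by clipping and uses an illustrative sample size; both are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_samples = 2, 200_000

def sample_x():
    # Standard Gaussian with each coordinate's absolute value truncated (here: clipped) to 1.
    x = rng.standard_normal((n_samples, d))
    return np.clip(x, -1.0, 1.0)

for w2 in [1.0, 10.0, 100.0]:
    w = np.array([1.0, w2])                      # first coordinate held fixed
    x = sample_x()
    g = 2.0 * (x @ w)[:, None] * x               # stochastic gradient g(w) = 2 x x^T w
    grad_f = 2.0 * np.mean(x ** 2, axis=0) * w   # gradient of f(w) = sum_i E[x_i^2] w_i^2
    ratio_coord = np.mean(g[:, 0] ** 2) / grad_f[0] ** 2
    ratio_full = np.mean(np.sum(g ** 2, axis=1)) / np.sum(grad_f ** 2)
    print(f"w2 = {w2:6.1f}:  coord-1 ratio = {ratio_coord:10.2f},  full-norm ratio = {ratio_full:5.2f}")
```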
On the other hand, note that the above example obeys a stronger assumption on the smoothness, i.e., for every fixed \(\mathbf{x}\), the stochastic gradient \(g(\mathbf{w})\) is globally Lipschitz. It is natural to ask whether we can relax Assumption 3 by strengthening the assumption on the smoothness. In Section 3, we explain that we use \(\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1}}}\) instead of \(\frac{\|g_{t}\|\|\nabla f(\mathbf{w}_{t})\|}{\sqrt{\mathbf{\nu}_{t-1}}}\) as the auxiliary function because \(g_{t}\not\approx g_{t-1}\). Therefore, tightening the assumption on the smoothness may help us ensure \(g_{t}\approx g_{t-1}\); we then no longer need to bound \(g_{t}\) using Assumption 2, and as a result we will not encounter the mismatch between \(\frac{\partial_{i}f(\mathbf{w}_{t})^{2}}{\sqrt{\mathbf{\nu}_{t-1,i}}}\) and \(\frac{\|\nabla f(\mathbf{w}_{t})\|^{2}}{\sqrt{\mathbf{\nu}_{t-1,i}}}\). Motivated by the above example, we make the assumption that we
have access to a stochastic oracle \(g(\mathbf{w},\zeta)\), and \(g_{t}\) is generated by \(g_{t}=g(\mathbf{w}_{t},\zeta_{t})\) where \(\zeta_{t}\) is sampled independently from a distribution \(\mathcal{D}\). We further assume that \(g\) is \(L\)-Lipschitz with respect to \(\mathbf{w}\) for every fixed \(\zeta\). Such an assumption is common in the stochastic optimization literature (Yun et al., 2021; Shi and Li, 2021). Unfortunately, these assumptions are still not adequate to ensure \(g_{t}\approx g_{t-1}\), since \(g_{t}\) and \(g_{t-1}\) may use different noise \(\zeta\). The good news is that, for the without-replacement version of AdaGrad (also called randomly-reshuffled AdaGrad; see Algorithm 3 below), every \(\zeta\) appears exactly once within one epoch and the above methodology can be used. Note that randomly-reshuffled AdaGrad is the version of AdaGrad commonly adopted in deep learning. Thus, although we slightly change the analyzed algorithm, the problem we consider is still of significance.
```
0: Objective function \(f(\mathbf{w}):=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\mathbf{w})\), learning rate \(\eta>0\), \(\mathbf{w}_{1,1}\in\mathbb{R}^{d}\), \(\mathbf{\nu}_{1,0}\in\mathbb{R}^{d,+}\)
1:For\(t=1\rightarrow\infty\):
2: Uniformly sample \(\{\tau_{t,1},\cdots,\tau_{t,n}\}\) as a random permutation of \([n]\)
3:For\(i=1\to n\):
4: Calculate \(g_{t,i}=\nabla f_{\tau_{t,i}}(\mathbf{w}_{t,i})\)
5: Update \(\mathbf{\nu}_{t,i}=\mathbf{\nu}_{t,i-1}+g_{t,i}^{\odot 2}\)
6: Update \(\mathbf{w}_{t,i+1}=\mathbf{w}_{t,i}-\eta\frac{1}{\sqrt{\mathbf{\nu}_{t,i}}}\odot g_{t,i}\)
7:EndFor
8: Update \(\mathbf{w}_{t+1,1}=\mathbf{w}_{t,n+1}\), \(\mathbf{\nu}_{t+1,0}=\mathbf{\nu}_{t,n}\)
9:EndFor
```
**Algorithm 3** Randomly-reshuffled AdaGrad
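A minimal Python rendering of the pseudocode above may make the indexing concrete; the least-squares finite sum, hyperparameters, and data in the demo are illustrative assumptions.

```python
import numpy as np

def randomly_reshuffled_adagrad(grads, w0, eta=0.5, nu0=1e-4, epochs=50, seed=0):
    """grads: list of callables, grads[j](w) = gradient of f_j at w, with f = (1/n) sum_j f_j."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    w = np.array(w0, dtype=float)
    nu = np.full_like(w, nu0)                 # nu_{1,0}
    for t in range(epochs):
        perm = rng.permutation(n)             # tau_{t,1}, ..., tau_{t,n}
        for i in range(n):
            g = grads[perm[i]](w)             # g_{t,i} = grad f_{tau_{t,i}}(w_{t,i})
            nu = nu + g ** 2                  # nu_{t,i} = nu_{t,i-1} + g_{t,i}^2
            w = w - eta * g / np.sqrt(nu)     # w_{t,i+1}
        # w_{t+1,1} = w_{t,n+1}; nu carries over to the next epoch
    return w

if __name__ == "__main__":
    # Illustrative least-squares finite sum: f_j(w) = 0.5 (a_j . w - b_j)^2.
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((20, 3)), rng.standard_normal(20)
    grads = [lambda w, a=A[j], y=b[j]: (a @ w - y) * a for j in range(20)]
    print(randomly_reshuffled_adagrad(grads, np.zeros(3)))
```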
As mentioned above, we require the following assumptions for the convergence of randomly-reshuffled AdaGrad.
**Assumption 4** (**Assumption 2, reformulated**): _Let \(\mathbf{w}_{t,i}\) and \(g_{t,i}\) be the ones in Algorithm 3. Then, there exist constants \(D_{0}\) and \(D_{1}\), such that, \(\forall t,i\), \(\mathbb{E}_{j\sim\text{Uniform}([n])}\|\nabla f_{j}(\mathbf{w}_{t,i})\|^{2}\leq D_{0}+D_{1}\|\nabla f(\mathbf{w}_{t,i})\|^{2}\)._
**Assumption 5** (**Stochastic \(L\)-smooth condition**): _We assume that \(\forall i\in[n]\), \(f_{i}\) is differentiable and its gradient satisfies \(\forall\mathbf{w}_{1},\mathbf{w}_{2}\in\mathbb{R}^{d}\), we have \(\|\nabla f_{i}(\mathbf{w}_{1})-\nabla f_{i}(\mathbf{w}_{2})\|\leq L\|\mathbf{w}_{1}-\mathbf{w} _{2}\|\)._
**Theorem 7** (**Randomly-reshuffled AdaGrad**): _Let Assumptions 4 and 5 hold. Then, for randomly-reshuffled AdaGrad with any \(\eta>0\), \(\min_{t\in[T]}\|\nabla f(\mathbf{w}_{t,1})\|^{2}=\mathcal{O}(\frac{1+\ln(1+\sqrt{D_{0}T})}{T})+\mathcal{O}(\frac{\sqrt{D_{0}}(1+\ln(1+\sqrt{D_{0}T}))}{\sqrt{T}})\)._
The proof utilizes a randomly-reshuffled version of \(\xi(t)\), i.e., \(\bar{\xi}(t)=\sum_{i=1}^{n}\sum_{l=1}^{d}\frac{|\partial_{l}f(\mathbf{w}_{t,1})||\partial_{l}f_{\tau_{t,i}}(\mathbf{w}_{t,i})|}{\sqrt{\mathbf{\nu}_{t,i,l}}}\), and we defer the details to Appendix C.3. Theorem 7 shows that randomly-reshuffled AdaGrad does converge under the affine noise variance assumption, and extends our analysis techniques to this new setting.
## 6 Convergence of AdaGrad over non-uniformly smooth landscapes
So far, the characterizations of AdaGrad(-Norm) have closely matched those of SGD over uniformly smooth landscapes. However, in practice, the objective function is usually non-uniformly smooth. Simple examples include polynomial functions with degree larger than \(2\), and deep neural networks. A natural question is whether AdaGrad still works well over non-uniformly smooth landscapes. In this section, we analyze AdaGrad(-Norm) under the \((L_{0},L_{1})\)-smooth condition (Zhang et al., 2019), which is considered a more precise characterization of the landscape of neural networks, as supported by exhaustive experiments.
**Assumption 6** (\((L_{0},L_{1})\)-smooth condition): _We assume that \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is differentiable. We further assume that there exist positive constants \(L_{0}\) and \(L_{1}\), such that for all \(\mathbf{w}_{1},\mathbf{w}_{2}\in\mathbb{R}^{d}\) satisfying \(\|\mathbf{w}_{1}-\mathbf{w}_{2}\|\leq\frac{1}{L_{1}}\), we have \(\|\nabla f(\mathbf{w}_{1})-\nabla f(\mathbf{w}_{2})\|\leq(L_{1}\|\nabla f(\mathbf{w}_{1})\|+L_{0})\|\mathbf{w}_{1}-\mathbf{w}_{2}\|\)._
Assumption 6 degenerates to Assumption 1 with \(L=L_{0}\) if \(L_{1}=0\). Therefore, Assumption 6 is more general than Assumption 1. Moreover, Assumption 6 holds for polynomials of any degree and even for exponential functions. Zhang et al. (2019) demonstrate through extensive experiments that Assumption 6 is obeyed on the tasks where adaptive optimizers outperform SGD.
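As a concrete one-dimensional illustration, \(f(x)=x^{4}\) is not uniformly smooth (its second derivative \(12x^{2}\) is unbounded), yet it satisfies a bound of the form \(|f^{\prime\prime}(x)|\leq L_{0}+L_{1}|f^{\prime}(x)|\), which is the differential analogue of Assumption 6; the constants \(L_{0}=12\), \(L_{1}=3\) in the check below are illustrative choices.

```python
import numpy as np

f_prime = lambda x: 4 * x ** 3
f_second = lambda x: 12 * x ** 2
L0, L1 = 12.0, 3.0                      # illustrative constants for f(x) = x^4

xs = np.linspace(-100, 100, 2_000_001)
# Uniform smoothness fails: the second derivative is unbounded over the domain.
print("max |f''| on [-100, 100]:", f_second(xs).max())
# The (L0, L1)-type bound holds: |f''(x)| <= L0 + L1 |f'(x)| at every grid point.
assert np.all(f_second(xs) <= L0 + L1 * np.abs(f_prime(xs)) + 1e-9)
print("(L0, L1)-smooth bound holds at all grid points")
```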
**Theorem 8**: _Let Assumptions 2 and 6 hold. Then, for AdaGrad-Norm with any learning rate \(\eta<\frac{1}{L_{1}}\min\{\frac{1}{64D_{1}},\frac{1}{8\sqrt{D_{1}}}\}\), we have that with probability at least \(1-\delta\),_
\[\min_{t\in[T]}\|\nabla f(\mathbf{w}_{t})\|^{2}=\mathcal{O}\left(\frac{1+\ln(1+ \sqrt{D_{0}T})}{T\delta^{2}}\right)+\mathcal{O}\left(\frac{\sqrt{D_{0}}(1+\ln( 1+\sqrt{D_{0}T}))}{\sqrt{T}\delta^{2}}\right).\]
The proof can be found in Appendix D.1; its key insight is that the additional error terms caused by the \((L_{0},L_{1})\)-smooth condition are of the same order as the "Error" term in the proof of Theorem 2. Theorem 8 shows that AdaGrad-Norm can provably overcome the non-uniform smoothness of the objective function and converges. A similar result can be derived for AdaGrad. Compared to Theorem 2, we additionally require the learning rate to be lower than a threshold. To see that such a requirement is not an artifact of the proof, we provide the following theorem.
**Theorem 9**: _For every learning rate \(\eta>\frac{9\sqrt{5}}{2L_{1}}\), there exists a lower-bounded objective function \(f\) obeying Assumption 6 and a corresponding initialization point \(\mathbf{w}_{0}\), such that AdaGrad with learning rate \(\eta\) initialized at \(\mathbf{w}_{0}\) diverges on \(f\)._
The proof can be found in Appendix D.2. Theorem 9 shows that the learning-rate requirement in Theorem 8 is tight with respect to \(L_{1}\) up to constants. The tuning-free ability of AdaGrad under the \(L\)-smooth condition, i.e., that AdaGrad converges under any learning rate, has been considered an advantage of AdaGrad over SGD. However, combining Theorem 8 and Theorem 9, we demonstrate that this property is lost under a more realistic assumption on the smoothness. On the other hand, Zhang et al. (2019) show that SGD can converge arbitrarily slowly under the \((L_{0},L_{1})\)-smooth condition. Together with Theorem 8, we find another advantage of AdaGrad: it can provably overcome non-uniform smoothness while SGD cannot.
## 7 Related works
**Convergence of AdaGrad over non-convex landscapes.** Duchi et al. (2011) and McMahan and Streeter (2010) simultaneously propose AdaGrad. Since then, there is a line of works analyzing the convergence of AdaGrad over non-convex landscapes (Ward et al., 2020; Li and Orabona, 2019; Zou et al., 2018; Li and Orabona, 2020; Defossez et al., 2020; Gadat and Gavra, 2020; Kavis et al., 2022; Faw et al., 2022). We summarize their assumptions and conclusions in Table 1.
**Non-uniform smoothness.** The convergence analysis of optimizers under non-uniform smoothness was initiated by Zhang et al. (2019), who propose the \((L_{0},L_{1})\)-smooth condition and verify its validity in deep learning. They further prove that clipped SGD converges under such a condition. Since then, their analysis has been extended to other clipped optimizers (Zhang et al., 2020; Yang et al., 2022; Crawshaw et al., 2022). However, no such result exists for AdaGrad.
## 8 Conclusion
In this paper, we analyze AdaGrad over non-convex landscapes. Specifically, we propose a novel auxiliary function to bound the error term that is brought by the update of AdaGrad. Based on this auxiliary function, we are able to significantly simplify the proof for AdaGrad-Norm and establish a tighter convergence rate in the over-parameterized regime. We further extend the analysis to AdaGrad and to non-uniformly smooth landscapes through different variants of the auxiliary function. One future direction is to explore and compare the convergence of AdaGrad and Adam under the \((L_{0},L_{1})\)-smooth condition, given that the convergence behaviors of AdaGrad and SGD are clearly separated under this condition.
|
2306.02179 | Buying Time: Latency Racing vs. Bidding in Transaction Ordering | We design TimeBoost: a practical transaction ordering policy for rollup
sequencers that takes into account both transaction timestamps and bids; it
works by creating a score from timestamps and bids, and orders transactions
based on this score.
TimeBoost is transaction-data-independent (i.e., can work with encrypted
transactions) and supports low transaction finalization times similar to a
first-come first-serve (FCFS or pure-latency) ordering policy. At the same
time, it avoids the inefficient latency competition created by an FCFS policy.
It further satisfies useful economic properties of first-price auctions that
come with a pure-bidding policy. We show through rigorous economic analyses how
TimeBoost allows players to compete on arbitrage opportunities in a way that
results in better guarantees compared to both pure-latency and pure-bidding
approaches. | Akaki Mamageishvili, Mahimna Kelkar, Jan Christoph Schlegel, Edward W. Felten | 2023-06-03T19:20:39Z | http://arxiv.org/abs/2306.02179v2 | # Buying Time: Latency Racing vs. Bidding for Transaction Ordering
###### Abstract
We design TimeBoost: a practical transaction ordering policy for rollup sequencers that takes into account both transaction timestamps and bids; it works by creating a score from timestamps and bids, and orders transactions based on this score.
TimeBoost is transaction-data-independent (i.e., can work with encrypted transactions) and supports low transaction finalization times similar to a first-come first-serve (FCFS or pure-latency) ordering policy. At the same time, it avoids the inefficient latency competition created by an FCFS policy. It further satisfies useful economic properties of first-price auctions that come with a pure-bidding policy. We show through rigorous economic analyses how TimeBoost allows players to compete on arbitrage opportunities in a way that results in better guarantees compared to both pure-latency and pure-bidding approaches.
Transaction ordering; First-come-first-serve; First-price auctions. 1
Footnote 1: This work was completed in the author’s role at Offchain Labs.
1. _Existing use-cases are already centralized_. Decentralized blockchains such as Ethereum are still _ephemerally centralized_ with respect to ordering--for a given block, similar to a centralized sequencer, only a single miner/validator is in complete control of the inclusion and ordering of transactions within the block. Similarly, current layer-2 "rollup"
protocols (such as Arbitrum and Optimism) also employ a centralized sequencer to order transactions in a batch posted to the underlying Ethereum base-chain.
2. _Ordering policies are mostly orthogonal to the problem of sequencer decentralization_. While decentralizing the sequencer is an important active research direction, we note that a suitable transaction ordering policy can be chosen orthogonally to the method of sequencer decentralization. In particular, the decentralized protocol can first be used to agree on single _pre-ordering_ or _scoring_ of transactions, following which a specific ordering policy can be applied. In other words, the output of the decentralized protocol can be thought of simulating the input of a virtual centralized sequencer on which the ordering policy gets applied. An example of this is seen in the recent line of works on fair-ordering [3, 8, 9, 11, 20]--they can be thought of as a decentralized implementation of a first-come-first-serve ordering policy which combines local transaction orderings from many nodes. Furthermore, while current centralized sequencer implementations are semi-trusted in that they receive transactions in plaintext and are expected not to deviate from the specified ordering policy or insert transactions of their own, we note that transaction data can be hidden from the sequencer by using threshold decryption by a committee (i.e., the sequencer only sees encrypted transactions and orders them, only after which a committee decrypts the plaintext) or trusted hardware (such as Intel SGX). Through these techniques, the adversarial behavior of the sequencer can be substantially restricted. The study of ordering policies is important even when the sequencer is trusted (or is suitably constrained as mentioned above) due to the presence of other profit-seeking entities in the system. For instance, after the sequencer publishes state after execution of previous transaction(s), arbitrage opportunities can be created; players in the system will compete with each other to take advantage of these opportunities. Similar situations can also arise due to state updates from external systems.
### 1.1 Existing Ordering Policies
Ordering policies used on blockchains today fall roughly into three categories described below.
**First-come first-serve (FCFS).** One natural ordering policy is the first-come, first-serve (FCFS) rule. Here, transactions are sequenced in the same order that they were received from users. There are several advantages to FCFS: to begin, it is simple to implement and seems intuitively fair--after all, it is a commonly used policy even for real-world interactions. FCFS also minimizes transaction latency: transactions can be continuously sequenced as they arrive, and do not need to conform to the discrete granularity of blocks. The sequencer in the layer-2 rollup Arbitrum employs an FCFS policy. One major disadvantage of FCFS, however, is that it creates _latency competition_, in the sense that entities are incentivized to position themselves as close to the sequencer as possible in order to be the first to react to any new market information. This is a well known and studied problem within traditional financial systems. Indeed, high frequency trading (HFT) firms invest millions of dollars into low-latency infrastructure that can operate at sub-microsecond or even finer timescales; their trading accounts for roughly half of all trading volume [13]. This inclination to latency investment is highly inefficient since the investment happens externally to the system (as opposed to bidding; see below) and therefore cannot be used beneficially within the system. Recent works [1, 17] have also shown the potential for similar strategic manipulation within a pure FCFS protocol in the decentralized setting.
One crucial point to emphasize here is that this latency competition in FCFS _does not disappear even if transaction data is hidden_ (e.g., transactions are encrypted). This is because any state changes (from the sequencer or even from external systems) can trigger a profit opportunity wherein it is beneficial to have the quickest access to the sequencer. As a specific example, an update on the trading price of a token can create an arbitrage opportunity whose profit will go only to the player who can submit its transaction to the sequencer first2. This kind of latency-based arbitrage has already been seen in Arbitrum, which implements a centralized FCFS sequencer.
Footnote 2: Another approach if the sequencer broadcasts state information in a random order to clients is to create many dummy client copies, thereby increasing the chances that some copy gets the feed faster.
**Per-block transaction bidding.** A second natural policy is to group transactions into blocks, then order transactions within a block based on their _bid_. Specifically, each transaction is submitted along with a fee or bid; the sequencer now collects all transactions submitted within some time interval and sequences them by the descending order of their bids. This essentially simulates a first-price all-pay auction [10] (i.e., players bid independently; the highest bid wins but all players need to pay their bid amount) to take advantage of a particular arbitrage opportunity. Since players submit their bids independently, the bidding policy can work as expected even when transactions are encrypted (since state or market updates create arbitrage opportunities).
One advantage of a bidding policy (compared to FCFS) is that the payment is internal to the system and therefore can be utilized within it to e.g., subsidize protocol operation costs.
When the block-time is large (e.g., \(12s\) as in Ethereum), it is expected that for almost all arbitrage opportunities, all interested players can post their bid within the time interval in an attempt to take advantage of the opportunity. However, when the block-time is small (this is typically the case in layer-2 protocols to increase scalability), perhaps surprisingly, having a connection with lower latency can provide a substantial advantage. This is because when the market update happens close to the end of the block time, only players with a faster connection will be able to get their transaction included in the block; consequently, they may be able to take advantage of the arbitrage opportunity with a smaller (or even a zero) bid.
Looking ahead, our TimeBoost policy (which combines both arrival times and bidding) will enable arbitrageurs to prefer bidding even when block times are small, thereby allowing the protocol to capture this value rather than it being lost to external latency infrastructure.
**Block or MEV auctions.** A third widely-used policy auctions off the complete rights to choose and order transactions within a block. Here, the sequencer does not order transactions itself but rather accepts block proposals from external players (often called block _builders_) and chooses the proposal from the builder who pays the most. These auctions initially arose from the realization that significant profit (often referred to as maximal (previously miner) extractable value or MEV [2, 6]) can be extracted by manipulating the ordering of user transactions. In the past two years, through companies such as Flashbots and Bloxroute, an MEV marketplace has been created on Ethereum _outside of the protocol_ to connect block proposers (entities in charge of proposing or sequencing a block) to block builders (players who find MEV opportunities and order user transactions to take advantage of them)--the result has been the extraction of hundreds of millions of dollars in profit from user transactions [15, 18].
While some MEV (such as arbitrage, which provides incentives for price discovery) is benign and can be done without the knowledge of user transactions, other forms of MEV extraction crucially rely on the transaction data. Recent works [18, 19] have shown such
MEV to be significantly detrimental to users. The emergence of such MEV extraction has largely been attributed to the rationality of block proposers as well as the lack of regulation. For example, in traditional financial systems, it is often illegal or at the very least heavily constrained to profit from the knowledge of user transactions (for instance, payment-for-order-flow (PFOF): the selling of user transaction data is illegal in the UK, and, while legal in the US, still requires users to be provided with guarantees of "best execution").
A design goal for our work is therefore to design ordering policies that are data-independent, i.e., they do not use transaction data for ordering. This will allow them to be used even when transactions are encrypted at the time of sequencing.
### Our contributions
**TimeBoost: An ordering policy that combines FCFS and bidding.** We propose TimeBoost, an ordering policy that combines both FCFS-style timestamps and first-price auction style bids. Below, we describe several natural goals that went into our design.
1. **Data independence.** The policy should not utilize the transaction data for ordering. This is a natural goal in order to support encrypted transactions and prevent data-dependent MEV attacks on transaction ordering.
2. **Low finalization time.** The policy should be able to sequence transactions within a short time \(g\) (the specific parameter can be set according to the application). This is important to improve the user experience with the system since transactions will be sequenced within time \(g\) after they are received.
3. **Independence of irrelevant transactions.** The ordering between two given transactions should not depend on the presence of other transactions. This is useful to prevent an adversary from inserting irrelevant transactions that results in flipping the ordering between two target transactions. Importantly, this property also ensures that a transaction submitter's strategy need only consider transactions that are relevant to the party's goals--for example if Alice is trying to capture a particular arbitrage opportunity, she need only worry about other transactions affecting that opportunity.
4. **Inclination to spending via bids instead of latency infrastructure.** As mentioned before, investments into latency infrastructure are highly inefficient from the system standpoint since the value spent cannot be utilized effectively by the system. Therefore, a natural goal is to disincentivize latency investment and instead incentivize players to bid for their transactions. Looking ahead, perhaps surprisingly, we find that the pure bidding policy results in a larger latency competition than our TimeBoost policy which combines bidding with FCFS style timestamps.
**TimeBoost details.** Intuitively, TimeBoost works by assigning _scores_ to transactions based on both their arrival times and their bids. The final ordering is taken to be descending in the transaction scores. More specifically, for a transaction with arrival time \(t\) and bid \(b\), TimeBoost assigns it the score \(S(\mathsf{tx})=\pi(b)-t\), where intuitively \(\pi\) represents a function for "buying time"--by increasing the transaction bid, users can reduce their effective timestamp (or equivalently, increase their score). Section 3 describes how to choose the function \(\pi\).
Importantly, there is a limit to how much time can be "bought" through the bid--in particular, no transaction can outbid a transaction received some \(g\) time earlier. Such a property is required to ensure the quick finalization of user transactions. At the same time, transactions received less than \(g\) time before can always be outbid; this means that
arbitrageurs always have \(g\) time to compete for any arbitrage opportunity as opposed to a pure bidding policy and will therefore prefer bidding over latency infrastructure investments.
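To make the scoring rule concrete, the sketch below orders transactions by \(S(\mathsf{tx})=\pi(b)-t\) using an illustrative bounded time-buying function \(\pi(b)=g\cdot b/(b+c)\); the shape of \(\pi\) and the constants \(g\) and \(c\) here are placeholder assumptions, not the construction chosen in Section 3.

```python
from dataclasses import dataclass

G = 0.5      # maximum time (seconds) that can be "bought"; illustrative
C = 1.0      # bid scale constant; illustrative

def pi(bid: float) -> float:
    """Illustrative bounded time-buying function: increasing in the bid, capped at G."""
    return G * bid / (bid + C)

@dataclass
class Tx:
    name: str
    t: float     # arrival time at the sequencer (seconds)
    b: float     # bid

def timeboost_order(txs):
    # Higher score first; score = pi(b) - t, so earlier arrival and a larger bid both help.
    return sorted(txs, key=lambda tx: pi(tx.b) - tx.t, reverse=True)

txs = [Tx("alice", t=0.00, b=0.0),    # arrives first, no bid
       Tx("bob",   t=0.20, b=5.0),    # 200 ms later, large bid
       Tx("carol", t=0.60, b=50.0)]   # more than G later, cannot outbid alice
print([tx.name for tx in timeboost_order(txs)])
# bob's bought time pi(5.0) ~ 0.42 s exceeds his 0.20 s delay, so bob overtakes alice,
# while carol arrived more than G after alice and stays behind her regardless of her bid.
```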
We also show that TimeBoost satisfies all the useful economic properties of first-price all-pay auctions. Further, we show that players spend exactly the same amount in total with TimeBoost, as they would spend if only latency investment was allowed, except that most of the investment is done through bidding and therefore can be captured within the protocol for e.g., lowering user fees or for protocol development.
## 2 Ordering Policies
### Preliminaries
A transaction \(\mathtt{tx}\) that arrives at the sequencer can be characterized by a tuple \((\mathtt{data},t,b)\) where \(\mathtt{data}\) represents the transaction data, \(t\) denotes the arrival time, and \(b\) denotes the transaction bid (note that when transactions are of different sizes, \(b\) can be instead be considered to be a bid per unit size). Let \(\mathcal{T}\) denote the set of all possible transactions; in principle this can be infinite or even uncountable (e.g., if arrival times are in \(\mathbb{R}^{+}\)) and our results do hold for these cases. For practical use-cases, typically, arrival times can be assumed to be in \(\mathbb{Q}^{+}\) and bids can be assumed to be in \(\mathbb{N}^{\geq 0}\).
An ordering policy now defines how a sequencer orders a finite set \(\mathcal{T}^{\prime}\) of transactions that it has received. A formal definition is given below:
[(Data-Independent) Ordering Policy] An ordering policy (or algorithm) \(\mathbb{P}\) takes as input a finite subset \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) of transactions and outputs a linear ordering \(\mathbb{P}(\mathcal{T}^{\prime})\). For \(\mathtt{tx}\in\mathcal{T}^{\prime}\), let \(\mathbb{P}(\mathcal{T}^{\prime},\mathtt{tx})\) denote the position of transaction \(\mathtt{tx}\) in the ordering \(\mathbb{P}(\mathcal{T}^{\prime})\). In other words, given \(\mathcal{T}^{\prime}\) and \(\mathtt{tx}_{a},\mathtt{tx}_{b}\in\mathcal{T}^{\prime}\), \(\mathbb{P}\) outputs \(\mathtt{tx}_{a}\) before \(\mathtt{tx}_{b}\) if \(\mathbb{P}(\mathcal{T}^{\prime},\mathtt{tx}_{a})<\mathbb{P}(\mathcal{T}^{\prime},\mathtt{tx}_{b})\).
A policy is further called data-independent if it does not make use of the transaction data (i.e., it only uses the arrival time and the bid).
Since we want our ordering policies to not be based on the transaction content, we only consider data-independent policies for the rest of the paper. For simplicity, we can therefore represent a transaction \(\mathtt{tx}\) simply by the tuple \((\mathtt{tx}.t,\mathtt{tx}.b)\). Furthermore, since ties can be broken by some chosen technique, without loss of generality, we can also assume \((\mathtt{tx}.t,\mathtt{tx}.b)\) tuples are unique. While the tie-breaking can be dependent on e.g., transaction ciphertext or metadata, this does not affect our analysis and therefore can be safely ignored for the purpose of our paper.
### Independence of Irrelevant Transactions (IIT)
A useful property for our ordering policy to have is to prevent the ordering decision between transactions \(\mathtt{tx}_{a}\) and \(\mathtt{tx}_{b}\) to change depending on what other transactions are being ordered; in other words, the ordering decision should not depend on irrelevant transactions. Intuitively, this is done to ensure that an adversary cannot create dummy transactions in order to flip the ordering decision between two transactions, and so that a party's bidding strategy can ignore transactions irrelevant to that party. We define this property of independence of irrelevant transactions (IIT) below.
**Definition 2** (Independence of Irrelevant Transactions). _We say that a policy \(\mathbb{P}\) satisfies independence of irrelevant transactions (IIT) if for any pair of transactions \(\mathtt{tx}_{a},\mathtt{tx}_{b}\) and any pair of finite subsets \(\mathcal{T}_{1},\mathcal{T}_{2}\subset\mathcal{T}\), the following holds:_
\[\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\}\cup\mathcal{T}_{1}, \mathsf{tx}_{a}) <\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\}\cup\mathcal{T}_{1}, \mathsf{tx}_{b})\] \[\Leftrightarrow\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\}\cup \mathcal{T}_{2},\mathsf{tx}_{a}) <\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\}\cup\mathcal{T}_{2}, \mathsf{tx}_{b}).\]
### IIT Implies a Score-Based Policy
We now show that the IIT property forces the policy to be score-based; that is, the relative order of any two transactions must not depend on the set \(\mathcal{T}^{\prime}\) being ordered.
Intuitively, a score-based policy works as follows: for a transaction \(\mathsf{tx}\), it assigns a score \(S(\mathsf{tx})\) based only on the arrival time \(\mathsf{tx}.t\) and the bid \(\mathsf{tx}.b\). Here too, scoring ties can be broken in a pre-specified manner. The output sequence is then taken to be the descending order of transaction scores. Score-based policies are formally defined below:
**Definition 3** (Score-based policy). _A score is a function \(S:\mathcal{T}\to\mathbb{R}\) that assigns to each possible transaction \(\mathsf{tx}\in\mathcal{T}\) a score \(S(\mathsf{tx})\). An ordering policy \(\mathbb{P}\) is called score-based if there exists a score function \(S\) such that \(\mathbb{P}\) sorts transactions according to \(S\). In other words, there exists \(S\) such that for any \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) and \(\mathsf{tx}_{a},\mathsf{tx}_{b}\in\mathcal{T}^{\prime}\), it holds that \(\mathbb{P}(\mathcal{T}^{\prime},\mathsf{tx}_{a})<\mathbb{P}(\mathcal{T}^{\prime},\mathsf{tx}_{b})\) if and only if \(S(\mathsf{tx}_{a})>S(\mathsf{tx}_{b})\)._
For finite \(\mathcal{T}\), we can directly show that IIT implies score-based policies. To show the result for infinite sets, we need to employ the following set-theoretic axiom (defined below) by Cantor [4]. Similar definitions have also been used in the context of utility theory [7].
**Definition 4** (Cantor's Axiom [4]). _We say that a pair \((\mathbb{P},\mathcal{T})\) satisfies Cantor's axiom if there exists a countable set \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) such that for any pair of transactions \(\mathsf{tx}_{a},\mathsf{tx}_{b}\in\mathcal{T}\) there exists an instance of \(\mathbb{P}\) in which some transaction in \(\mathcal{T}^{\prime}\) is ordered between \(\mathsf{tx}_{a}\) and \(\mathsf{tx}_{b}\)._
Formally there is a finite set \(\mathcal{T}^{\prime\prime}\subset\mathcal{T}\) with \(\mathsf{tx}_{a},\mathsf{tx}_{b}\in\mathcal{T}^{\prime\prime}\) and a \(\mathsf{tx}_{c}\in\mathcal{T}^{\prime}\cap\mathcal{T}^{\prime\prime}\) (possibly \(\mathsf{tx}_{c}=\mathsf{tx}_{a}\) or \(\mathsf{tx}_{c}=\mathsf{tx}_{b}\)) such that
\[\mathbb{P}(\mathcal{T}^{\prime\prime},\mathsf{tx}_{a}) \leq\mathbb{P}(\mathcal{T}^{\prime\prime},\mathsf{tx}_{c})\leq \mathbb{P}(\mathcal{T}^{\prime\prime},\mathsf{tx}_{b}),\] \[\text{or}\] \[\mathbb{P}(\mathcal{T}^{\prime\prime},\mathsf{tx}_{b}) \leq\mathbb{P}(\mathcal{T}^{\prime\prime},\mathsf{tx}_{c})\leq \mathbb{P}(\mathcal{T}^{\prime\prime},\mathsf{tx}_{a}).\]
We can now establish the following correspondence between IIT and score-based policies.
**Theorem 3.1** (IIT \(\Leftrightarrow\) Score-Based). _Let \(\mathcal{T}\) denote the set of all transactions. The following hold for any ordering policy \(\mathbb{P}\):_
1. If \(\mathcal{T}\) is countable, then \(\mathbb{P}\) satisfies IIT if and only if it is score-based.
2. If \(\mathcal{T}\) is uncountable and \((\mathbb{P},\mathcal{T})\) satisfies Cantor's axiom, then \(\mathbb{P}\) satisfies IIT if and only if it is score-based.
Proof.: It is straightforward to see that a score-based algorithm satisfies the independence of irrelevant transactions (since the score of a transaction depends only on itself and not other transactions).
For the opposite direction, we first prove the second part of the theorem (the uncountable case). We define an order \(\prec\) over \(\mathcal{T}\) where
\[\mathsf{tx}_{a}\prec\mathsf{tx}_{b}:\Leftrightarrow\mathbb{P}(\{\mathsf{tx}_{ a},\mathsf{tx}_{b}\},\mathsf{tx}_{a})<\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\}, \mathsf{tx}_{b}).\]
Since \(\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\})\) is a well-defined ordering for any two transactions \(\mathsf{tx}_{a},\mathsf{tx}_{b}\in\mathcal{T}\), the order \(\prec\) is complete and anti-symmetric. By independence, and since \(\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b},\mathsf{tx}_{c}\})\) is a well-defined order for any three transactions \(\mathsf{tx}_{a},\mathsf{tx}_{b},\mathsf{tx}_{c}\in\mathcal{T}\), we have
\[\mathsf{tx}_{a}\prec\mathsf{tx}_{b}\prec\mathsf{tx}_{c}\] \[\Rightarrow \mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\},\mathsf{tx}_{a})<\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b}\},\mathsf{tx}_{b})\text{ and }\mathbb{P}(\{\mathsf{tx}_{b},\mathsf{tx}_{c}\},\mathsf{tx}_{b})<\mathbb{P}(\{\mathsf{tx}_{b},\mathsf{tx}_{c}\},\mathsf{tx}_{c})\] \[\Rightarrow \mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b},\mathsf{tx}_{c}\},\mathsf{tx}_{a})<\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b},\mathsf{tx}_{c}\},\mathsf{tx}_{b})<\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{b},\mathsf{tx}_{c}\},\mathsf{tx}_{c})\] \[\Rightarrow \mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{c}\},\mathsf{tx}_{a})<\mathbb{P}(\{\mathsf{tx}_{a},\mathsf{tx}_{c}\},\mathsf{tx}_{c})\] \[\Rightarrow \mathsf{tx}_{a}\prec\mathsf{tx}_{c}\]
Therefore, \(\prec\) is transitive. We let \(\mathsf{tx}_{a}\preceq\mathsf{tx}_{b}\) iff \(\mathsf{tx}_{a}\prec\mathsf{tx}_{b}\) or \(\mathsf{tx}_{a}=\mathsf{tx}_{b}\).
The Cantor axiom and independence imply that there is a countable \(\mathcal{T}^{\prime}\subset\mathcal{T}\) so that the order \(\prec\) satisfies that for any \(\mathsf{tx}_{a},\mathsf{tx}_{b}\in\mathcal{T}\) there is a \(\mathsf{tx}_{c}\in\mathcal{T}^{\prime}\) such that
\[\mathsf{tx}_{a}\prec\mathsf{tx}_{b}\Rightarrow\mathsf{tx}_{a}\preceq\mathsf{ tx}_{c}\preceq\mathsf{tx}_{b}\]
By Theorem 1.1 in [5], this, in turn, implies that there is a numerical representation of the order \(\prec\) which is a score \(S:\mathcal{T}\to\mathbb{R}\) such that for any two transactions \(\mathsf{tx}_{a},\mathsf{tx}_{b}\in\mathcal{T}\) we have \(\mathsf{tx}_{a}\prec\mathsf{tx}_{b}\) if and only if \(S(\mathsf{tx}_{a})>S(\mathsf{tx}_{b})\).
For the first part of the theorem, note that the previous argument also works for a countable \(\mathcal{T}\) and in that case we can choose \(\mathcal{T}^{\prime}=\mathcal{T}\) where the Cantor axiom is now trivially satisfied.
The above result extends to the case where the policy creates a weak ordering (which can be made strict through a tie-breaking procedure) rather than a strict ordering of transactions. In that case, Definitions 2 and 3 are adapted to weak orders, and we get a score that might assign the same value to two different transactions. The relaxation to weak orders is useful for the case that the set of transactions is uncountable and not a subset of the real numbers (e.g. if \(\mathcal{T}=\mathbb{R}_{+}^{2}\)). In that case, the Cantor axiom is impossible to satisfy for strict orders but satisfiable for weak orders.
**Discussion.** We note that in our context, assuming \(\mathcal{T}\) is countable or even finite is safe, as there is a finite smallest time increment for timestamps and a finite smallest bid increment. Moreover, the ordering policy deals with ordering transactions in a finite time interval and bids will be upper-bounded by the maximum value in the system (e.g., the maximum number of tokens). However, for the subsequent economic analysis, it will be more convenient to work with the continuum where differences in time stamps and bids can be arbitrarily small.
Having proven that score-based algorithms are essentially the only ones satisfying the independence of irrelevant transactions property, we turn to selecting the most natural one among them. Note that FCFS is the scoring function that corresponds to scoring transactions by their timestamp only while scoring transactions only by bids corresponds to the first-price auction solution. In the next section, we show how our scoring policy TimeBoost corresponds to a simple mixture of these two strategies.
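To make the score-based viewpoint concrete, the following Python sketch orders a handful of hypothetical transactions under these two extreme scoring rules. It is purely illustrative; the helper names (`order_by_score`, `fcfs_score`, `bid_score`) and the sample transactions are ours, not part of any protocol specification.

```python
def order_by_score(txs, score):
    """Sort (arrival_time, bid) pairs by descending score (ties ignored here)."""
    return sorted(txs, key=lambda tx: -score(*tx))

fcfs_score = lambda t, b: -t   # first-come-first-serve: earlier arrival wins
bid_score = lambda t, b: b     # pure bidding: higher bid wins

txs = [(0.05, 0.0), (0.10, 3.0), (0.30, 9.0)]
print(order_by_score(txs, fcfs_score))  # [(0.05, 0.0), (0.1, 3.0), (0.3, 9.0)]
print(order_by_score(txs, bid_score))   # [(0.3, 9.0), (0.1, 3.0), (0.05, 0.0)]
```

TimeBoost, defined in the next section, corresponds to a score that mixes these two ingredients.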
## 3 TimeBoost Description
We now formally define the TimeBoost ordering policy in this section. As mentioned before, we want TimeBoost to satisfy the independence of irrelevant transactions property (i.e., it needs to be a scoring function based on Theorem 3.1) and also provide low confirmation-latency
for transactions. Therefore, we will only allow TimeBoost to consider transactions within a time interval of length \(g\); this granularity \(g\) can be set suitably based on the particular use case.
**Basic model.** Suppose there are \(n\) transactions in the \(g\) time interval, labeled \(\mathsf{tx}_{1},\mathsf{tx}_{2},\cdots,\mathsf{tx}_{n}\), and sorted by increasing arrival time. Each transaction \(\mathsf{tx}_{i}\) is characterized by a pair of a timestamp or arrival time, denoted by \(t_{i}\), and a bid, denoted by \(b_{i}\geq 0\). Formally, we view a transaction as a tuple of non-negative reals, \(\mathsf{tx}_{i}=(t_{i},b_{i})\in\mathbb{R}^{+}\times\mathbb{R}^{\geq 0}\).
**TimeBoost scoring function.** Intuitively, for the TimeBoost scoring function, we propose to allow users to "buy time" using their transaction bid; in other words, transactions will be sorted by increasing timestamps (as in FCFS) but now users are allowed to decrease their effective timestamp (i.e., increase their score) through bids.
Formally, the score of a transaction \(\mathsf{tx}_{i}=(t_{i},b_{i})\) is computed as follows:
\[S(t_{i},b_{i})=\pi(b_{i})-t_{i}. \tag{1}\]
where \(\pi(b_{i})\) denotes the priority or advantage gained by bidding \(b_{i}\). Transactions are now chosen in descending order of their scores.
**Choosing a bidding function \(\pi\).** To choose the bidding function \(\pi\) for TimeBoost, we start by defining several natural properties that should be satisfied.
1. \(\pi(0)=0\). This normalization implies that paying \(0\) bid gives no additional advantage.
2. \(\pi^{\prime}(b)>0\) for all \(b\in\mathbb{R}^{+}\) where \(\pi^{\prime}\) denotes the first derivative of \(\pi\) with respect to the bid. This implies that the priority increases with the bid, which gives incentive to bid more for a higher priority.
3. \(\lim_{b\to\infty}\pi(b)=g\). This implies that no transaction can outbid a transaction which arrived \(g\) time earlier (but any time advantage of less than \(g\) can be outbid). Through this, we can guarantee that the transaction ordering can be finalized within time \(g\).
4. \(\pi^{\prime\prime}(b)<0\) for all \(b\in\mathbb{R}^{+}\) where \(\pi^{\prime\prime}\) denotes the second derivative of \(\pi\) with respect to the bid. This means that priority is concave, or equivalently, the cost of producing the (bidding) signal is convex. This is generally necessary to obtain the interior solution of the equilibrium condition.
The simplest bidding function satisfying the above constraints is the function:
\[\pi(b_{i}):=\frac{gb_{i}}{b_{i}+c} \tag{2}\]
where \(c\) is some constant. We will use this as the bidding function for TimeBoost. In the next section, we provide an economic analysis for TimeBoost. For this, we will assume that \(c=1\).
**Complexity.** For any incoming transaction \(\mathsf{tx}=(t,b)\), the sequencer can finalize \(\mathsf{tx}\) after a delay of \(g-\pi(b)\). This is because after this point, no later transaction can outbid \(\mathsf{tx}\). If transactions arrive at rate \(r\), the space complexity of the sequencing algorithm is \(\Theta(r)\) and the computational cost per transaction is \(\Theta(\log r)\), assuming pending transactions are stored in a priority queue, ranked by score.
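The following Python sketch illustrates this priority-queue implementation on an offline list of arrivals. It is a simplified illustration assuming \(c=1\); the function names and the toy transactions are ours. Because a transaction's finalization deadline \(t+g-\pi(b)\) equals \(g\) minus its score, popping a min-heap keyed on deadlines emits transactions in descending-score order.

```python
import heapq

def pi(b, g, c=1.0):
    """Priority bought by bid b, as in Equation (2)."""
    return g * b / (b + c)

def timeboost_stream(arrivals, g):
    """Sketch of a streaming sequencer; `arrivals` are (time, bid, tx_id)
    tuples sorted by arrival time. Each transaction is finalized once
    g - pi(b) has elapsed since its arrival."""
    pending, output = [], []
    for t, b, tx_id in arrivals:
        while pending and pending[0][0] <= t:   # deadlines that have passed
            output.append(heapq.heappop(pending)[1])
        heapq.heappush(pending, (t + g - pi(b, g), tx_id))
    while pending:                              # flush the queue at the end
        output.append(heapq.heappop(pending)[1])
    return output

# a later, well-funded transaction overtakes an earlier zero-bid one
print(timeboost_stream([(0.0, 0.0, "a"), (0.1, 9.0, "b"), (0.9, 0.0, "c")], g=0.5))
# -> ['b', 'a', 'c']
```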
### TimeBoost Economic Analysis Overview
We now describe the model for analyzing the economics for our TimeBoost ordering policy. The next two sections will describe this analysis in detail.
**Basic model.** Consider an arbitrage opportunity that occurs at some time (w.l.o.g., this can be taken as time \(0\)). Users (from now on referred to as players) now need to decide (1) how to send their transaction to the sequencer; this corresponds to the investment in
latency; and (2) how much extra to bid for their transaction to get higher priority. We will analyze a simple economic model of this decision problem.
Assume that it costs user \(i\) the amount \(c_{i}(t)\) to get its transaction received by the sequencer \(t\) time after the arbitrage opportunity arises. The only requirement on \(c_{i}(t)\), for now, is that it is decreasing in \(t\). When the arbitrage opportunity arises, a player \(i\) has a valuation \(v_{i}\) for having its transaction executed first among the transactions contending for the same opportunity.
**Analysis organization.** We begin with an analysis with two players in Section 4. Within this, we consider different models based on when the latency investment needs to occur. Broadly, we consider two models for latency investment: ex-ante (Section 4.1) and ex-post (Section 4.2). Ex-ante means that the latency investment needs to happen _before_ learning the arbitrage opportunity while ex-post means that the latency investment can occur after learning about the arbitrage opportunity.
In Section 5, we generalize our results to many competing players.
## 4 Analysis of TimeBoost with 2 Players
As a starting point, assume that there are two players with valuations \(v_{1}\) and \(v_{2}\), distributed as per the cumulative distribution functions (CDFs) \(F_{1}\) and \(F_{2}\). That is, the probability that the valuation of player \(i\) is less or equal to \(x\) is equal to \(F_{i}(x)\).
For each valuation \(v\), the player may choose their specific latency investment. We can model this as a function \(t_{i}:V\to\mathbb{R}\), such that \(t_{i}(v)\) is the latency/time chosen by a player \(i\) with valuation \(v\). For simplicity, assume that the cost functions and value distributions are the same: \(c_{i}(t)=c(t)\) and \(F_{i}=F\). Throughout the paper, whenever final numerical values are derived, we assume that \(F:[0,1]\to[0,1]\) is the uniform distribution with \(F_{i}(x)=x\) for \(x\in[0,1]\), for \(i\in\{1,2\}\). Obtaining numerical values for other distribution functions is very similar, but we choose the uniform distribution for simplicity of exposition. However, most of the computations are done for general distribution functions.
We now consider two different assumptions regarding the investment in latency improvement. In the first model (ex-ante), we assume that the players need to invest in their latency infrastructure in advance: they acquire or rent servers close to the sequencer prior to knowing the value of the arbitrage opportunities they are competing for. In the second (ex-post) model, we assume that the players are able to invest in the latency after they learn about their valuation of the arbitrage. This corresponds to the case where the arbitrage opportunity itself takes some time to be realized3. In this case, the transaction sender can schedule its transaction through the third-party service, which guarantees the delivery of the transaction within some time interval, once the arbitrage opportunity is realized. In both cases, bidding is naturally assumed to be an interim decision; this is in fact one of its biggest advantages, as the valuation is already learned when the bid is chosen.
Footnote 3: An example of such an opportunity is a 12-second delay on the Ethereum network for a transaction to be scheduled.
### Ex-Ante Latency Investment
In this model, players learn their valuations only after they have already invested in latency infrastructure. If players can only compete through latency, the interaction between them becomes a static game. We study equilibrium solutions of these games. A similar setting is
considered in [16]. The results obtained in the following two subsections are concrete cases of folk results in microeconomic theory; however, we include their proofs for completeness.
#### Only latency investment
As a simple first step, we start by analyzing the game where only latency investment is allowed. Let \(x_{i}\) be the amount invested in latency by player \(i\) (so that he obtains a delay of \(t_{i}(x_{i})\)). Let \(V_{i}\) denote the valuation random variable of player \(i\). Then, player \(i\) has the following ex-ante payoff:
\[\text{Payoff}_{i}=\begin{cases}E[V_{i}]-x_{i}&\text{if player invests strictly more than the other player}\\ \frac{1}{2}E[V_{i}]-x_{i}&\text{if he invests an equal amount (assuming random tie-breaking)}\\ -x_{i}&\text{otherwise}\end{cases}\]
First, we note that there is no pure strategy Nash equilibrium of the game in which the players' strategy sets are \(\mathbb{R}^{+}\). It is easy to show this by a case distinction: there are simple profitable deviations in each case. Next, we focus on the mixed equilibrium solution and obtain the following result.
There is a symmetric equilibrium in mixed strategies where each player \(i\) chooses \(x_{i}\) uniformly at random on the interval \((0,E[V_{i}])\).
Proof.: By construction, the payoff of player \(j\) of playing \(x_{j}\leq E[V_{j}]\) against the uniform strategy on \((0,E[V_{i}])\) is
\[F(x_{j})E[V_{j}]-x_{j}=\frac{x_{j}}{E[V_{i}]}E[V_{j}]-x_{j}=0.\]
Choosing a strategy \(x_{j}>E[V_{j}]\) gives a negative payoff. Therefore, each \(0\leq x_{j}\leq E[V_{j}]\) is a best response of player \(j\), and mixing uniformly among them is also a best response.
The above-described equilibrium is unique up to a change of strategy on a null set and in any mixed equilibrium, both players obtain the same payoffs as in this equilibrium. Note that the result is independent of the latency cost function. The only property used is that if a player invests more than the other player in the latency technology, its transaction is scheduled earlier.
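As a quick sanity check of this indifference argument, the following Monte Carlo sketch (our own illustration, assuming uniform valuations on \([0,1]\) so that \(E[V_{i}]=0.5\)) estimates the payoff of a few pure investment levels against an opponent mixing uniformly on \((0,E[V_{i}])\); all estimates are close to zero, as the proof predicts.

```python
import random

def expected_payoff(x, ev, trials=100_000):
    """Payoff of the pure investment x against an opponent who mixes
    uniformly on (0, ev), where ev = E[V] is the expected valuation."""
    wins = sum(x > random.uniform(0.0, ev) for _ in range(trials))
    return (wins / trials) * ev - x

random.seed(1)
for x in (0.1, 0.25, 0.4):
    print(x, round(expected_payoff(x, ev=0.5), 3))  # all approximately 0
```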
#### Budget constraints
We now model the fact that players may not have access to an arbitrary amount of money to invest in improving their latency, but are instead constrained by a budget. Let \(B_{i}\) denote the budget of player \(i\), meaning that player \(i\) cannot spend more than \(B_{i}\). We consider an asymmetric case where one (weak) player has a budget \(B_{1}<E[V_{i}]\) and the other (strong) player has a larger budget with \(B_{2}>B_{1}\). First, note that, similar to the previous section with unlimited access to money, there is no pure strategy Nash equilibrium. Therefore, we switch to mixed strategy equilibria. With slight abuse of notation, let \(F_{i}\) denote the distribution over investment levels played by player \(i\).
There exists a mixed Nash equilibrium solution in the game in which the weak player receives a payoff of \(0\) and the strong player receives a payoff of \(E[V_{i}]-B_{1}\).
Proof.: Consider the strategy profile in which the first player plays according to the following (mixed) strategy:
\[F_{1}(x)=\begin{cases}\frac{x}{E[V_{i}]}+\frac{E[V_{i}]-B_{1}}{E[V_{i}]},&x\in(0, B_{1}],\\ \frac{E[V_{i}]-B_{1}}{E[V_{i}]},&x=0,\\ 1,&x>B_{1},\end{cases}\]
and the second player plays according to
\[F_{2}(x)=\begin{cases}\frac{x}{E[V_{i}]},&x\in[0,B_{1}),\\ 1,&x\geq B_{1},\end{cases}\]
This profile is a mixed strategy equilibrium: the first, weak player obtains an expected payoff \(0\) for any choice of \(0\leq x_{1}<B_{1}\), and the second, strong player obtains an expected payoff of \(E[V_{i}]-B_{1}\) for any choice \(0<x_{2}\leq B_{1}\). Choosing \(x_{2}>B_{1}\) is wasteful for the second player and will not occur in equilibrium. Thus, both players are indifferent between all pure strategies in the support of \(F_{1}\) resp. \(F_{2}\), and for player \(2\), choosing an action outside of the support of \(F_{2}\) is dominated. The mixed strategies therefore form a Nash equilibrium.
Similarly to the unconstrained case, the above-described equilibrium is unique up to a change of strategy on a null set, and in any mixed equilibrium both players obtain the same payoffs as in this equilibrium. Also as in the previous section, the result is independent of the latency cost function: the only property needed to derive it is that if a player invests more than the other player in the latency technology, its transaction is faster (has a lower timestamp).
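The payoffs in this equilibrium can likewise be checked numerically. The sketch below (illustrative only; it samples the two mixed strategies stated in the proof, assuming uniform valuations with \(E[V_{i}]=0.5\) and a weak budget \(B_{1}=0.3\)) estimates the on-path expected payoffs, which come out close to \(0\) for the weak player and \(E[V_{i}]-B_{1}=0.2\) for the strong player.

```python
import random

def sample_weak(ev, b1):
    # atom of mass (ev - b1) / ev at 0, otherwise uniform on (0, b1]
    return 0.0 if random.random() < (ev - b1) / ev else random.uniform(0.0, b1)

def sample_strong(ev, b1):
    # uniform on [0, b1) with mass b1 / ev, otherwise an atom at b1
    return random.uniform(0.0, b1) if random.random() < b1 / ev else b1

random.seed(2)
ev, b1, trials = 0.5, 0.3, 200_000
weak = strong = 0.0
for _ in range(trials):
    x1, x2 = sample_weak(ev, b1), sample_strong(ev, b1)
    weak += (ev if x1 > x2 else 0.0) - x1
    strong += (ev if x2 > x1 else 0.0) - x2
print(round(weak / trials, 3), round(strong / trials, 3))  # ~0.0 and ~0.2
```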
#### Ex-ante Latency with Interim Bidding
We now analyze the model where both latency and bidding are allowed but the latency is ex-ante. That is, investment in latency happens before players learn their valuations but after learning their valuation players can use bidding to improve the transaction score.
We consider a version where players learn the other players' latency investment decisions before bidding. This models the fact that players will typically play the game repeatedly and can therefore observe latency levels of each other.
In the following let \(x=(x_{1},x_{2})\) be the latency investment levels chosen by the two bidders and let \(\Delta:=t_{2}(x_{2})-t_{1}(x_{1})\) be the corresponding difference in latency. W.l.o.g. assume \(\Delta\geq 0\). First, we consider the case that \(\Delta=0\):
**Proposition 9**.: _There is a completely separating equilibrium of the bidding game when both bidders have made the same ex-ante investment._
Proof.: Given in Section 4.1.4
Next, we consider the case that \(\Delta\neq 0\). For the case of different ex-ante investment we get partially separating equilibria where bidders do not bid for low valuations and bid for high valuations. The bidding strategies are asymmetric in general. However, for sufficiently large \(g\) the equilibrium becomes approximately symmetric and approximately efficient. See Figure 1 for a graphical illustration.
**Proposition 10**.: _There is an equilibrium of the bidding game which is separating conditional on bidding: There is a threshold \(\sqrt{\frac{\Delta}{g-\Delta}}\), such that a bidder does not bid if his valuation is
below the threshold and bids if his valuation is above the threshold. Conditional on bidding, the high latency bidder \(i\) produces a higher signal than the low latency bidder \(j\) for equal valuations: \(\pi_{i}(v)-t_{i}>\pi_{j}(v)-t_{j}\), for \(v>\sqrt{\frac{\Delta}{g-\Delta}}\)._
Proof.: Given in Section 4.1.4
The equilibrium analysis in Propositions 9 and 10 indicates how efficient our transaction ordering policy is as a function of the latency investment of bidders. If bidders have the same latency, we have a standard all-pay auction which yields a fully efficient outcome. If there is a difference in latency, there is no bidding by low-valuation bidders and approximately equal signals are produced for equal valuations by high-valuation bidders. Conditional on entry, low latency bidders underbid and high latency bidders overbid relative to the standard all-pay strategies. Efficiency depends on the latency difference and the \(g\) parameter. If \(g\) is chosen sufficiently large, the auction is approximately efficient. A too low \(g\) can be detected by low bidding activity. Hence our transaction policy can strike a balance between fairness, low latency and efficiency if properly parameterized.
#### Proofs
Proof of Proposition 9. We want to determine bidding signals \(\pi_{1}(v_{1},\Delta)\) and \(\pi_{2}(v_{2},\Delta)\), which are functions of valuations and the difference in latency. For a given \(\Delta\), denote the inverses of \(\pi_{1}(\cdot,\Delta)\) and \(\pi_{2}(\cdot,\Delta)\) by \(\tilde{v}_{1}(\cdot,\Delta)\) and \(\tilde{v}_{2}(\cdot,\Delta)\). Then bidder \(1\) solves at the interim stage
\[\max_{\pi\geq 0}Pr[\pi-t_{1}(x_{1})\geq\pi_{2}(v_{2},x)-t_{2}(x_{2})]v_{1}- \frac{\pi}{g-\pi}=F(\tilde{v}_{2}(\pi+\Delta,\Delta))v_{1}-\frac{\pi}{g-\pi},\]
We obtain the first order condition:
\[f(\tilde{v}_{2}(\pi+\Delta,\Delta))v_{1}\frac{\partial\tilde{v}_{2}(\pi+\Delta,\Delta)}{\partial\pi}=\frac{g}{(g-\pi)^{2}}\]
For the uniform distribution, this simplifies to:
\[v_{1}\frac{\partial\tilde{v}_{2}(\pi+\Delta,\Delta)}{\partial\pi}=\frac{g}{(g -\pi)^{2}}.\]
Figure 1: Example of equilibrium signaling functions for \(g=10\) and \(\Delta=0.1\). Timestamps are normalized so that \(t_{2}=0\). The blue function is the equilibrium signal \(\pi_{1}(v)-t_{1}\) for bidder \(1\) as a function of the valuation. The red function is the equilibrium signal \(\pi_{2}(v)-t_{2}\) for bidder \(2\) as a function of the valuation.
Similarly, for bidder 2 we obtain
\[v_{2}\frac{\partial\tilde{v}_{1}(\pi-\Delta,\Delta)}{\partial\pi}=\frac{g}{(g- \pi)^{2}}.\]
The two equations give a system of differential equations that need to be solved for \(\pi_{1}\) and \(\pi_{2}\) or alternatively for \(\tilde{v}_{1}\) and \(\tilde{v}_{2}\). Alternatively, we can write the system as:
\[\tilde{v}_{1}(\pi,\Delta)\frac{\partial\tilde{v}_{2}(\pi+\Delta,\Delta)}{ \partial\pi}=\frac{g}{(g-\pi)^{2}}. \tag{3}\]
\[\tilde{v}_{2}(\pi,\Delta)\frac{\partial\tilde{v}_{1}(\pi-\Delta,\Delta)}{ \partial\pi}=\frac{g}{(g-\pi)^{2}}. \tag{4}\]
The solution to (3) and (4) in case of equal investment (so that \(\Delta=0\)) and a symmetric equilibrium is given by the following formula:
\[\tilde{v}_{1}(\pi,0)=\tilde{v}_{2}(\pi,0)=\sqrt{2\int_{0}^{\pi}\frac{g}{(g- \pi)^{2}}d\pi}=\sqrt{\frac{2\pi}{g-\pi}}. \tag{5}\]
We solve for the signal as a function of the valuation:
\[v^{2}=\frac{2\pi}{g-\pi}\Leftrightarrow\pi=\frac{gv^{2}}{2+v^{2}}.\]
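As an informal numerical check of this symmetric solution (not part of the original argument), the sketch below evaluates a bidder's interim payoff against an opponent who follows the derived strategy and confirms by grid search that the payoff is maximized at \(\pi=\frac{gv^{2}}{2+v^{2}}\). The cost of producing signal \(\pi\) is the bid \(\pi/(g-\pi)\) (assuming \(c=1\)), and valuations are assumed uniform on \([0,1]\).

```python
import math

def equilibrium_signal(v, g):
    return g * v * v / (2.0 + v * v)

def interim_payoff(pi, v, g):
    # win probability against a uniform opponent playing the equilibrium
    # strategy, minus the bid pi / (g - pi) needed to produce signal pi
    win = min(math.sqrt(2.0 * pi / (g - pi)), 1.0) if pi > 0 else 0.0
    return win * v - pi / (g - pi)

g, v = 10.0, 0.5
grid = [i * (g / 3.0) / 100_000 for i in range(100_001)]
best = max(grid, key=lambda p: interim_payoff(p, v, g))
print(round(best, 4), round(equilibrium_signal(v, g), 4))  # both near 1.1111
```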
Proof of Proposition 10.: When \(x_{1}\neq x_{2}\), we can first sum up (3) and (4) to obtain a differential equation for the product \(v_{1}(\pi)v_{2}(\pi+\Delta)\):
\[\frac{d(v_{1}(\pi)v_{2}(\pi+\Delta))}{d\pi}=\frac{g}{(g-\pi)^{2}}+\frac{g}{(g -\pi-\Delta)^{2}}. \tag{6}\]
Integrating both sides of the differential equation above gives the solution:
\[v_{1}(\pi)v_{2}(\pi+\Delta)=\frac{\pi}{(g-\pi)}+\frac{\pi+\Delta}{g-\pi- \Delta}+K. \tag{7}\]
To determine the constant we need boundary conditions. For bidder 1, at the threshold where he is indifferent between bidding and not bidding, we have \(\pi_{1}=0\); for bidder 2, at the threshold where he is indifferent between bidding and not bidding, he needs to overcome the handicap, so we have \(\pi_{2}=\Delta\). At the threshold, bidder 2 should make the same profit as from pooling (not bidding),
\[v_{1}(0)v_{2}(\Delta)=\frac{\Delta}{g-\Delta}\Rightarrow K=0.\]
Combining (7) and (4) we obtain a separable differential equation:
\[\frac{dv_{1}(\pi,\Delta)}{v_{1}(\pi,\Delta)}=d\pi\frac{g}{(g-\pi-\Delta)^{2}} \left(\frac{\pi}{g-\pi}+\frac{\pi+\Delta}{g-\pi-\Delta}\right)^{-1}. \tag{8}\]
Combining (7) and (3) we obtain another separable differential equation:
\[\frac{dv_{2}(\pi+\Delta,\Delta)}{v_{2}(\pi+\Delta,\Delta)}=d\pi\frac{g}{(g- \pi)^{2}}\left(\frac{\pi}{g-\pi}+\frac{\pi+\Delta}{g-\pi-\Delta}\right)^{-1}. \tag{9}\]
Integrating both parts of the equation (8) solves the (logarithm of) the value as a function of the bid:
\[\ln(v_{1}(\pi))-\ln(v_{1}(0))=\int_{0}^{\pi}\frac{g}{(g-\pi-\Delta)^{2}}\left( \frac{\pi}{g-\pi}+\frac{\pi+\Delta}{g-\pi-\Delta}\right)^{-1}d\pi.\]
Similarly, integrating both parts of the equation (9) solves the (logarithm of) the value as a function of the bid:
\[\ln(v_{2}(\pi+\Delta))-\ln(v_{2}(\Delta))=\int_{0}^{\pi}\frac{g}{(g-\pi)^{2}} \left(\frac{\pi}{g-\pi}+\frac{\pi+\Delta}{g-\pi-\Delta}\right)^{-1}d\pi.\]
To determine the marginal valuations \(v_{1}(0)\) and \(v_{2}(\Delta)\) at which the two bidders start bidding, note that the support of \(\pi_{i}-t_{i}\) and that of \(\pi_{j}-t_{j}\) need to coincide for valuations where we have separation of types. Therefore, \(v_{1}(0)=v_{2}(\Delta)\). Since \(v_{1}(0)v_{2}(\Delta)=\frac{\Delta}{g-\Delta}\) it follows that \(v_{1}(0)=v_{2}(\Delta)=\sqrt{\frac{\Delta}{g-\Delta}}\). This is the threshold where bidders start bidding. It follows that for \(\Delta\neq 0\)
\[v_{1}(\pi)=\sqrt{\frac{\Delta}{g-\Delta}}\exp\left(\int_{0}^{\pi}\frac{g}{(g- \pi-\Delta)^{2}}\left(\frac{\pi}{g-\pi}+\frac{\pi+\Delta}{g-\pi-\Delta}\right) ^{-1}d\pi\right)\]
and
\[v_{2}(\pi)=\sqrt{\frac{\Delta}{g-\Delta}}\exp\left(\int_{0}^{\pi-\Delta}\frac{ g}{(g-\pi)^{2}}\left(\frac{\pi}{g-\pi}+\frac{\pi+\Delta}{g-\pi-\Delta}\right) ^{-1}d\pi\right).\]
To compare the equilibrium signals \(\pi_{1}(v)-t_{1}\) and \(\pi_{2}(v)-t_{2}\) for \(v>\sqrt{\frac{\Delta}{g-\Delta}}\), we need to compare \(\pi_{1}(v)+\Delta\) to \(\pi_{2}(v)\).
From the expressions for the valuations as functions of the bid, and noting that \(\frac{g}{(g-\pi-\Delta)^{2}}\geq\frac{g}{(g-\pi)^{2}}\), we can observe that
\[v_{1}(\pi)>v_{2}(\pi+\Delta),\]
for \(\pi>0\). It follows that
\[\pi_{1}(v)\leq\pi_{2}(v)-\Delta,\]
for \(v\geq\sqrt{\frac{\Delta}{g-\Delta}}\).
### Ex-Post Latency with Bidding
We now analyze the ex-post model with bidding; here both the latency investment and the bid can be made after the valuation is observed. First, we start with only the latency investment decision. The expected utility of player \(i\) is equal to:
\[Pr[t(v_{i})<t(v_{j})]v_{i}-c(t(v_{i})),\]
where \(j\in\{1,2\}\setminus i\).
We can look at this from a dual perspective: let \(v(t)\) denote the inverse of \(t(v)\). By the so-called _Revelation Principle_, instead of optimizing over some function of the type, a player can be viewed as reporting a type directly. Then, the optimization problem becomes:
\[\max_{v}Pr[v\geq v_{2}]v_{1}-c(t(v)). \tag{10}\]
By replacing the probability with \(F(v)\), we get that it is equivalent to
\[\max_{v}F(v)v_{1}-c(t(v)).\]
By the first order condition, we get:
\[v_{1}f(v)-c^{\prime}(t(v))t^{\prime}(v)|_{v=v_{1}}=0,\]
where \(f\) is the density function of the valuation distribution \(F\). Plugging in \(v=v_{1}\) gives the condition:
\[v_{1}f(v_{1})-c^{\prime}(t(v_{1}))t^{\prime}(v_{1})=0. \tag{11}\]
For the uniform distribution and cost function \(c=\frac{1}{t}\), the first order condition gives the following differential equation:
\[v_{1}+\frac{t^{\prime}(v_{1})}{t^{2}(v_{1})}=0. \tag{12}\]
Solving this equation gives \(t(v)=\frac{2}{c_{1}+v^{2}}.\) By the boundary condition that \(0\) valuation type should wait infinitely (or equivalently pay \(0\) in the latency), we obtain the value of the constant in the solution: \(c_{1}=0\). Therefore, cost incurred is equal to \(\frac{1}{t}=\frac{v^{2}}{2}\). On average each player pays:
\[\int_{0}^{1}\frac{v^{2}}{2}f(v)dv=\frac{1}{6},\]
for better latency, in expectation over their types. The cost of producing score \(s=\frac{gm}{m+1}-t\), where \(m\) denotes the bid, is:
\[c(s):=m+\frac{1}{t}. \tag{13}\]
We decompose total expenditure into \(2\) parts, for bidding and for time, by representing \(m\) and \(c(t(v))\) as functions of \(v\) and taking integrals:
\[b(g):=\int_{0}^{1}m(v)f(v)dv\text{ and }\int_{0}^{1}\frac{1}{t(v)}f(v)dv.\]
**Proposition 11**.: _The limit of \(b(g)\) when \(g\) tends to infinity is equal to \(\frac{1}{6}\). Moreover, \(b(g)\) is an increasing function of \(g\), for \(g\) large enough._
Proof.: Given in Section 4.2.1
The proposition implies that by taking large enough \(g\), the system extracts almost all value invested in the latency through bidding. Starting from some threshold value on \(g\), extraction increases with increasing \(g\).
We can verify whether the constructed equilibrium is unique by checking the conditions given in [12].
We can calculate a few values of \(b(g)\). In particular, \(b(1000)\approx 0.1294\), meaning a player pays approximately \(77\%\) of the total expenditure in bids, and \(b(10000)\approx 0.1537\), meaning a player pays approximately \(92\%\) of the total expenditure in bids.
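The decomposition can also be explored numerically. The sketch below (our own rough check, using the closed forms \(s(v)\), \(m=(s+\sqrt{g})/(g-s)\) and \(t=\frac{gm}{m+1}-s\) derived in Section 4.2.1, with uniform valuations) verifies that each type's total spend equals \(v^{2}/2\) and that the bidding share \(b(g)\) approaches \(1/6\) as \(g\) grows.

```python
import math

def equilibrium_split(v, g):
    """Split a type-v player's equilibrium spend into (bid, latency cost)."""
    sg = math.sqrt(g)
    u = math.sqrt(2.0 / sg)                     # marginal type that starts bidding
    if v < u:                                   # latency-only types
        return 0.0, v * v / 2.0
    s = (g * v * v - 4.0 * sg - 2.0) / (v * v + 2.0)  # equilibrium score s(v)
    m = (s + sg) / (g - s)                      # optimal bid producing that score
    t = g * m / (m + 1.0) - s                   # implied latency
    return m, 1.0 / t

g = 10_000.0
for v in (0.2, 0.5, 0.9):                       # total spend equals v^2 / 2
    m, lat = equilibrium_split(v, g)
    print(v, round(m + lat, 6), round(v * v / 2, 6))
for g in (1e6, 1e10):                           # bidding share approaches 1/6
    b = sum(equilibrium_split(0.0005 + 0.001 * i, g)[0] for i in range(1000)) / 1000
    print(g, round(b, 4))
```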
Note that in the proof of Proposition 11, the total investment in both latency and bidding, \(c(v)\), is the same value \(\frac{v^{2}}{2}\) as in the case of investing only in latency. We show that this is not a coincidence. In general, assume that there is an arbitrary signaling technology described by an increasing, differentiable cost function \(C(s)\). The following result shows the revenue equivalence of ex-post bidding:
**Proposition 13**.: _Both players spend the same amount on average for any cost function \(C\)._
Proof.: Given in Section 4.2.1
The amount spent depends only on the value belief distribution function.
#### Proofs
Proof of Proposition 11.: The optimization problem of the player in the equilibrium is to minimize cost, subject to the score equation constraint. By plugging in \(t=\frac{gm}{m+1}-s\), we obtain the minimization problem:
\[\min_{m}\left(m+\frac{m+1}{gm-s(m+1)}=:x(m)\right).\]
The first order condition on \(x(m)\) gives:
\[\frac{dx(m)}{dm}=1+\frac{gm-s(m+1)-(m+1)(g-s)}{(gm-s(m+1))^{2}}=1-\frac{g}{(gm -s(m+1))^{2}}=0, \tag{14}\]
which determines the value of \(m\) that minimizes the cost function. The solutions of the last equation are \(gm-sm-s=\sqrt{g}\), equivalent to \(m=\frac{s+\sqrt{g}}{g-s}\), and \(gm-sm-s=-\sqrt{g}\), equivalent to \(m=\frac{s-\sqrt{g}}{g-s}\), or the boundary condition \(m=0\). For \(m=0\), the value is \(x(0)=-\frac{1}{s}\), while for \(m=\frac{s+\sqrt{g}}{g-s}\), the value is
\[x\left(\frac{s+\sqrt{g}}{g-s}\right)=\frac{s+\sqrt{g}}{g-s}+\frac{\frac{s+ \sqrt{g}}{g-s}+1}{g\frac{s+\sqrt{g}}{g-s}-s(\frac{s+\sqrt{g}}{g-s}+1)}=\frac{ 1+2\sqrt{g}+s}{g-s}.\]
Accordingly, the marginal cost of producing signal \(s\) is:
\[c^{\prime}(s)=\begin{cases}\frac{(1+\sqrt{g})^{2}}{(g-s)^{2}},&\text{ if }s>- \sqrt{g},\\ \frac{1}{s^{2}},&\text{ if }s\leq-\sqrt{g}.\end{cases}\]
We solve a similar differential equation as (11), just with different marginal cost function \(c^{\prime}\), and instead of time function \(t\), we have a score function \(s\) of valuation \(v\). The differential equation becomes:
\[vf(v)-c^{\prime}(s)s^{\prime}(v)=0. \tag{15}\]
We need to solve for the \(s(v)\) function. For types \(v\) with \(\frac{2}{v^{2}}\geq\sqrt{g}\) who only use latency we have the same solution as before
\[s(v)=-\frac{2}{v^{2}}.\]
The marginal type who is indifferent between using only latency and using a combination of the two technologies is given by
\[u=\sqrt{\frac{2}{\sqrt{g}}}.\]
which gives the boundary condition \(s(u)=-\sqrt{g}\) for the differential equation describing the behavior of types who choose a signal \(s\geq-\sqrt{g}\):
\[v=\frac{(1+\sqrt{g})^{2}}{(g-s)^{2}}s^{\prime}(v).\]
We obtain the solution
\[s(v)=(4c_{1}g^{3/2}+2c_{1}g^{2}+2c_{1}g+g(v^{2}-2)-4\sqrt{g}-2)/(2c_{1}g+4c_{1} \sqrt{g}+2c_{1}+v^{2}). \tag{16}\]
The value of the constant \(c\) is obtained from the boundary condition that a zero-value player does not invest and it is equal to
\[c_{1}=\frac{1}{(1+\sqrt{g})^{2}}.\]
Therefore, plugging in the constant value in the solution (16) and simplifying it gives:
\[s(v)=\frac{gv^{2}-4\sqrt{g}-2}{v^{2}+2}.\]
Plugging this into the formula of \(c(s)\), gives the cost value as a function of valuation \(v\):
\[c(v)=\frac{1+2\sqrt{g}+\frac{gv^{2}-4\sqrt{g}-2}{v^{2}+2}}{g-\frac{gv^{2}-4 \sqrt{g}-2}{v^{2}+2}}=\frac{v^{2}}{2}.\]
Separate expenditure in the bidding is calculated by the following formula:
\[b(g)= \int_{u}^{1}m(v)f(v)dv=\int_{\sqrt{\frac{2}{\sqrt{g}}}}^{1}\frac{\frac{gv^{2}-4\sqrt{g}-2}{v^{2}+2}+\sqrt{g}}{g-\frac{gv^{2}-4\sqrt{g}-2}{v^{2}+2}}dv=\] \[\int_{\sqrt{\frac{2}{\sqrt{g}}}}^{1}\frac{v^{2}(g+\sqrt{g})-2\sqrt{g}-2}{2g+4\sqrt{g}+2}dv=\] \[\frac{1}{2g+4\sqrt{g}+2}\left(\frac{g+\sqrt{g}}{3}\left(1-\frac{2}{\sqrt{g}}\sqrt{\frac{2}{\sqrt{g}}}\right)-(2\sqrt{g}+2)\left(1-\sqrt{\frac{2}{\sqrt{g}}}\right)\right).\]
For large \(g\), the bracket in the last expression grows like \(g/3\) while the prefactor behaves like \(1/(2g)\). Therefore, \(\lim_{g\to\infty}b(g)=\frac{1}{6}\).
Proof of Proposition 13.: We are interested in the equilibrium signaling strategy \(s(v)\). Suppose that this strategy is increasing (so no pooling of types) and differentiable. Then, we can define a differentiable function
\[\tilde{C}(v):=C(s(v)).\]
To figure out what \(\tilde{C}(v)\) is, we have to consider the optimization problem of the first player:
\[\max_{v}Pr[v\geq v_{2}]v_{1}-C(s(v))=Pr[v\geq v_{2}]v_{1}-\tilde{C}(v).\]
Taking first order conditions with respect to \(v\) gives:
\[v_{1}f(v)-\tilde{C}^{\prime}(v)|_{v=v_{1}}=0,\]
that is, \[v_{1}f(v_{1})=\tilde{C}^{\prime}(v_{1}).\] For the uniform distribution: \[v_{1}=\tilde{C}^{\prime}(v_{1}).\] Using the boundary condition \(\tilde{C}(0)=0\) and integrating we get \[\tilde{C}(v_{1})=v_{1}^{2}/2.\] More generally: \[\tilde{C}(v_{1})=\int_{-\infty}^{v_{1}}vf(v)dv.\]
## 5 Analysis of TimeBoost with \(n\) players
In this section, we consider \(n\) players with the same valuation distribution as in the 2 players case. The optimization problem is now the following:
\[\max_{v}Pr[v\geq\max\{v_{2},\cdots,v_{n}\}]v_{1}-c(t(v)),\]
similarly to (10). By replacing the probability with cumulative distribution, this is equivalent to:
\[\max_{v}F_{n-1}(v)v_{1}-c(t(v)),\]
where \(F_{n-1}(x)\) is a cumulative distribution function of the random variable \(X:=\max\{X_{1},\cdots,X_{n-1}\}\). By independence we have
\[F_{n-1}(x)=F(x)^{n-1}.\]
The first-order condition and plugging in \(v=v_{1}\) gives the following differential equation, similar to (11):
\[f_{n-1}(v_{1})v_{1}-c^{\prime}(t(v_{1}))t^{\prime}(v_{1})=0,\]
where \(f_{n-1}(v_{1})=(n-1)v_{1}^{n-2}\) is a density function of maximum among \(n-1\) uniformly distributed random variables. The differential equation w.r.t. \(t(v)\) becomes:
\[(n-1)v_{1}^{n-1}+\frac{t^{\prime}(v_{1})}{t^{2}(v_{1})}=0.\]
Solving the equation gives \(t(v)=\frac{n}{c+(n-1)v^{n}}\). The same boundary condition ensures that \(c=0\), that is, \(t(v)=\frac{n}{(n-1)v^{n}}\). Each player pays:
\[\frac{n-1}{n}\int_{0}^{1}v^{n}dv=\frac{n-1}{n}\frac{v^{n+1}}{n+1}|_{0}^{1}= \frac{n-1}{n(n+1)}.\]
Together, the players pay \(\frac{n-1}{n+1}\), which converges to \(1\) as \(n\) tends to infinity. Note that the first place in the transaction order is given to the maximum-value player. By order statistics, the average valuation of the maximum-value player is \(\frac{n}{n+1}\); this value also converges to \(1\) as \(n\) tends to infinity.
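A short Monte Carlo sketch (illustrative; uniform valuations on \([0,1]\) are assumed) confirms both quantities: the total equilibrium payment \(\frac{n-1}{n+1}\) and the expected valuation \(\frac{n}{n+1}\) of the winning player.

```python
import random

def latency_only_game(n, trials=100_000):
    """Average total payment and average winning valuation when each type-v
    player pays its equilibrium latency cost (n - 1) * v**n / n."""
    total_pay = top_value = 0.0
    for _ in range(trials):
        vals = [random.random() for _ in range(n)]
        total_pay += sum((n - 1) * v**n / n for v in vals)
        top_value += max(vals)
    return total_pay / trials, top_value / trials

random.seed(3)
for n in (2, 5, 20):
    pay, top = latency_only_game(n)
    print(n, round(pay, 3), round((n - 1) / (n + 1), 3),
          round(top, 3), round(n / (n + 1), 3))
```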
The analysis is the same as in the case of 2 players, until the differential equation that solves score function \(s\). Instead of (15), for \(n\) players we solve:
\[(n-1)v^{n-1}-c^{\prime}(s)s^{\prime}(v)=0. \tag{17}\]
For types \(v\) with \(\frac{n}{(n-1)v^{n}}\geq\sqrt{g}\), who only use latency, we have the same solution as before
\[s(v)=-\frac{n}{(n-1)v^{n}}.\]
The marginal type investing in bidding is:
\[u=\left(\frac{n}{(n-1)\sqrt{g}}\right)^{1/n}.\]
Plugging the functional forms of \(c\) and \(s\) into (17) gives the same limit results as in Proposition 11. Next, we show a revenue equivalence result for \(n\) players. The argument is similar to the 2-player case. Assume that there is an arbitrary signaling technology described by an increasing, differentiable cost function \(C(s)\).
**Proposition 14**.: _All \(n\) players spend the same amount on average for any cost function \(C\)._
Proof.: We are interested in the equilibrium signaling strategy \(s(v)\). Suppose that this strategy is increasing (so no pooling of types) and differentiable. Then, we can define a differentiable function
\[\tilde{C}(v):=C(s(v)).\]
To figure out what \(\tilde{C}(v)\) is, we have to consider an optimization problem of the first player:
\[\max_{v}Pr[v\geq\max\{v_{2},\cdots,v_{n}\}]v_{1}-\tilde{C}(v)=F(v)^{n-1}v_{1} -\tilde{C}(v).\]
Taking first order conditions with respect to \(v\):
\[\left[(n-1)v_{1}f(v)F(v)^{n-2}-\tilde{C}^{\prime}(v)\right]\big{|}_{v=v_{1}}=0,\]
For the uniform distribution, we get:
\[(n-1)v_{1}^{n-1}=\tilde{C}^{\prime}(v_{1}).\]
Using the boundary condition \(\tilde{C}(0)=0\) and integrating we get
\[\tilde{C}(v_{1})=\frac{(n-1)v_{1}^{n}}{n}.\]
More generally:
\[\tilde{C}(v_{1})=\int_{-\infty}^{v_{1}}(n-1)vf(v)F(v)^{n-2}dv.\]
## 6 Comparison of TimeBoost with a Pure Bidding Policy
We now compare TimeBoost to a pure bidding policy. Recall that for the bidding policy, all transactions sent in fixed time intervals of length \(g\) are collected and sorted in decreasing order of their bids. This effectively simulates a first-price all-pay auction for each interval. We note this can be thought of as a quantized version of TimeBoost, because it produces the same sequence that would be produced by first rounding each transaction's arrival timestamp down to the nearest multiple of \(g\) and then applying TimeBoost.
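The quantization claim is easy to check empirically. The sketch below (illustrative; helper names and the random workload are ours, and \(c=1\) is assumed in \(\pi\)) verifies on random transactions that the pure bidding order coincides with TimeBoost applied to timestamps rounded down to the start of their \(g\)-interval.

```python
import random

def pi(b, g, c=1.0):
    return g * b / (b + c)

def timeboost(txs, g):
    # txs are (arrival_time, bid, tx_id); sort by descending TimeBoost score
    return [i for _, _, i in sorted(txs, key=lambda tx: -(pi(tx[1], g) - tx[0]))]

def pure_bidding(txs, g):
    # collect each g-interval and sort it by descending bid
    out = []
    for k in sorted({int(t // g) for t, _, _ in txs}):
        batch = [tx for tx in txs if int(tx[0] // g) == k]
        out += [i for _, _, i in sorted(batch, key=lambda tx: -tx[1])]
    return out

random.seed(4)
g = 0.5
txs = [(random.uniform(0, 10), random.uniform(0, 5), i) for i in range(300)]
rounded = [(g * int(t // g), b, i) for t, b, i in txs]
assert timeboost(rounded, g) == pure_bidding(txs, g)
print("orders match")
```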
Generally speaking, a first-price auction in which only the winning bidder pays and a first-price all-pay auction are _payoff equivalent_ for Bayesian-Nash incentive compatible mechanisms (see e.g., [14]). In our setting, the following result holds for each individual arbitrage opportunity.
**Proposition 15** (see [14]).: _The expected payoff of the bidding game where only the highest bidder pays their bid is equal to the expected payoff in the bidding game where the highest bidder wins but all players pay their bids, independently of valuation distributions._
For simplicity, to compare TimeBoost with a pure bidding policy, we consider two players. It is straightforward to generalize to more parties. For a given arbitrage opportunity, two cases arise as described below depending on whether transactions can be submitted within the same \(g\)-time interval as the arbitrage opportunity or not:
1. Both players can submit their transactions within the same \(g\) interval. For the pure bidding policy, if both players can get their transaction submitted inside the same \(g\)-time interval as the arbitrage opportunity, then they will both compete for it. It is easy to see that when the valuations of the two parties are the same, the bidding strategy for the pure-bidding policy vs the ex-ante latency with bidding policy will be the same. In other words, in this scenario, TimeBoost maintains the economic properties of the first-price auction pure-bidding policy.
2. Only one player can get its transaction within the same \(g\) interval. If only one player can get its transaction inside the same \(g\)-time interval as the arbitrage opportunity, then in the pure-bidding policy, that player can pay a \(0\) bid and still take advantage of it. In contrast, since TimeBoost does not require discrete boundaries, both players will always have \(g\) time to submit their transactions (recall that bidding can be used to get priority over any transaction received up to \(g\) time earlier). This means that even for a reasonably small \(g\) (say \(0.5\) sec), both parties will always be able to compete for the opportunity. In equilibrium, this results in bids equal to the value of the arbitrage.
**Analysis for the second case.** Suppose the first party (denoted by A) can reach the sequencer in \(s_{1}\) time, and the second party (denoted by B) can reach it in \(s_{2}\) time, with \(s_{1}<s_{2}\). Then, with the pure-bidding policy, A can wait until \(g-s_{1}\) seconds pass since the beginning of a new block creation and send its transaction to the sequencer at exactly \(g-s_{1}\), while B has to send its transaction by time \(g-s_{2}\) in order to be included in the same block.
Assuming that arbitrage opportunities are uniformly distributed over the \(g\)-interval, this means that, with probability \(\frac{g-s_{1}-(g-s_{2})}{g}=\frac{s_{2}-s_{1}}{g}\), B has no chance to win the race against A, even if it values the arbitrage opportunity much more than A. When \(g\) is large (e.g., on Ethereum with \(12\) sec block times), this latency advantage is not a big issue, as A would only have an advantage with probability \((s_{2}-s_{1})/12\). In contrast, for faster blockchains, or layer-2 rollups which have shorter block-times to achieve scalability, this latency advantage can be significantly more important in the pure-bidding policy vs in TimeBoost. For instance, when \(g=0.5\)sec, A's latency advantage is \(24\) times greater than what it was in Ethereum. This means that compared to TimeBoost, a pure-bidding strategy will either result in substantial latency competition (when \(g\) is small) or will not be able to provide low transaction finalization time (since \(g\) will be large).
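For concreteness, the following short computation (with hypothetical latencies \(s_{1}=10\) ms and \(s_{2}=50\) ms) shows how the probability that B is shut out of the race under the pure bidding policy scales with the interval length \(g\).

```python
s1, s2 = 0.01, 0.05                 # assumed one-way latencies of A and B (seconds)
for g in (12.0, 2.0, 0.5):          # block time / granularity in seconds
    print(g, round((s2 - s1) / g, 4))   # probability that B cannot compete at all
```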
## 7 Discussion on Sequencer Decentralization
We now briefly discuss how TimeBoost can be supported by a decentralized sequencer--i.e., a committee of \(\ell\) sequencers (of which at most some \(f\) can be dishonest). We only provide possible implementations here; a formal rigorous analysis is outside the scope of this paper.
In the decentralized setting, transactions to be sequenced are now submitted by users to all sequencers instead of just one. Note that as before, threshold decryption techniques can be used for transaction privacy before ordering.
The most natural way to support TimeBoost in a decentralized setting is to have a protocol for sequencers to agree on both the timestamp and the bid of transactions. After this is done, the TimeBoost policy can simply be applied on the consensus output of the decentralized committee to obtain the final ordering. Agreeing on the bid is easy since we can have the same bid be submitted to all sequencers for a given transaction. Agreeing on the timestamp is a more challenging problem since the same transaction can arrive at different nodes at different times. While it adds significant complexity, one potential technique here is to employ a fair-ordering protocol (this can be as simple as, e.g., computing the median timestamp [11, 20], or can support more complicated techniques as in [3, 8, 9]). We leave the formal analysis of such a decentralized TimeBoost implementation to future work.
## 8 Conclusion
We designed TimeBoost: a policy for transaction ordering that takes into account both transaction arrival times and bids. We showed that any ordering scheme that guarantees the independence of different latency races is a generalized scoring rule. By choosing a suitably designed mixture of timestamps and bids, we showed the economic efficiency of the system: transaction senders spend most of their resources on bidding instead of latency improvement, which can later be used by the protocol for improvement and development.
Acknowledgments.We are grateful to Lee Bousfield, Chris Buckland, Potuz Heluani, Raul Jordan, Mallesh Pai, Ron Siegel, Terence Tsao as well as participants at the Swiss National Bank Technology and Finance Seminar for interesting discussions and valuable feedback.
|
2310.16832 | LightSpeed: Light and Fast Neural Light Fields on Mobile Devices | Real-time novel-view image synthesis on mobile devices is prohibitive due to
the limited computational power and storage. Using volumetric rendering
methods, such as NeRF and its derivatives, on mobile devices is not suitable
due to the high computational cost of volumetric rendering. On the other hand,
recent advances in neural light field representations have shown promising
real-time view synthesis results on mobile devices. Neural light field methods
learn a direct mapping from a ray representation to the pixel color. The
current choice of ray representation is either stratified ray sampling or
Plucker coordinates, overlooking the classic light slab (two-plane)
representation, the preferred representation to interpolate between light field
views. In this work, we find that using the light slab representation is an
efficient representation for learning a neural light field. More importantly,
it is a lower-dimensional ray representation enabling us to learn the 4D ray
space using feature grids which are significantly faster to train and render.
Although mostly designed for frontal views, we show that the light-slab
representation can be further extended to non-frontal scenes using a
divide-and-conquer strategy. Our method offers superior rendering quality
compared to previous light field methods and achieves a significantly improved
trade-off between rendering quality and speed. | Aarush Gupta, Junli Cao, Chaoyang Wang, Ju Hu, Sergey Tulyakov, Jian Ren, László A Jeni | 2023-10-25T17:59:05Z | http://arxiv.org/abs/2310.16832v2 | # LightSpeed: Light and Fast
###### Abstract
Real-time novel-view image synthesis on mobile devices is prohibitive due to the limited computational power and storage. Using volumetric rendering methods, such as NeRF and its derivatives, on mobile devices is not suitable due to the high computational cost of volumetric rendering. On the other hand, recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices. Neural light field methods learn a direct mapping from a ray representation to the pixel color. The current choice of ray representation is either stratified ray sampling or Plucker coordinates, overlooking the classic light slab (two-plane) representation, the preferred representation to interpolate between light field views. In this work, we find that using the light slab representation is an efficient representation for learning a neural light field. More importantly, it is a lower-dimensional ray representation enabling us to learn the 4D ray space using feature grids which are significantly faster to train and render. Although mostly designed for frontal views, we show that the light-slab representation can be further extended to non-frontal scenes using a divide-and-conquer strategy. Our method offers superior rendering quality compared to previous light field methods and achieves a significantly improved trade-off between rendering quality and speed.
## 1 Introduction
Real-time rendering of photo-realistic 3D content on mobile devices such as phones is crucial for mixed-reality applications. However, this presents a challenge due to the limited computational power and memory of mobile devices. The current graphics pipeline requires storing tens of thousands of meshes for complex scenes and performing ray tracing for realistic lighting effects, which demands powerful graphics processing power that is not feasible on current mobile devices. Recently, neural radiance field (NeRF [23]) has been the next popular choice for photo-realistic view synthesis, which offers a simplified rendering pipeline. However, the computational cost of integrating the radiance field remains a bottleneck for real-time implementation on mobile devices. There have been several attempts to reduce the computational cost of this integration step, such as using more efficient radiance representations [13; 40; 28; 17; 5; 10] or distilling meshes from radiance field [34; 6; 39; 35; 27; 29]. Among these approaches, only a handful of mesh-based methods [6; 29] have demonstrated real-time rendering capabilities on mobile phones, but with a significant sacrifice in rendering fidelity. Moreover, all aforementioned methods require significant storage space (over \(200\)MB), which is undesirable for mobile devices with limited onboard storage.
Alternatively, researchers have used 4D light field1 (or lumigraph) to represent radiance along rays in empty space [11; 24; 12; 19], rather than attempting to model the 5D plenoptic function as in NeRF-based approaches. Essentially, the light field provides a direct mapping from rays to pixel values since the radiance is constant along rays in empty space. This makes the light field suitable for view synthesis, as long as the cameras are placed outside the convex hull of the object of interest. Compared to integrating radiance fields, rendering with light fields is more computationally efficient. However, designing a representation of light field that compresses its storage while maintaining high view-interpolation fidelity remains challenging. Previous methods, such as image quilts [38] or multiplane images (MPI) [41; 16; 32; 9], suffer from poor trade-offs between fidelity and storage due to the high number of views or image planes required for reconstructing the complex light field signal. Recent works [36; 4; 2; 31] have proposed training neural networks to represent light fields, achieving realistic rendering with a relatively small memory footprint. Among those, MobileR2L [4] uses less than 10MB of storage per scene, and it is currently the only method that demonstrates real-time performance on mobile phones.
Footnote 1: For the rest of the paper, we will use the term ‘light field’ to refer to the 4D light field, without explicitly stating the dimensionality.
However, prior neural light field (NeLF) representations, including MobileR2L, suffer from inefficiencies in learning due to the high number of layers (over \(60\) layers), and consequently, a long training time is required to capture fine scene details. One promising strategy to address this issue is utilizing grid-based representations, which have proven to be effective in the context of training NeRFs [30; 25; 17; 10]. Nonetheless, incorporating such grid-based representation directly to prior NeLFs is problematic due to the chosen ray parameterization. R2L [36] and MobileR2L [4] parameterize light rays using a large number of stratified 3D points along the rays, which were initially motivated by the discrete formulation of integrating radiance. However, this motivation is unnecessary and undermines the simplicity of 4D light fields because stratified sampling is redundant for rays with constant radiance. This becomes problematic when attempting to incorporate grid-based representations for more efficient learning, as the high-dimensional stratified-point representation is not feasible for grid-based discretization. Similarly, the \(6\)-dimensional Plucker coordinate used by Sitzmann _et al_. [31] also presents issues for discretization due to the fact that Plucker coordinates exist in a projective \(5\)-space, rather than Euclidean space.
In this paper, we present _LightSpeed_, the first NeLF method designed for mobile devices that uses a grid-based representation. As shown in Fig. 1, our method achieves a significantly better trade-off between rendering quality and speed compared to prior NeLF methods, while also being faster to train. These advantages make it well-suited for real-time applications on mobile devices. To achieve these results, we propose the following design choices:
**First**, we revisit the classic 4D light-slab (or two-plane) representation [12; 19] that has been largely overlooked by previous NeLF methods. This lower-dimensional parameterization allows us to compactly represent the rays and efficiently represent the light field using grids. To our knowledge,
Figure 1: Our LightSpeed approach demonstrates a superior trade-off between on-device rendering quality and latency while maintaining a significantly reduced training time and boosted rendering quality. **(a)** rendering quality and latency on the \(400\times 400\) Lego scene [23] running on an iPhone 13. **(b)** training curves for the \(756\times 1008\) Fern scene [22].
Attal _et al_. [2] is the only other NeLF method that has experimented with the light-slab representation. However, they did not take advantage of the grid-based representation, and their method is not designed for real-time rendering. **Second**, to address the heavy storage consumption of 4D light field grids, we take inspiration from k-planes [10] and propose decomposing the 4D grids into six 2D feature grids. This ensures that our method remains competitive for storage consumption compared to prior NeLF methods. **Third**, we apply the super-resolution network proposed by MobileR2L [4], which significantly reduces the computational cost when rendering high-resolution images. **Finally**, the light-slab representation was originally designed for frontal-view scenes, but we demonstrate that it can be extended to represent non-frontal scenes using a divide-and-conquer strategy.
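For intuition, the sketch below shows one common way to compute light-slab coordinates: intersect a ray with two parallel planes and record the two intersection points' in-plane coordinates. This is only an illustration of the general two-plane idea; the plane placement (\(z=0\) and \(z=1\)), the absence of normalization, and the function name are our assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def lightslab_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Classic two-plane (u, v, s, t) parameterization of a ray.
    Assumes the ray is not parallel to the planes (direction[2] != 0)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    u, v = (o + ((z_uv - o[2]) / d[2]) * d)[:2]   # hit point on the first plane
    s, t = (o + ((z_st - o[2]) / d[2]) * d)[:2]   # hit point on the second plane
    return np.array([u, v, s, t])

# a ray starting behind the first plane and heading towards +z
print(lightslab_coords(origin=[0.2, -0.1, -1.0], direction=[0.0, 0.1, 1.0]))
# -> [0.2 0.  0.2 0.1]
```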
Our contributions pave the way for efficient and scalable light field representation and synthesis, making it feasible to generate high-quality images of real-world objects and scenes. Our method achieves the highest PSNR and among the highest frame rates (\(55\) FPS on iPhone 14) on LLFF (frontal-view), Blender (\(360^{\circ}\)), and unbounded \(360^{\circ}\) scenes, proving the effectiveness of our approach.
## 2 Related work
**Light Field.** Light field representations have been studied extensively in the computer graphics and computer vision communities [38]. Traditionally, light fields have been represented using the 4D light slab representation, which parameterizes the light field by two planes in 4D space [12; 19]. More recently, neural-based approaches have been developed to synthesize novel views from the light field, leading to new light field representations being proposed.
One popular representation is the multi-plane image (MPI) representation, which discretizes the light field into a set of 2D planes. The MPI representation has been used in several recent works, including [41; 16; 32; 9; 7]. However, the MPI representation can require a large amount of memory, especially for high-resolution light fields. Another recent approach that has gained substantial attention is NeRF [23] (Neural Radiance Fields), which can synthesize novel views with high accuracy, but is computationally expensive to render and train due to the need to integrate radiance along viewing rays. There has been a substantial amount of works [37; 26; 28; 21; 13; 40; 28; 17; 5; 10; 34; 6; 39; 35; 27; 29; 36; 4; 2; 31] studying how to accelerate training and rendering of NeRF, but in the following, we focus on recent methods that achieve real-time rendering with or without mobile devices.
**Grid Representation of Radiance Field.** The first group of methods trades speed for space by precomputing and caching radiance values using grid or voxel-like data structures such as sparse voxels [30; 13], octrees [40], and hash tables [25]. Despite the efficient data structures, the memory consumption of these methods is still high, and several approaches have been proposed to address this issue. Chen _et al_. [5] and Fridovich-Keil _et al_. [10] decompose voxels into matrices that are cheaper to store. Takikawa _et al_. [33] perform quantization to compress feature grids. These approaches have enabled real-time applications on desktop or server-class GPUs, but they still require significant computational resources and are not suitable for resource-constrained devices such as mobile or edge devices.
**Baking High Resolution Mesh.** Another group of methods adopts the approach of extracting high-resolution meshes from the learned radiance field [6; 29; 35]. The texture of the mesh stores the plenoptic function to account for view-dependent rendering. While these approaches have been demonstrated to run in real-time on mobile devices, they sacrifice rendering quality, especially for semi-transparent objects, due to the mesh-based representation. Additionally, storing high-resolution meshes with features is memory-intensive, which limits the resolution and complexity of the mesh that can be used for rendering.
**Neural Light Fields.** Recent works such as R2L [36], LFNS [31] and NeuLF [20] have framed the view-synthesis problem as directly predicting pixel colors from camera rays, making these approaches fast at inference time without the need for multiple network passes to generate a pixel color. However, due to the complexity of the 4D light field signal, the light field network requires sufficient expressibility to be able to memorize the signal. As a result, Wang _et al_. [36] end up using as many as 88 network layers, which takes three seconds to render one \(200\times 200\) image on an iPhone 13. In this regard, Cao _et al_. [4] introduce a novel network architecture that dramatically reduces R2L's computation through super-resolution. The deep networks are only evaluated on a low-resolution ray bundle and then upsampled to the full image resolution. This approach, termed MobileR2L, achieves real-time rendering on mobile phones. NeuLF [20] also proposes to directly regress pixel colors
using a light slab ray representation, but is unable to capture fine-level details due to the lack of any high-dimensional input encoding, and is limited to frontal scenes. Another notable work, SIGNET [8], utilizes neural methods to compress a light field by using an ultraspherical input encoding with the light slab representation. However, SIGNET does not guarantee photorealistic reconstruction and hence deviates from the task at hand. Throughout the paper, we will mainly compare our method to MobileR2L [4], which is currently the state-of-the-art method for real-time rendering on mobile devices and achieves the highest PSNR among existing methods.
It is important to note that training NeLFs requires densely sampled camera poses in the training images, and NeLFs may not generalize well if the training images are sparse, as they do not explicitly model geometry. While there have been works, such as that by Attal _et al_. [2], that propose a mixture of NeRF and local NeLFs, allowing learning from sparse inputs, we do not consider this to be a drawback since NeLFs focus on photo-realistic rendering rather than reconstructing the light field from sparse inputs, and they can leverage state-of-the-art reconstruction methods like NeRF to create dense training images. However, it is a drawback of prior NeLFs [36; 4] that they train extremely slowly, often taking more than two days to converge for a single scene. This is where our new method comes into play, as it offers improvements in terms of training efficiency and convergence speed.
## 3 Methodology
### Prerequisites
**4D Light Fields** or Lumigraphs are a representation of light fields that capture the radiance information along rays in empty space. They can be seen as a reduction of the higher-dimensional plenoptic functions. While plenoptic functions describe the amount of light (radiance) flowing in every direction through every point in space, which typically requires five degrees of freedom, 4D light fields assume that the radiance is constant along the rays. Therefore, a 4D light field is a vector function that takes a ray as input (with four degrees of freedom) and outputs the corresponding radiance value. Specifically, assuming that the radiance \(\mathbf{c}\) is represented in the RGB space, a 4D light field is mathematically defined as a function, _i.e_.:
\[\mathcal{F}:\mathbf{r}\in\mathbb{R}^{M}\mapsto\mathbf{c}\in\mathbb{R}^{3}, \tag{1}\]
where \(\mathbf{r}\) denotes the \(M\)-dimensional coordinates of the ray, depending on how it is parameterized.
Generating images from the 4D light field is a straightforward process. For each pixel on the image plane, we calculate the corresponding viewing ray \(\mathbf{r}\) that passes through the pixel, and the pixel value is obtained by evaluating the light field function \(\mathcal{F}(\mathbf{r})\). In this paper, our goal is to identify a suitable representation for \(\mathcal{F}(\mathbf{r})\) that minimizes the number of parameters required for learning and facilitates faster evaluation and training.
**MobileR2L.** We adopt the problem setup introduced by MobileR2L [4] and its predecessor R2L [36], where the light field \(\mathcal{F}(\mathbf{r})\) is modeled using neural networks. The training of the light field network is framed as distillation, leveraging a large dataset that includes both real images and images generated by a pre-trained NeRF. Both R2L and MobileR2L represent \(\mathbf{r}\) using stratified points, which involves concatenating the 3D positions of points along the ray through stratified sampling. In addition, the 3D positions are encoded using sinusoidal positional encoding [23]. Due to the complexity of the light field, the network requires a high level of expressiveness to capture fine details in the target scene. This leads to the use of very deep networks, with 88 layers in the case of R2L. While this allows for detailed rendering, it negatively impacts the rendering speed since the network needs to be evaluated for every pixel in the image.
To address this issue, MobileR2L proposes an alternative approach. Instead of directly using deep networks to generate high-resolution pixels, they employ deep networks to generate a low-resolution feature map, which is subsequently up-sampled to obtain high-resolution images using shallow super-resolution modules. This approach greatly reduces the computational requirements and enables real-time rendering on mobile devices. In our work, we adopt a similar architecture, with a specific focus on improving the efficiency of generating the low-resolution feature map.
### LightSpeed
We first describe the light-slab ray representation for both frontal and non-frontal scenes in Sec. 3.2.1. Next, we detail our grid representation for the light-slab in Sec. 3.2.2 and explain the procedure for synthesizing images from this grid representation in Sec. 3.3. Refer to Fig. 2 for a visual overview.
#### 3.2.1 Ray Parameterization
**Light Slab (two-plane representation).** Instead of utilizing stratified points or Plucker coordinates, we represent each directed light ray using the classic two-plane parameterization[19] as an ordered pair of intersection points with two fixed planes. Formally,
\[\textbf{r}=(x,y,u,v), \tag{2}\]
where \((x,y)\in\mathbb{R}^{2}\) and \((u,v)\in\mathbb{R}^{2}\) are ray intersection points with fixed planes \(P_{1}\) and \(P_{2}\) in their respective coordinate systems. We refer to these four numbers as the ray coordinates in the 4D ray space. To accommodate unbounded scenes, we utilize normalized device coordinates (NDC) and select the planes \(P_{1}\) and \(P_{2}\) as the near and far planes (at infinity) defined in NDC.
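For concreteness, the mapping from a camera ray to light-slab coordinates can be sketched as follows. This is a minimal example, assuming rays are already expressed in NDC and placing \(P_{1}\) at \(z=0\) and \(P_{2}\) at \(z=1\); the function name and plane placements are illustrative choices, not the exact implementation:

```
import torch

def rays_to_light_slab(origins, dirs, z1=0.0, z2=1.0):
    """Map rays (origin, direction) to light-slab coordinates r = (x, y, u, v).

    origins, dirs: (N, 3) tensors in normalized device coordinates (NDC).
    The parameterization planes are z = z1 (near) and z = z2 (far).
    Rays parallel to the planes (dirs[:, 2] == 0) cannot be represented.
    """
    t1 = (z1 - origins[:, 2]) / dirs[:, 2]      # ray parameter at plane P1
    t2 = (z2 - origins[:, 2]) / dirs[:, 2]      # ray parameter at plane P2
    p1 = origins + t1[:, None] * dirs           # intersection point with P1
    p2 = origins + t2[:, None] * dirs           # intersection point with P2
    return torch.cat([p1[:, :2], p2[:, :2]], dim=-1)   # (N, 4): (x, y, u, v)
```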
**Divided Light Slabs for Non-frontal Scenes.** A single light slab is only suitable for modeling a frontal scene and cannot capture light rays that are parallel to the planes. To model non-frontal scenes, we employ a divide-and-conquer strategy by using a composition of multiple light-slab representations to learn the full light field. We partition the light fields into subsets, and each subset is learned using a separate NeLF model. The partitions ensure sufficient overlap between sub-scenes, resulting in a continuous light field representation without additional losses while maintaining the frontal scene assumption. To perform view synthesis, we identify the scene subset of the viewing ray and query the corresponding NeLF to generate pixel values. Unlike Attal _et al_. [2], we do not perform alpha blending of multiple local light fields because our division is based on ray space rather than partitioning 3D space.
For _object-centric_\(360^{\circ}\) scenes, we propose to partition the scene into \(5\) parts using surfaces of a near-isometric trapezoidal prism and approximate each sub-scene as frontal (as illustrated in Fig. 3). For _unbounded_\(360^{\circ}\) scenes, we perform partitioning using k-means clustering based on camera orientation and position. We refer the reader to the supplementary material for more details on our choice of space partitioning.
#### 3.2.2 Feature Grids for Light Field Representation
Storing the 4D light-slab directly using a high-resolution grid is impractical in terms of storage and inefficient for learning due to the excessive number of parameters to optimize. The primary concern arises from the fact that the 4D grid size increases quartically with respect to resolutions. To address this, we suggest the following design choices to achieve a compact representation of the light-slab without exponentially increasing the parameter count.
Figure 2: **LightSpeed Model for Frontal Scenes.** Taking a low-resolution ray bundle as input, our approach formulates rays in two-plane ray representation. This enables us to encode each ray using multi-scale feature grids, as shown. The encoded ray bundle is fed into a decoder network consisting of convolutions and super-resolution modules yielding the high-resolution image.
**Lower Resolution Feature Grids.** Instead of storing grids at full resolution, we choose to utilize low-resolution feature grids to take advantage of the quartic reduction in storage achieved through resolution reduction. We anticipate that the decrease in resolution can be compensated by employing high-dimensional features. In our implementation, we have determined that feature grids of size \(128^{4}\) are suitable for synthesizing full HD images. Additionally, we adopt the approach from Instant-NGP [25] to incorporate multi-resolution grids, which enables an efficient representation of both global and local scene structures.
**Decompose 4D Grids into 2D Grids.** Taking inspiration from k-planes [10], we propose to decompose the 4D feature grid using \(\binom{4}{2}=6\) number of 2D grids, with each 2D grid representing a sub-space of the 4D ray space. This results in a storage complexity of \(\mathcal{O}(6N^{2})\), greatly reducing the storage required to deploy our grid-based approach to mobile devices.
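A quick back-of-the-envelope comparison illustrates the saving; the resolution, feature dimension, and level count below are illustrative values only:

```
N, F, L = 128, 4, 4                # grid resolution, feature dim, multi-res levels (illustrative)
dense_4d = L * F * N ** 4          # a dense 4D grid grows quartically with N
planes_2d = L * F * 6 * N ** 2     # six 2D planes per level grow only quadratically
print(dense_4d / planes_2d)        # ~2730x fewer parameters for N = 128
```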
### View Synthesis using Feature Grids
Similar to MobileR2L [4], LightSpeed takes two steps to render a high resolution image (see Fig. 2).
**Encoding Low-Resolution Ray Bundles.** The first step is to render a low-resolution (\(H_{L}\times W_{L}\)) feature map from the feature grids. This is accomplished by generating ray bundles at a reduced resolution, where each ray corresponds to a pixel in a downsampled image. We project each ray's 4D coordinates \(\mathbf{r}=(x,y,u,v)\) onto 6 2D feature grids \(\mathbf{G}_{xy},\mathbf{G}_{xu},\mathbf{G}_{xv},\mathbf{G}_{yu},\mathbf{G}_{ yv},\mathbf{G}_{uv}\) to obtain feature vectors from corresponding sub-spaces. The feature values undergo bilinear interpolation from the 2D grids, resulting in six interpolated \(F\)-dimensional features. These features are subsequently concatenated to form a \(6F\)-dimensional feature vector. As the feature grids are multi-resolutional with \(L\) levels, features \(g_{l}(\mathbf{r})\in\mathbb{R}^{6F}\) from different levels (indexed by \(l\)) are concatenated together to create a single feature \(g(\mathbf{r})\in\mathbb{R}^{6LF}\). Combining the features from all rays generates a low-resolution 2D feature map \(\tilde{\mathbf{G}}\in\mathbb{R}^{H_{L}\times W_{L}\times 6LF}\), which is then processed further in the subsequent step.
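A minimal sketch of this encoding step for a single resolution level is shown below; grid shapes, names, and initialization are our own choices, and a full implementation would repeat this over \(L\) levels and concatenate the results, exactly as described above. Ray coordinates are assumed to be normalized to \([-1,1]\) so that they can be used directly as `grid_sample` locations:

```
import torch
import torch.nn.functional as F
from itertools import combinations

class LightSlabEncoder(torch.nn.Module):
    """Encode light-slab rays (x, y, u, v) with six learnable 2D feature planes."""

    def __init__(self, feat_dim=4, res=128):
        super().__init__()
        self.pairs = list(combinations(range(4), 2))   # sub-spaces: xy, xu, xv, yu, yv, uv
        self.planes = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(1, feat_dim, res, res))
             for _ in self.pairs])

    def forward(self, rays):                            # rays: (N, 4), values in [-1, 1]
        feats = []
        for (i, j), plane in zip(self.pairs, self.planes):
            loc = rays[:, [i, j]].view(1, -1, 1, 2)     # (1, N, 1, 2) sampling locations
            f = F.grid_sample(plane, loc, align_corners=True)   # bilinear lookup: (1, C, N, 1)
            feats.append(f[0, :, :, 0].t())             # -> (N, C)
        return torch.cat(feats, dim=-1)                 # (N, 6 * feat_dim)
```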
**Decoding High-Resolution Image.** To mitigate the approximation introduced by decomposing 4D grids into 2D grids, the features \(g(\mathbf{r})\) undergo additional processing through a MLP. This is implemented by applying a series of \(1\times 1\) convolutional layers to the low-resolution feature map. Subsequently, the processed feature map is passed through a sequence of upsampling layers (similar to MobileR2L [4]) to generate a high-resolution image.
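The decoder itself can be sketched as a stack of \(1\times 1\) convolutions followed by upsampling blocks. The channel widths, depths, and the use of `PixelShuffle` below are placeholder choices and do not reproduce the exact MobileR2L super-resolution modules; the input channel count matches the \(6LF\) features of the illustrative encoder above:

```
import torch.nn as nn

def make_decoder(in_ch=96, width=256, n_conv=8, n_up=3):
    """1x1-conv 'MLP' over the low-resolution feature map, then learned upsampling to RGB."""
    layers, c = [], in_ch
    for _ in range(n_conv):                    # point-wise processing of the grid features
        layers += [nn.Conv2d(c, width, kernel_size=1), nn.GELU()]
        c = width
    for _ in range(n_up):                      # each block doubles the spatial resolution
        layers += [nn.Conv2d(c, 4 * c, kernel_size=3, padding=1),
                   nn.PixelShuffle(2), nn.GELU()]
    layers += [nn.Conv2d(c, 3, kernel_size=1), nn.Sigmoid()]   # RGB image
    return nn.Sequential(*layers)
```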
## 4 Experiments
**Datasets.** We benchmark our approach on the real-world forward-facing [22][23], the realistic synthetic \(360^{\circ}\) datasets [23] and unbounded \(360^{\circ}\) scenes [3]. The forward-facing dataset consists of \(8\) real-world scenes captured using cellphones, with \(20\)-\(60\) images per scene and 1/8th of the images used for testing. The synthetic \(360^{\circ}\) dataset has \(8\) scenes, each having \(100\) training views and \(200\) testing views. The unbounded \(360^{\circ}\) dataset consists of \(5\) outdoor and \(4\) indoor scenes with a central object and a detailed background. Each scene has between \(100\) to \(300\) images, with \(1\) in \(8\) images used for testing. We use \(756\times 1008\) LLFF dataset images, \(800\times 800\) resolution for the \(360^{\circ}\) scenes, and 1/4th of the original resolution for the unbounded \(360^{\circ}\) scenes.
Figure 3: **Space Partitioning for Non-frontal scenes.** We partition _object-centric_\(360^{\circ}\) scenes into 5 parts as shown. Each colored face of the trapezoidal prism corresponds to a partitioning plane. Each scene subset is subsequently learned as a separate NeLF
**Training Details.** We follow a similar training scheme as MobileR2L: train the LightSpeed model using pseudo-data mined from a pre-trained NeRF teacher. We specifically train MipNeRF teachers to sample \(10\)k pseudo-data points for the LLFF dataset. For synthetic and unbounded \(360^{\circ}\) scenes, we mine \(30\)k samples per scene using Instant-NGP [25] teachers. Following this, we fine-tune the model on the original data. We optimize for the mean-squared error between generated and ground truth images. We refer the reader to the supplementary material for more training details.
We use \(63\times 84\) (\(12\times\) downsampled from the desired \(756\times 1008\) resolution) input ray bundles for the forward-facing scenes. For \(360^{\circ}\) scenes, we use \(100\times 100\) (\(8\times\) downsampled from the desired \(800\times 800\) image resolution) ray bundles. For unbounded scenes, we use ray bundles \(12\times\) downsampled from the image resolution we use. We train our frontal LightSpeed models as well as each sub-scene model in non-frontal scenes for \(200\)k iterations.
**Baselines and Metrics.** We compare our method's performance on bounded scenes with MobileR2L [4], MobileNeRF [6], and SNeRG [13]. We evaluate our method for rendering quality using three metrics: PSNR, LPIPS, and SSIM. For unbounded scenes, we report the PSNR metric on 6 scenes and compare it with MobileNeRF [6] and NeRFMeshing [27]. To further demonstrate the effectiveness of our approach, we compare our approach with others on two other criteria: (a) **On-device Rendering Speed**: We report and compare average inference times per rendered frame on various mobile chips, including Apple A15, Apple M1 Pro and Snapdragon SM8450 chips; and (b) **Efficient Training**: We compare the number of iterations LightSpeed and MobileR2L require to reach a target PSNR. We pick the Lego scene from the \(360^{\circ}\) scenes and the Fern scene from the forward-facing scenes as representative scenes to compare. We also report the storage requirements of our method per frontal scene and compare it with baselines.
### Results and Analysis
**Rendering Quality.** As shown in Tab. 1, we obtain better results on all rendering fidelity metrics on the two bounded datasets. We also outperform MobileNeRF and NeRFMeshing on 4 out of 6 unbounded \(360^{\circ}\) scenes. We refer the reader to Fig. 4 for a visual comparison of our approach with MobileR2L and NeRF. Our method has much better rendering quality, capturing fine-level details where MobileR2L, and in some cases even the original NeRF model, fails. Note that we use Instant-NGP teachers for \(360^{\circ}\) scenes, which have slightly inferior performance to the MipNeRF teachers used by MobileR2L. This further shows the robustness of our approach to inferior NeRF teachers.
**Storage Cost.** We report storage requirements in Tab. 1. Our approach has an on-device storage footprint competitive with the MobileR2L model. Specifically, we require a total of \(16.3\) MB of storage per frontal scene. The increase in storage is expected since we are using grids to encode our light field. We also report storage values for lighter LightSpeed networks in the ablation study (see Tab. 5), all of which have similar or better rendering quality than the full-sized MobileR2L network.
**Training Speed.** We benchmark the training times and the number of iterations required for LightSpeed and MobileR2L in Tab. 2, with a target PSNR of \(24\) for the Fern scene and \(32\) for the Lego scene. Our approach demonstrates a training speed-up of \(2.5\times\) on both scenes. Since we are modeling \(360^{\circ}\) scenes as a composition of \(5\) light fields, we can train them in parallel (which is not possible for MobileR2L), further trimming down the training time. Moreover, the training speedup reaches \(\sim 4\times\) when networks are trained beyond the mentioned target PSNR (see Fig. 1).
Figure 4: **Qualitative Results on frontal and non-frontal scenes. Zoomed-in comparison between NeRF [23], MobileR2L [4] and our LightSpeed approach.**
**Inference Speed.** Tab. 3 shows our method's inference time as compared to MobileR2L and MobileNeRF. We maintain a comparable runtime as MobileR2L while having better rendering fidelity. Since on-device inference is crucial to our problem setting, we also report rendering times of a smaller 30-layered decoder network that has similar rendering quality as the MobileR2L model (see Tab. 5).
### Ablations
**Data Requirements.** We use \(10\)k samples, as used by MobileR2L, to train LightSpeed models for frontal scenes. However, for non-frontal scenes, we resort to using \(30\)k pseudo-data samples per
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Synthetic \(360^{\circ}\)} & \multicolumn{3}{c}{Forward-Facing} \\ \cline{2-9} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & Storage \(\downarrow\) \\ \hline NeRF [23] & 31.01 & 0.947 & 0.081 & 26.50 & 0.811 & 0.250 & - \\ NeRF-PyTorch & 30.92 & 0.991 & 0.045 & 26.26 & 0.965 & 0.153 & - \\ \hline SNeRG [13] & 30.38 & 0.950 & 0.050 & 25.63 & 0.818 & 0.183 & 337.3 MB \\ MobileNeRF [6] & 30.90 & 0.947 & 0.062 & 25.91 & 0.825 & 0.183 & 201.5 MB \\ MobileR2L [4] & 31.34 & 0.993 & 0.051 & 26.15 & 0.966 & 0.187 & **8.2 MB** \\ \hline LightSpeed (Ours) & **32.23** & **0.994** & **0.038** & **26.50** & **0.968** & **0.173** & 16.3 MB \\ \hline Our Teacher & 32.96 & - & - & 26.85 & 0.827 & 0.226 & - \\ \hline \hline \multirow{4}{*}{Method} & \multicolumn{3}{c}{Unbounded \(360^{\circ}\)} \\ \cline{2-9} & Bicycle & Garden & Stump & Bonsai & Counter & Kitchen \\ \hline MobileNeRF [6] & 21.70 & 23.54 & **23.95** & - & - & - \\ NeRFMeshing [27] & 21.15 & 22.91 & 22.66 & 25.58 & 20.00 & 23.59 \\ \hline LightSpeed (Ours) & **22.51** & **24.54** & 22.22 & **28.24** & 25.46 & **27.82** \\ \hline Instant-NGP (Our teacher) [25] & 21.70 & 23.40 & 23.20 & 27.4 & **25.80** & 27.50 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Quantitative Comparison** on Forward-facing, Synthetic \(360^{\circ}\) and Unbounded \(360^{\circ}\) Datasets. LightSpeed achieves the best rendering quality with competitive storage. We use an out-of-the-box Instant-NGP [25] implementation [1] (as teachers for \(360^{\circ}\) scenes) which does not report SSIM and LPIPS values. We omit storage for NeRF-based methods since they are not comparable.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Forward-Facing: Fern} & \multicolumn{2}{c}{Synthetic \(360^{\circ}\): Lego} \\ \cline{2-5} Method & Duration \(\downarrow\) & Iterations \(\downarrow\) & Duration \(\downarrow\) & Iterations \(\downarrow\) \\ \hline MobileR2L & 12.5 hours & 70k & 192 hours & 860k \\ LightSpeed & **4 hours** & **27k** & **75 hours** & **425k** \\ LightSpeed (Parallelized) & - & - & **15 hours** & **85k** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Training Time** for Lego and Fern scenes with 32 and 24 target PSNRs. LightSpeed trains significantly faster than MobileR2L. It achieves even greater speedup when trained in parallel for \(360^{\circ}\) scenes (parallel training is not applicable for frontal scenes).
scene. Dividing \(10\)k samples amongst \(5\) sub-scenes assigns too few samples per sub-scene, which is detrimental to grid learning. We experimentally validate data requirements by comparing MobileR2L and LightSpeed trained with different amounts of pseudo-data. We train one \(400\times 400\) sub-scene from the Lego scene for 200k iterations with 1/5th of \(10\)k and \(30\)k samples, _i.e._, \(2\)k and \(6\)k samples. Tab. 4 exhibits a significantly larger drop in rendering quality for the LightSpeed network as compared to MobileR2L when provided with less pseudo-data.
**Decoder Network Size.** We further analyze the trade-off between inference speed and rendering quality of our method and MobileR2L. To this end, we experiment with decoders of different depths and widths. Each network is trained for \(200\)k iterations and benchmarked on an iPhone 13. Tab. 5 shows that a \(30\)-layered LightSpeed model has a better inference speed and rendering quality as compared to the \(60\)-layered MobileR2L model. This \(30\)-layered variant further occupies less storage as compared to its full-sized counterpart. Furthermore, lighter LightSpeed networks obtain a comparable performance as the \(60\)-layered MobileR2L. Note that reducing the network capacity of MobileR2L results in significant drops in performance. This means that we can get the same rendering quality as MobileR2L with considerably reduced on-device resources, paving the way for a much better trade-off between rendering quality and on-device inference speed.
**Ray-Space Grid Encoding.** We provide an ablation in Tab. 6 below on how the proposed ray-space grid encoder helps as compared to just using the light-slab representation with a traditional frequency encoder. We compare different LightSpeed configurations with grid-encoder and frequency encoders. Networks are trained for 200k iterations on a full-resolution 800\(\times\)800 Lego sub-scene from Synthetic
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{3}{c}{2k Samples} & \multicolumn{3}{c}{6k Samples} \\ \cline{2-7} Method & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline MobileR2L & 30.19 & 0.9894 & 0.0354 & 30.56 & 0.9898 & 0.0336 \\ LightSpeed (Ours) & 30.44 & 0.9899 & 0.0299 & **31.2** & **0.9906** & **0.0284** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Pseudo-Data Requirement for Non-Frontal Scenes.** We analyze the importance of mining more pseudo-data for non-frontal scenes. Using 1/5th of \(10\)k and \(30\)k sampled pseudo-data points, we find more pseudo-data is crucial for the boosted performance of the LightSpeed model.
Figure 5: **Test PSNR v/s Training Iterations.** We compare test set PSNR obtained by LightSpeed (Grid)(ours), LightSpeed (frequency encoded), and Plücker-based neural light field as the training progresses for 3 different network configurations.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & PSNR \(\uparrow\) & Latency \(\downarrow\) & Storage \(\downarrow\) & FLOPs \(\downarrow\) \\ \hline
15-L W-256 MobileR2L & 27.69 & 14.54 ms & 2.4 MB & 12626M \\
30-L W-128 MobileR2L & 27.54 & 14.47 ms & 1.4 MB & 8950M \\
30-L W-256 MobileR2L & 29.21 & 18.59 ms & 4.5 MB & 23112M \\
60-L W-256 MobileR2L & 30.34 & 22.65 ms & 8.2 MB & 42772M \\ \hline
15-L W-256 LightSpeed & 30.37 & 14.94 ms & 10.5 MB & 12833M \\
30-L W-128 LightSpeed & 30.13 & 14.86 ms & 9.5 MB & 9065M \\
30-L W-256 LightSpeed & 31.70 & 20.35 ms & 12.6 MB & 23319M \\
60-L W-256 LightSpeed & 32.34 & 26.47 ms & 16.3 MB & 42980M \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Decoder Network Size.** Our approach maintains a much better tradeoff between inference speeds v/s rendering quality, with our smallest network achieving comparable quality to the MobileR2L. Benchmarking done on an iPhone 13. L is network depth, and W is network width.
\(360^{\circ}\) dataset. Further, we show the training dynamics of all the trained variants in Fig. 5 (red and green plots). As claimed, our approach offers better visual fidelity and training dynamics (iterations to reach a target PSNR) for computationally cheaper small networks as well as full-sized networks.
**Comparison with Plucker Representation.** Given the challenges of discretizing the Plucker representation, we compare using positionally encoded Plucker coordinates against our grid-based light-slab approach in Tab. 7 below, for different network sizes, to demonstrate the effectiveness of our approach. We train all models for 200k iterations on one 800\(\times\)800 Lego sub-scene. We also share training curves for the variants in question in Fig. 5 (red and blue curves). As claimed, our integrated approach performs better in terms of training time and test-time visual fidelity for large and small models (with lower computational cost) alike, whereas the Plucker-based network shows a sharp decline in visual fidelity and increased training times to reach a target test PSNR as network size is reduced.
## 5 Discussion and Conclusion
In this paper, we propose an efficient method, LightSpeed, to learn neural light fields using the classic two-plane ray representation. Our approach leverages grid-based light field representations to accelerate light field training and boost rendering quality. We demonstrate the advantages of our approach not only on frontal scenes but also on non-frontal scenes by following a divide-and-conquer strategy and modeling them as frontal sub-scenes. Our method achieves state-of-the-art rendering quality among prior works while at the same time providing a significantly better trade-off between rendering fidelity and latency, paving the way for real-time view synthesis on resource-constrained mobile devices.
**Limitations.** While LightSpeed excels at efficiently modeling frontal and \(360^{\circ}\) light fields, it currently lacks the capability to handle free camera trajectories. The current implementation does not support refocusing or anti-aliasing, and is limited to static scenes without the ability to model deformable objects such as humans. We plan to explore these directions in future work.
**Broader Impact.** Focused on finding efficiencies in novel view synthesis, our study could significantly reduce costs, enabling wider access to this technology. However, potential misuse, like unsolicited impersonations, must be mitigated.
\begin{table}
\begin{tabular}{l l} \hline \hline Method & PSNR \(\uparrow\) \\ \hline
15-L W-256 LS (PE) & 28.84 \\
30-L W-256 LS (PE) & 30.63 \\
60-L W-256 LS (PE) & 32.16 \\ \hline
15-L W-256 LS (Grid) & 30.37 \\
30-L W-256 LS (Grid) & 31.70 \\
60-L W-256 LS (Grid) & 32.34 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Effect of using a Ray-Space Grid Encoder.** We demonstrate the effect of using a grid-based LightSpeed by comparing with a frequency encoded variant (no grid). L is network depth, and W is network width.
\begin{table}
\begin{tabular}{l l} \hline \hline Method & PSNR \(\uparrow\) \\ \hline
15-L W-256 Plucker & 28.65 \\
30-L W-256 Plucker & 30.84 \\
60-L W-256 Plucker & 32.14 \\ \hline
15-L W-256 LS & 30.37 \\
30-L W-256 LS & 31.70 \\
60-L W-256 LS & 32.34 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Light-Slab Grid Representation vs. Plucker Coordinates.** We compare the light-slab based LightSpeed (LS) with a positionally encoded variant of the Plucker ray representation. L is network depth, and W is network width. |
2301.00911 | Detecting Information Relays in Deep Neural Networks | Deep learning of artificial neural networks (ANNs) is creating highly
functional processes that are, unfortunately, nearly as hard to interpret as
their biological counterparts. Identification of functional modules in natural
brains plays an important role in cognitive and neuroscience alike, and can be
carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or
calcium imaging. However, we do not have such robust methods at our disposal
when it comes to understanding functional modules in artificial neural
networks. Ideally, understanding which parts of an artificial neural network
perform what function might help us to address a number of vexing problems in
ANN research, such as catastrophic forgetting and overfitting. Furthermore,
revealing a network's modularity could improve our trust in them by making
these black boxes more transparent. Here, we introduce a new
information-theoretic concept that proves useful in understanding and analyzing
a network's functional modularity: the relay information $I_R$. The relay
information measures how much information groups of neurons that participate in
a particular function (modules) relay from inputs to outputs. Combined with a
greedy search algorithm, relay information can be used to identify
computational modules in neural networks. We also show that the functionality
of modules correlates with the amount of relay information they carry. | Arend Hintze, Christoph Adami | 2023-01-03T01:02:51Z | http://arxiv.org/abs/2301.00911v2 | # Detecting Information Relays in Deep Neural Networks
###### Abstract
Deep learning of artificial neural networks (ANNs) is creating highly functional processes that are, unfortunately, nearly as hard to interpret as their biological counterparts. Identification of functional modules in natural brains plays an important role in cognitive and neuroscience alike, and can be carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or calcium imaging. However, we do not have such robust methods at our disposal when it comes to understanding functional modules in artificial neural networks. Ideally, understanding which parts of an artificial neural network perform what function might help us to address a number of vexing problems in ANN research, such as catastrophic forgetting and overfitting. Furthermore, revealing a network's modularity could improve our trust in them by making these black boxes more transparent. Here, we introduce a new information-theoretic concept that proves useful in understanding and analyzing a network's functional modularity: the relay information \(I_{R}\). The relay information measures how much information groups of neurons that participate in a particular function (modules) relay from inputs to outputs. Combined with a greedy search algorithm, relay information can be used to _identify_ computational modules in neural networks. We also show that the functionality of modules correlates with the amount of relay information they carry.
## 1 Introduction
Neural networks, be they natural or artificial deep-learned ones, notoriously are black boxes (Castelvecchi, 2016; Adadi and Berrada, 2018). To understand how groups of neurons perform computations, to obtain insight into the algorithms of the human mind, or to be able to trust artificial systems, we need to make the network's processing more transparent. To this end, various information-theoretic and other methods have been developed to shed light on the inner workings of neural networks. Transfer entropy (Schreiber, 2000) seeks to identify how much information is transferred from one node (or neuron) to the next, which in principle can detect causal links in a network (Amblard and Michel, 2011) or be used to understand general properties about how information is distributed among nodes (Tehrani-Saleh and Adami, 2020; Hintze and Adami, 2020). In general, information theory can be used to make inferences in cognitive- and neuroscience (McDonnell _et al._, 2011; Dimitrov _et al._, 2011; Timme and Lapish, 2018). Predictive information (Bialek _et al._, 2001; Ay _et al._, 2008) determines how much the outputs of a neural network depend on the inputs to the system or on hidden states. Integrated information (Tononi, 2015) quantifies how much a system combines inputs into a single experience and identifies the central component(s) in which that happens. Information theory is also used to determine cognitive control (Fan, 2014) and neural coding (Borst and Theunissen, 1999) in natural systems. Finally, information theory is used to characterize _representations_(Marstaller _et al._, 2013) that quantify how much (and where) information is stored about the environment.
Despite the diverse range of applications of information theory to neuronal networks, the question of which module or subset of nodes in an artificial neural network performs which function remains an open one. In network neuroscience (Sporns, 2022; Hagmann _et al._, 2008; Sporns and Betzel, 2016), this question is investigated using functional magnetic resonance imaging (fMRI) (Logothetis, 2008; He _et al._, 2009), electroencephalography (EEG) (Thatcher, 2011), magnetoencephalography (MEG) (Thatcher, 2011), and other physiological methods. fMRI specifically identifies functional cortical areas by the increase in oxygen consumption required to perform a particular task. However, an fMRI detects two things at the same time: the activity of neurons involved in a specific task, and the fact that they often form spatially associated clusters. As a consequence, in an fMRI analysis, functional and structural modularity coincide. In deep convolutional neural networks, detecting modules based on their biological activity is obviously impossible. The essential computation of the dot product of the state vector and weight matrix does not differ depending on how involved nodes and weights are in function. Furthermore, artificial neural networks do not display any structure beyond the order of layers. The order of nodes in a layer is interchangeable, as long as the associated weights change with them.
One approach to determine functional modularity in the context of ANNs is to determine the degree of modularity from the weights that connect nodes (Shine _et al._, 2021), by determining how compartmentalized information is (Hintze _et al._, 2018; Kirkpatrick and Hintze, 2019), or by performing a knockout analysis that allows tasks to be associated with the processing nodes of the neural network (C G _et al._, 2018). However, results from such a knockout analysis are often not conclusive.
Functional modularity in ANNs is interesting for another reason: it appears to affect a phenomenon known as _catastrophic forgetting_ (McCloskey and Cohen, 1989; French, 1999), where a network trained on one task can achieve high performance, but catastrophically loses this performance when the network is sequentially trained on a new task. The degree of modularity appears to be related to the method by which these networks are trained. Using a genetic algorithm to modify weights (neuroevolution, see (Stanley _et al._, 2019)) seems to produce modular structures automatically, as was also observed in the evolution of metabolic networks (Hintze and Adami, 2008). This modularity appears to protect ANNs from catastrophic forgetting (Ellefsen _et al._, 2015). Neural networks trained via backpropagation are unlikely to be modular since this training method recruits all weights into the task trained (Hintze, 2021). Similarly, dropout regularization (Hinton _et al._, 2012) is believed to cause all weights to be involved in solving a task (making the neural network more robust), which in turn prevents overfitting.
While many methods seek to prevent catastrophic forgetting (Parisi _et al._, 2019), such as Elastic Weight Consolidation (EWC) (Kirkpatrick _et al._, 2017), algorithms such as LIME (Ribeiro _et al._, 2016), and even replay during sleep (Golden _et al._, 2022), it is still argued that catastrophic forgetting has not been solved (Kemker _et al._, 2018). If catastrophic forgetting is due to a lack of modularization of information, it becomes crucial to accurately measure this modularization to identify learning schemes that promote modules. The problem of identifying modules responsible for different functions is further aggravated when information theory and perturbation analysis (via node knockout) disagree (Bohm _et al._, 2022; Sella, 2022).
When identifying candidate neurons in hidden layers that might contain information about the inputs that are used in decision-making, perturbing those neurons by noise or knockout should disrupt function. Similarly, hidden nodes _not_ containing information about inputs should, when perturbed in this manner, not alter the outputs encoding decisions. However, if information is stored _redundantly_, perturbing only part of the redundant nodes will not necessarily disrupt function, even though they carry information. At the same time, nodes without function or information can still accidentally perturb outputs when experiencing noise (Bohm _et al._, 2022; Sella, 2022).
Here, we develop a new information-theoretic measure that quantifies how much information a set of nodes _relays_ between inputs and outputs (relay information \(I_{R}\)). This measure can be applied to all combinations of neurons (sets) to identify which set of a given size contains the most information. While the number of sets of neurons is exponential in size, the number of tests required to find the set with the largest amount of information can be significantly reduced by taking advantage of the fact that a smaller subset cannot have more information than its superset. Thus, this measure can be combined with a greedy search algorithm that identifies the relevant computational modules connecting the inputs to the outputs. We will demonstrate on a wide range of examples the function and applicability of this new method. Specifically, using a positive control in which the nodes relaying the information from inputs to outputs are known, we demonstrate that relay information indeed allows us to recover the relevant functional nodes. We compare this control to a regularly-trained neural network, and show that perturbations on nodes carrying relay information cause failures in their predicted functionality.
Methods
### Training Artificial Neural Networks
The neural networks used here are implemented using PyTorch (Paszke _et al._, 2019) and trained on the MNIST handwritten numerals dataset (LeCun _et al._, 1998). The MNIST dataset consists of 60,000 training images and 10,000 test images of the ten numerals 0-9. Each grey-scale image has \(28\times 28\) pixels with values normalized between \(-1.0\) to \(1.0\). Here, we use two different networks. The _full_ network has 784 input nodes, followed by a layer of 20 hidden nodes with a standard summation aggregation function, and a tanh threshold function. The output layer needs 10 nodes to account for the ten possible numeral classes and uses the same aggregation and threshold function as the hidden layer. The _composite_ network is an aggregate of ten sub-networks each trained to recognize only a single number. In each of the sub-networks, the hidden layer has two nodes, with a single node in the output layer.
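The two architectures can be written compactly in PyTorch; the sketch below uses our own class and attribute names (the aggregation function is the standard weighted sum implemented by `nn.Linear`) and omits the training loop:

```
import torch
import torch.nn as nn

class DigitNet(nn.Module):
    """Fully connected network: 784 inputs -> tanh hidden layer -> tanh outputs.

    The full network uses n_hidden=20, n_out=10; each sub-network uses n_hidden=2, n_out=1.
    """
    def __init__(self, n_hidden=20, n_out=10):
        super().__init__()
        self.hidden = nn.Linear(28 * 28, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.tanh(self.hidden(x.flatten(1)))   # hidden-layer activations in [-1, 1]
        return torch.tanh(self.out(h))              # argmax over outputs gives the prediction
```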
Networks are trained using the Adam optimizer (Kingma and Ba, 2015) until they either reach a recognition accuracy of 95% or else reach a given fixed number of training epochs. The node in the output layer with the highest activation is used to indicate the network's prediction of the numeral depicted in the image (argmax function).
### Composing an Artificial Neural Network from Specialized Networks
In a typical ANN performing the MNIST classification task, all nodes of the hidden layer are involved in relaying the information from the input to the output layer: a phenomenon we previously termed _informational smearing_(Hintze _et al._, 2018), as the information is "smeared" over many neurons (as opposed to being localized to one or a few neurons). Our control network is constructed in such a manner that functionality is strictly distributed over very specific nodes. Specifically, we construct a network with 20 hidden nodes by aggregating ten sub-networks with two hidden nodes each. Each of the sub-networks is only trained to recognize a single numeral amongst the background of the other nine, using only two hidden nodes. By combining these 10 sub-networks networks into the _composite model_, we can create a control in which the relay neurons (the two hidden neurons in each of the sub-networks) are guaranteed to only relay information about a very specific function (see Figure 1). Note that those composite networks do not undergo further training.
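Assembling the composite model then amounts to copying each sub-network's weights into its own block of the larger matrices, leaving all other second-layer weights at zero. The sketch below assumes the `DigitNet` class above, with `sub_nets[i]` the trained two-hidden-node network for numeral \(i\) and `composite = DigitNet(n_hidden=20, n_out=10)`:

```
import torch

@torch.no_grad()
def compose(sub_nets, composite):
    """Copy each sub-network's weights into its own 2-hidden-node / 1-output block."""
    for i, sub in enumerate(sub_nets):
        h = slice(2 * i, 2 * i + 2)                      # hidden nodes owned by numeral i
        composite.hidden.weight[h] = sub.hidden.weight   # (2, 784) block of the first layer
        composite.hidden.bias[h] = sub.hidden.bias
        composite.out.weight[i].zero_()                  # output i reads only its own block
        composite.out.weight[i, h] = sub.out.weight[0]
        composite.out.bias[i] = sub.out.bias[0]
    return composite
```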
### Information-Theoretic Measure of Computational Modules
An artificial neural network can be viewed as an information-theoretic channel (Shannon, 1948) that relays the information received at the input layer to the output layer while performing some computations along the way. To measure the throughput of information, define the random variable \(X_{\text{in}}\) with ten states (one for each numeral) and Shannon entropy \(H(X_{\text{in}})\), while the outputs form a random variable \(X_{\text{out}}\) with entropy \(H(X_{\text{out}})\). The mutual information between both \(I(X_{\text{in}};X_{\text{out}})\) (see Equation (1)) consequently measures how much the output symbol distribution is determined by the inputs (and vice versa, as this is an un-directed measurement):
\[I(X_{\text{in}};X_{\text{out}})=H(X_{\text{in}})+H(X_{\text{out}})-H(X_{\text {out}},X_{\text{in}})\;. \tag{1}\]
Here, \(H(X_{\text{out}},X_{\text{in}})\) stands for the joint entropy of the input and output variables.
At the initialization of the network, weights are randomly seeded, giving rise to a network that randomly classifies images. In this case, the confusion matrix is relatively uniform and the conditional entropy \(H(X_{\text{out}}|X_{\text{in}})=H(X_{\text{out}},X_{\text{in}})-H(X_{\text{in}} )\approx H(X_{\text{out}})\), leading to low information \(I(X_{\text{in}};X_{\text{out}})\). However, over the course of training, the prediction accuracy increases, leading ultimately to a strictly diagonal confusion matrix and a vanishing conditional entropy \(H(X_{\text{out}}|X_{\text{in}})\), implying that every numeral is properly classified. In this case, the information channel has maximal information (information equals capacity) when measured over the training or test set. Note that, when we calculate the entropy of the inputs \(H(X_{\text{in}})\), we use only image labels (not all possible images).
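In practice, \(I(X_{\text{in}};X_{\text{out}})\) can be estimated from the confusion matrix of true labels versus predicted classes accumulated over the test set; a plug-in (maximum-likelihood) estimate without bias correction is sketched below:

```
import numpy as np

def mutual_information(confusion):
    """I(X_in; X_out) in bits, from a (10, 10) matrix of label/prediction counts."""
    p = confusion / confusion.sum()          # joint distribution p(x_in, x_out)
    p_in, p_out = p.sum(1), p.sum(0)         # marginals
    h = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return h(p_in) + h(p_out) - h(p)         # Equation (1)
```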
We can view this joint channel as being composed of two sequential channels: one from the inputs to the hidden states, and one from the hidden states to the outputs. The information that the outputs receive is still determined by the inputs, but now via the hidden variable \(Y\). A perfect channel can only exist if the hidden layer has sufficient bandwidth to transmit all of the entropy present at the inputs, that is, \(H(Y)\geq H(X_{\rm in})\).
We can now write the information that flows from the inputs via the hidden states to the outputs in terms of the shared information between all three random variables
\[I(X_{\rm in};X_{\rm out};Y) = H(X_{\rm in})+H(X_{\rm out})+H(Y) \tag{2}\] \[- H(X_{\rm in},X_{\rm out})-H(X_{\rm in},Y)-H(X_{\rm out},Y)\] \[+ H(X_{\rm in},X_{\rm out},Y)\;.\]
Because information _must_ pass through the hidden layer, this "triplet information" must be equal to the information \(I(X_{\rm in};X_{\rm out})\) (see Figure 2).
Figure 1: Illustration of the composite network. For each of the ten numerals, an independent neural network (sub-network) is trained to recognize a single numeral among the others. Each of those ten networks has 784 input nodes to receive data from the \(28\times 28\) pixel-wide MNIST images. Each hidden layer has two nodes followed by a single node at the output layer (top panel). The composite network (bottom panel) is assembled from these ten subnetworks. Colors represent which weights in the combined weight matrix come from which corresponding sub-network. Weights shown as white remain 0.0. Consequently, the weight matrix connecting the hidden layer to the output layer is de facto sparse.
However, in general, not all of the nodes that comprise \(Y\) carry information. Let us imagine, for example, that the set of hidden nodes \(Y\) is composed of a set \(Y_{R}\) that shares information with \(X_{\text{in}}\) and \(X_{\text{out}}\), and a set \(Y_{0}\) that does not share this information, that is, \(I(X_{\text{in}};X_{\text{out}};Y_{0})=0\), with \(Y=Y_{R}\otimes Y_{0}\). We will discuss the algorithm to determine which neurons belong in the set of relay neurons \(Y_{R}\) further below.
The nodes that comprise \(Y_{0}\) could, for example, have zero-weight connections to the inputs, the outputs, or both. They are defined in such a way that none of the information \(I(X_{\text{in}};X_{\text{out}})\) (area outlined in yellow in Figure 3B) is shared with them.
We call the information that is relayed through the "critical" nodes that carry the information (the nodes in the set \(Y_{R}\)) the _relay information_. While we could define this information simply as the information shared between \(X_{\text{in}}\) that is also shared with the neurons identified to be in the set \(\mathbb{Y}_{\mathbb{R}}\) (see Section 2.4), it is important to deal with cases where neurons that are informationally inert (they do not read information from \(X_{\text{in}}\) nor write into \(X_{\text{out}}\)) could nevertheless copy the state of a neuron that does relay information. In the current construction, this does not appear to be very likely (or is at most a small effect). However, in other architectures (such as recurrent neural networks, networks with multiple layers, probabilistic transfer functions, or natural brains), such a phenomenon might be more common. As discussed in Appendix A, inert neurons that copy the state of relay neurons may be classified as
Figure 3: (**A**) Input/output structure of an ANN with inputs \(X_{\text{in}}\), outputs \(X_{\text{out}}\), and a hidden layer \(Y=Y_{R}\otimes Y_{0}\). The relay information passes from the inputs via the relay neurons \(Y_{R}\) to the output (green arrow); (**B**) the entropic Venn diagram for the four variables \(X_{\text{in}}\), \(X_{\text{out}}\), \(Y_{R}\), and \(Y_{0}\), with ellipses quantifying the entropy of each of the variables colored according to (**A**). The information shared between \(X_{\text{in}}\) and \(X_{\text{out}}\) is outlined in yellow. The relay information Equation (3) is indicated by the green area.
Figure 2: Entropy Venn diagram for the random variables \(X_{\text{in}}\), \(X_{\text{out}}\), and \(Y\). The shared information between all three variables equals the information \(I(X_{\text{in}};X_{\text{out}})\) because no information can flow from \(X_{\text{in}}\) to \(X_{\text{out}}\) without passing through \(Y\).
belonging to the set \(\mathbb{Y}_{0}\) (because removing them does not reduce the information carried by the set) yet show a nonvanishing \(I(X_{\text{in}};X_{\text{out}};Y_{0})\). In order to eliminate such contributions, we measure the relay information _conditional_ on the state of neurons in \(Y_{0}\) that is
\[I_{R}=H(X_{\text{in}};X_{\text{out}};Y_{R}|Y_{0})\;, \tag{3}\]
which is indicated in the entropic Venn diagram in Figure 3 as the area colored in green. An explicit expression for \(I_{R}\) can be obtained simply by writing Equation (2) for \(Y_{R}\) instead of \(Y\), and conditioning every term on \(Y_{0}\).
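From finite samples, \(I_{R}\) can be estimated with plug-in entropies over the discretized states by expanding Equation (2) for \(Y_{R}\) and conditioning every term on \(Y_{0}\). The sketch below assumes `labels` and `preds` are per-image class indices and `hidden` is the binarized hidden-state matrix obtained as in Section 2.6; all names are ours:

```
import numpy as np
from collections import Counter

def H(*cols):
    """Plug-in joint entropy (bits) of one or more aligned symbol sequences."""
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def relay_information(labels, preds, hidden, yr, y0):
    """Estimate I_R = I(X_in; X_out; Y_R | Y_0), Equation (3), from samples.

    labels, preds: per-image class indices; hidden: (n_images, n_nodes) binarized states;
    yr, y0: node-index lists defining the partition of the hidden layer.
    """
    to_sym = lambda idx: [tuple(row) for row in hidden[:, idx]]
    R, Z = to_sym(yr), to_sym(y0)
    # Conditional triple information expanded into joint entropies with Y_0
    return (H(labels, Z) + H(preds, Z) + H(R, Z)
            - H(labels, preds, Z) - H(labels, R, Z) - H(preds, R, Z)
            + H(labels, preds, R, Z) - H(Z))
```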
We can also define a _particular relay information_ (a relay information that pertains to any particular numeral class) by introducing the input-class random variable
\[Z=Z_{1}\otimes Z_{2}\otimes\dots\otimes Z_{10}\;. \tag{4}\]
Because we can decompose \(X_{\text{out}}\) in a similar manner
\[X_{\text{out}}=X_{\text{out}}^{(1)}\otimes X_{\text{out}}^{(2)}\otimes\dots \otimes X_{\text{out}}^{(10)}\;, \tag{5}\]
the relay information about numeral \(i\) can then be written as
\[I_{R}(i)=H(Z_{i};X_{\text{out}}^{(i)};Y_{R}|Y_{0})\;. \tag{6}\]
This is the information that the critical relay nodes \(Y_{R}\) are providing about numeral \(i\).
The removal of hidden neurons that do not contribute to information transfer suggests a simple algorithm that identifies such neurons: start with the full set and remove neurons one by one, and keep only those neurons that lead to a reduction of the information being relayed. However, this search is in reality more complex because neurons can carry redundant information. We discuss this algorithm in the following section.
### Shrinking Subset Aggregation Algorithm
In order to find the minimal subset of nodes \(\mathbb{Y}_{R}\) that carry all of the information flowing from \(X_{\text{in}}\) to \(X_{\text{out}}\), we should in principle test all possible bi-partitions of neurons in \(Y\). Unfortunately, the number of bi-partitions of a set is still exponential in the set size, so a complete enumeration can only be performed efficiently for small sets. However, it turns out that in most cases a greedy algorithm that removes nodes one by one will find the minimal set \(\mathbb{Y}_{R}\) (see Appendix A).
We start with the largest partition in which all nodes belong to the set \(\mathbb{Y}_{R}\), and none to \(\mathbb{Y}_{0}\). Now, all possible subsets in which a single node is moved from \(\mathbb{Y}_{R}\) to \(\mathbb{Y}_{0}\) can be tested. The subset with the highest information (Equation (3)) is retained, and the node with the lowest information contribution is permanently moved into subset \(\mathbb{Y}_{0}\). This process is repeated until only one node is left in \(\mathbb{Y}_{R}\). Over the course of this procedure (assuming perfect estimation of entropy from sample data), the set with the highest information for each set size should be identified (see Algorithm 1).
As discussed in Appendix A, this algorithm can sometimes fail to identify the correct minimal subset. First, estimates of entropies from finite ensembles can be inaccurate: these estimates are both noisy and biased (see, for example, (Paninski, 2003)), leading to the removal of the wrong node from the set \(\mathbb{Y}_{R}\). Second, information can be stored redundantly. Imagine a network of ten nodes, with three nodes forming the relay between inputs and outputs, while another set of two nodes is _redundant_ with those other three nodes. The greedy algorithm will work until all those five nodes are in the set \(\mathbb{Y}_{R}\). Removing any of those nodes will not drop the information content of the larger set, since the information is fully and redundantly contained in both the set of three and the set of two. Thus, all five nodes appear equally _unimportant_ to the algorithm, which can now not decide anymore which node to remove. It might remove one of the nodes in the set of three, leading to the set of two becoming the crucial computational module. Alternatively, removing a node from the smaller set promotes the larger set to become the crucial computational set. Either way, the algorithm has a chance to fail to find a unique set because there could be several.
One way to amend the algorithm would be to allow the process to dynamically branch. In case multiple nodes upon removal do not reduce the information retained in the remaining set \(\mathbb{Y}_{R}\), all possible branches can be pursued. Such a fix will significantly increase the computational time. However, as we do not expect the occurrence of redundant sets to be a prominent feature of many networks, we have not explored this alternative algorithm further.
```
Require: \(\mathbb{Y}=\{0,...,n\}\)
\(\mathbb{Y}_{0}\leftarrow\emptyset\)
\(\mathbb{Y}_{R}\leftarrow\mathbb{Y}\)
while \(\mathbb{Y}_{R}\neq\emptyset\) do
    for all \(a\in\mathbb{Y}_{R}\) do
        \(\mathbb{Y}_{R}^{\prime}\leftarrow\mathbb{Y}_{R}-a\)
        \(\mathbb{Y}_{0}^{\prime}\leftarrow\mathbb{Y}_{0}+a\)
        \(I_{a}\leftarrow I_{R}(X_{\mathrm{in}};X_{\mathrm{out}};\mathbb{Y}_{R}^{\prime}|\mathbb{Y}_{0}^{\prime})\)   (see Equation (3))
    end for
    \(a\leftarrow\operatorname*{arg\,max}_{a\in\mathbb{Y}_{R}}I_{a}\)   (the node contributing the least information)
    \(\mathbb{Y}_{R}\leftarrow\mathbb{Y}_{R}-a\)
    \(\mathbb{Y}_{0}\leftarrow\mathbb{Y}_{0}+a\)
end while
```
**Algorithm 1** Shrinking Subset Aggregation Algorithm.
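A direct transcription of Algorithm 1 in Python, reusing the `relay_information` estimator sketched in Section 2.3 (the branching extension for redundant sets discussed above is not included):

```
def shrinking_subset_aggregation(labels, preds, hidden):
    """Greedy search of Algorithm 1: repeatedly drop the least informative hidden node."""
    yr = list(range(hidden.shape[1]))
    y0, trace = [], []
    while yr:
        # I_R retained by Y_R \ {a}, conditioned on Y_0 + {a}, for every candidate a
        scores = {a: relay_information(labels, preds, hidden,
                                       [n for n in yr if n != a], y0 + [a])
                  for a in yr}
        a = max(scores, key=scores.get)      # removing this node costs the least information
        trace.append((a, scores[a]))
        yr.remove(a)
        y0.append(a)
    return trace    # removal order; the best set of each size can be read off this trace
```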
### Knockout Analysis
To test the informational relevance of individual nodes of the hidden layer, we can perform "knockout" experiments. While a knockout in a biological context is defined as the disabling of a component, it is less obvious how to perform such an operation in the context of a neural network. One option would be to replace a neuron's activation level by a random number, which still leaves the freedom to choose a distribution and a range. Furthermore, these random values still propagate through the network, which implies that such a knocked-out neuron is not disabled. Keeping an activation level constant (independent of the inputs) can also have undesirable effects. Imagine that a neuron's activation level is constant, say \(1.0\) or \(-1.0\), independently of the input images. This value would be included in all subsequent aggregation functions affecting the final output of the network. Attempting to knock out this node by forcing it to \(-1.0\) or \(1.0\) can now have two different effects. If the node is already a constant \(1.0\), knocking it out by forcing it to be a constant \(1.0\) would suggest that this node has no function, since such a knockout would not change any output. Setting it to \(-1.0\) might have significant effects, but would on the other hand leave a node that should be at \(-1.0\) unaffected. Here, to "knock out" a node (to render it non-functional) in the hidden layer, we force it to take on a value of \(0.0\) during the forward pass. At the same time, all weights of the first layer leading to such a node are set to \(0.0\), as are all weights of the second layer that are affected by that node. Alternatively, the values of the nodes to be knocked out in the hidden layer could have been forced to \(0.0\) when the forward pass reaches the hidden layer. These methods are equivalent. While this form of knockout can also have undesirable consequences, the effect is likely closest to an actual removal of the node by eliminating it from the network, and shrinking the weight matrices accordingly.
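For the two-layer networks used here, this knockout can be implemented by zeroing the corresponding rows and columns of the two weight matrices on a copy of the trained model; attribute names follow the `DigitNet` sketch in Section 2.1:

```
import copy
import torch

@torch.no_grad()
def knockout(model, nodes):
    """Return a copy of the model with the given hidden nodes disabled."""
    m = copy.deepcopy(model)
    for n in nodes:
        m.hidden.weight[n].zero_()   # incoming weights to the node
        m.hidden.bias[n] = 0.0
        m.out.weight[:, n] = 0.0     # outgoing weights from the node
    return m
```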
### Coarse-Graining Continuous Variables
The computations performed by the neural network use continuous inputs, and due to the tanh-like threshold function, the activation levels of neurons in the hidden layer are confined to the interval \([-1,1]\). While entropies can be computed on continuous variables (so-called differential entropies, see (Shannon, 1948)), we use discrete entropies here, which require a discretization of the continuous values. In particular, we are coarse-graining those entropies by mapping all continuous values to the binary categories \(0\) and \(1\). We previously used the median value of a neuron's excitation level as the threshold for the bin (Bohm _et al._, 2022). Instead, here the hidden-state values are clustered using a \(k\)-means clustering algorithm with \(k=2\). Using the median for coarse-graining ensures that the resulting distribution has maximal entropy because each bin is guaranteed to receive half of the values. However, we found a maximum-entropy assumption for a neuron to be inappropriate in most cases. Using a \(k\)-means clustering algorithm to distribute values into bins gives a better approximation of the relative entropy between states.
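This coarse-graining can be sketched as a per-neuron two-means clustering of the recorded activation levels; scikit-learn's `KMeans` is one possible choice, and relabeling so that the cluster with the larger centroid maps to state 1 is our own convention:

```
import numpy as np
from sklearn.cluster import KMeans

def binarize_hidden_states(activations):
    """Coarse-grain each hidden neuron's activations into 2 states via k-means (k = 2)."""
    binary = np.zeros_like(activations, dtype=int)
    for j in range(activations.shape[1]):                      # one clustering per neuron
        km = KMeans(n_clusters=2, n_init=10).fit(activations[:, j:j + 1])
        # Relabel so that state 1 corresponds to the cluster with the larger centroid
        order = np.argsort(km.cluster_centers_.ravel())
        binary[:, j] = order.argsort()[km.labels_]
    return binary
```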
Coarse-graining also reduces the size of the state space that is being sampled. Using \(k=2\) to coarse-grain the hidden states implies that there are at most \(k^{N}\) possible states, which (with \(N=20\)) is a state space that is in danger of being significantly undersampled with a limited number of images (MNIST has at most 70,000 if test and training data are combined). As the entropy of this hidden space is \(N\log_{2}k\) bits, an input sample with \(\log_{2}(60,000)\approx 15.87\) bits would be insufficient to adequately sample the random variables \(X_{\text{in}}\), \(X_{\text{out}}\), \(Y_{R}\), or \(Y_{0}\) even for \(k=2\). However, as discussed in Appendix B, because the input entropy is much smaller (\(\log_{2}10\approx 3.32\) bits), estimation errors are small, and the likelihood that nodes are accidentally removed from \(Y_{R}\) due to poor sampling is small.
### Aggregated Relay Information
The greedy algorithm identifies a sequence of sets of nodes that continuously shrink because it is always the node contributing the least to \(I_{R}\) that is removed next. Consequently, every time a node is removed, we can also quantify the loss of information for that particular node \(n\) as the difference in \(I_{R}\) between the larger set containing the node (\(\mathbb{Y}_{R}\cup n\)) and the smaller set without it (\(\mathbb{Y}_{R}\)):
\[\Delta I(n)=I_{R}(\mathbb{Y}_{R}\cup n)-I_{R}(\mathbb{Y}_{R})\;. \tag{7}\]
Interestingly, when this process arrives at a set of nodes that taken together is essential in relaying the information, it can happen that the removal of _any_ of the nodes of the set causes the remaining neurons to have \(I_{R}=0\). Information in such an essential set can be seen to be _encrypted_, to the point where no node can be removed without losing all of the information (Bohm _et al._, 2022). However, this creates a situation in which the last nodes, when removed, appear to not contribute any information, even though they are essential. Thus, we quantify the amount that each node contributes to the relay information in terms of the sum of all \(\Delta I(n)\) over all previously removed nodes as
\[I_{A}(n)=\sum_{i=1}^{n}\Delta I(i)\;. \tag{8}\]
Using this information loss, we can also quantify the _essentiality_ of a neuron as the information lost when node \(n\) is removed from the remaining set of nodes. The essentiality of a single node can be computed using Equation (7), where \(n\) is the node being removed from the full set of nodes. Thus, if a neuron is meaningless or redundant, its essentiality \(\Delta I(n)\) will vanish.
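The bookkeeping behind Equations (7) and (8) is a simple running sum, as the following sketch illustrates; the removal order and the per-step losses are made-up numbers standing in for the output of the greedy algorithm.

```python
import numpy as np

removal_order = [4, 1, 7, 0, 3, 6, 2, 5]   # nodes in the order the greedy algorithm removed them
delta_i = np.array([0.0, 0.01, 0.02, 0.05, 0.1, 0.3, 0.6, 0.9])  # Delta I(n) at each step, Equation (7)

i_aggregated = np.cumsum(delta_i)          # aggregated relay information, Equation (8)
for node, ia in zip(removal_order, i_aggregated):
    print(f"node {node}: I_A = {ia:.2f}")
```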
## 3 Results
### Identification of Information Relays
To determine if the proposed metric and optimization method correctly identifies the nodes that relay information from the outputs to the inputs, we trained two kinds of networks. A standard ANN with 20 hidden nodes was trained to correctly identify all ten numerals. As a control, ten sub-networks with two hidden nodes were trained on a single numeral each. From the ten smaller networks, a full network was composed (see Figure 1) that can perform the same task as the network trained on all numerals at the same time.
Figure 4 shows the mean accuracy of recognizing each of the different digits as a function of training epoch, for the full as well as the composite network. Note that the full network only needed 43 epochs to reach 96% accuracy, while the training of the smaller models took significantly longer. The full model was trained until it reached an accuracy of 0.96; the smaller models were trained until they reached an accuracy of 0.98. The smaller networks could easily be trained to achieve this higher 98% accuracy, while training of the full network is usually limited to 96%. In order to observe networks performing as optimally as possible, and to maximize the information between inputs and outputs, networks were trained until they reached those practical limits (Chapman _et al._, 2013).
Because in the composite network the two hidden neurons of each sub-network are guaranteed to serve as relays for the relevant information, we can use this network as a positive control to test whether our algorithm correctly detects relay information, and whether neurons carrying non-overlapping information (each of the hidden neuron sets only carries the information about one specific numeral) are either more or less vulnerable to knockout. This does not imply that the hidden neurons that pertain to a particular numeral cannot relay information about another numeral. After all, hidden nodes trained to recognize the numeral 1, for example, might still correlate with nodes trained to recognize numeral 7 due to the similarity between those images.
In order to test whether the greedy algorithm finds the correct minimal informative subset in the full model, we performed an exhaustive search of all \(2^{N}-1\) (with \(N=20\)) bi-partitions of the hidden nodes \(Y\) to find the minimal set \(Y_{R}\). We then compared the result of the exhaustive search with the candidate set resulting from the shrinking subset aggregation algorithm. This un-branched version of the algorithm only needs \(\frac{N(N+1)}{2}\) computations, reducing the computational complexity from exponential to quadratic.
Figure 5 shows that different partitions relay very different amounts of information about the particular output. In general, the larger the set \(\mathbb{Y}_{R}\), the more information it represents, but we also see that the highest information found within sets of a particular size is always higher than the maximal information found amongst all sets that are smaller (as proved in Appendix A, with the caveat of redundant sets). The shrinking subset aggregation algorithm exploits this observation of smaller sets always having less information than their larger superset and should thus be capable of identifying the subsets \(\mathbb{Y}_{R}\) (and consequently also \(\mathbb{Y}_{0}\)) with the highest information content for all sets of the same size, but without the complete enumeration of all possible sets. We find that fewer than \(0.9\%\) of all subsets have equal
Figure 4: Training accuracy as a function of training epoch. (**A**) full model (top panel). The accuracy to predict each numeral is indicated with lines of different colors (see legend). Accuracy on the training set is shown as solid lines while accuracy on the test is indicated by dotted lines. The average performance classifying all numbers is shown in black; (**B**) accuracy of each of the ten sub-network models used to create the composite model as a function of training epoch. Colors indicate the accuracy for detecting an individual numeral. The endpoint of the training is highlighted with a dot; the same time point but using test data is indicated by an x. Training other networks had marginally different outcomes (data not shown).
or more information than the set identified by the greedy algorithm. As discussed earlier, the failure of the greedy algorithm to correctly identify the most informative set can be attributed to noise in the entropy estimate due to the finite sample size, as well as to the presence of redundant sets with identical information content.
We now investigate whether the greedy algorithm properly identifies the relevant subsets that are critical in relaying the information from inputs to outputs, that is, whether the information they carry is indeed used to predict the depicted numeral. We define the _importance_ of a node as the sum of all information loss that this node conveyed before it was removed (aggregated relay information, see Methods). We also define the _essentiality_ of node \(n\) as the amount of relay information lost when moving that node from the minimal set \(Y_{R}\) to \(Y_{0}\) (see Equation (7)). Because this measure of essentiality only considers the effect of removing single nodes, it can be inaccurate if a node is essential only when another node (for example a redundant one) is also removed. However, since the relays in the composite network are so small (two nodes), removing any one of them causes a large drop in information. This can be seen in Figure 6B, where nodes identified as relays are also highly essential.
Figure 6A shows that both the importance analysis (via the aggregated particular relay information) and the essentiality analysis correctly identify the nodes that relay the information from inputs to outputs in the composite model. Aside from the sampling noise, each pair of hidden nodes that was trained to be a relay is correctly identified as highly informative (see Figure 6).
Figure 5: Particular relay information about each numeral for all possible bi-partitions (black dots) as a function of the set sizes \(|\mathbb{Y}_{R}|\). The top ten panels show particular relay information for the full model, while the bottom ten panels show the same for the composite model. Each panel shows the relay information about a different numeral in the MNIST task, indicated by the index of the panel. The red line corresponds to the set identified by the shrinking subset aggregation algorithm. Fewer than \(0.9\%\) of all subsets have a higher information content than the one identified by the algorithm.
Training the full network via backpropagation is not expected to create modules of hidden nodes that each only relay information about one specific numeral. Indeed, we find information to be relayed in an unstructured fashion in this network (see Figure 7A). Interestingly, nodes that are positively identified as relays are not necessarily essential, suggesting that many nodes contain redundant information (see Figure 7B). This further supports our previous findings that backpropagation smears or distributes function across all nodes, rather than isolating functions into structured modules (Hintze _et al._, 2018; Kirkpatrick and Hintze, 2019). The results from Figure 7B also suggest that using the essentiality of single nodes does not properly identify the informational structure of the network.
### Information Relays Are Critical for the Function of the Neural Network
To verify that the sets \(\mathbb{Y}_{R}\) with high information are indeed relaying information from the inputs to the outputs, we can study the effect of knockouts on those nodes. Because we expect a correlation between knockout effect size (the sensitivity of the node to perturbation) and the size of the informative set, care must be taken when interpreting the correlation between relay information and knockout effect size (sensitivity). Smaller sets can relay less information and have a potentially smaller effect when knocked out compared to larger sets. Thus, set size confounds the correlation between knockout effect and the amount of information relayed by the same set. We performed a multiple linear regression to test how much the knockout effect (treated as the dependent variable) is explained either by the set size or the amount of information relayed (independent variable). Figure 8 shows the regression coefficients of that analysis.
Figure 6: Aggregated relay information and essentiality in the composite model. (**A**) aggregated particular information loss \(\Delta I_{R}(n)\) (Equation (8)) for all 20 nodes in the hidden layer (\(x\)-axis) and the ten different numeral classes (\(y\)-axis) shown in grayscale (brighter shades indicate higher loss of information); (**B**) node essentiality (Equation (7)) for each hidden neuron and numeral. Bright squares indicate essential nodes, while black squares would indicate redundant or meaningless nodes. The red dot (node 16, numeral 1) points to a neuron that appears to relay information (**A**) but is entirely redundant and non-essential (red dot in (**B**)).
Relay information explains at least 75% (\(r^{2}>0.75\)) of the variance of the knockout effect for the composite model and at least 45% (\(r^{2}>0.45\)) of the variance of the knockout effect for the full model. We can thus conclude that, when assuming a linear relationship between either set size or relay information and knockout effect, the influence of relay information on knockout effect is significantly stronger than the influence of set size (\(F>1.5\times 10^{5}\) in an F-test).
Figure 8 shows that the knockout effect is better explained by the amount of particular relay information about that node than the set size \(|\mathbb{Y}_{R}|\). This shows also that, as expected, set size is indeed confounding this relation. We further find that in the composite network the relationship between particular relay information and knockout effect is stronger compared to the full network. The weaker relation between knockout effect and relay information is most likely due to the information being distributed more broadly over many nodes, compared to the composite model where the information is forced to reside in only two relay nodes.
## 4 Discussion
We introduced a new information-theoretic concept that we believe will prove to be useful in the analysis of information flow in natural and artificial brains: the "relay information". Relay information quantifies the amount of information within a set of nodes inside a communication channel that passes through this set, and not through other nodes within the same channel. The particular relay information can be
Figure 8: Regression coefficients of the multiple linear regression analysis between knockout effect \(K\) and set size \(|\mathbb{Y}_{R}|\) (red crosses), and knockout effect \(K\) and particular relay information \(I_{R}(i)\) (black crosses), as a function of numeral \(i\). Lines are meant to guide the eye. (**A**) full model; (**B**) composite model.
Figure 7: Aggregated relay information and essentiality in the full model. (**A**) aggregated relay information for each node and every numeral class for the full network; (**B**) essentiality. Methods, axes, and grayscales as in Figure 6.
used to identify which nodes in a hidden layer of a neural network are responsible for what particular classification function. We constructed a greedy algorithm that identifies the minimal informative set of nodes that carry the particular relay information, and tested it on the MNIST hand-written numeral classification task using a regular neural network, as well as a control network in which we know--by construction--the function of each hidden node (see Figure 1). We further showed via a knockout analysis that the sets of neurons identified as carrying the relay information are indeed functional because knocking out those nodes abrogates the classification accuracy for that particular numeral.
The identification of information relays, and thus discovering the computational modules that relay information, can only be a first step in a more comprehensive analysis of brain function. Here, we focused on testing the method, and showed using a positive control (the composite network) that the identified relay sets are indeed correlated to function. We also found that the full network, trained on all image classes at the same time, does not display a well-differentiated modular structure. Instead, information is distributed haphazardly across the network, and if we were to identify functional modules, they would be highly overlapping. In other words, the ANNs that we trained here do not seem to have a modular structure in information space.
Because a defined, modular, informational structure appears to be central to understanding a number of key properties of neural networks (such as catastrophic forgetting (McCloskey and Cohen, 1989; Kirkpatrick _et al._, 2017; Kemker _et al._, 2018) or learning (Ellefsen _et al._, 2015)), understanding what design decisions give rise to more (or less) modular networks is an important first step. We are now better equipped to study the effect of information smearing and modularity on fooling, generalization, catastrophic forgetting, or latent space variables, and look forward to exploring these topics in the future.
The concepts and methods we introduced are general and can be applied to any task where a network (be it neural or genetic) performs its function by taking inputs, computing outputs, and then using those outputs for prediction. In the case of a natural neural network, however, because it is continuously processing, an additional temporal binning has to be performed. This, and measuring the states of _all_ neurons in the first place, will make applying the concept of relay information challenging, to say the least. In the future, it would be interesting to study if this method also applies to, for example, time series classification, recurrent neural networks, convolutional layers, or even generative tasks.
Another concern is the scaling of the computational complexity of the algorithm to detect information relays with the number of nodes in the hidden layer. Currently, using the greedy algorithm and all 60,000 training images from the MNIST data set, and applying it to a full network with 20 hidden nodes, takes about 30 s on a 3.5 GHz desktop computer (for all 10 numerals together). Performing the same analysis but computing the exact critical set (testing all \(2^{N}\) sets) takes about 24 h on the same hardware. Because the greedy algorithm has a computational complexity of \(O(N(N-1))\) while the full enumeration scales as \(O(2^{N})\), we can conjecture that a network of 1000 nodes can be analyzed by the greedy algorithm within the same 24 h needed to exhaustively analyze a network of size \(N=20\).
In this work, we only studied one particular optimizer to train the neural network (Adam), one loss function (mean squared error), and the threshold functions hyperbolic tangent and argmax. We conjecture that our method applies to other variants of deep learning. However, we also conjecture that the way in which information is distributed across the network will depend on the method and parameters of the optimization procedure, and we will test this dependence in future work. Finally, by testing different coarse-grainings of neuronal firing levels, the method should be able to identify relay neurons and thus functional modules in biological brains, helping to study information flow in functioning brains.
In this work, we found that the greedy algorithm correctly identifies the minimal informative set in almost all cases. However, we expect that the failure rate depends on the task being studied, the data set size, as well as the amount of redundancy among neurons. In networks with significant redundancy, we can imagine that the algorithm fails significantly more often, in which case a branching algorithm may have to be designed, which would carry a significant complexity cost.
**Author contributions**
A.H. implemented all computational analysis and methods, A.H. and C.A. designed the experiments and devised the new methods, and A.H. and C.A. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.
**Funding information**
This research was supported by the Uppsala Multidisciplinary Center for Advanced Computational Science SNIC 2020-15-48, and the National Science Foundation No. DBI-0939454 BEACON Center for the Study of Evolution in Action.
**Data availability**
Code for the computational experiments and the data analysis can be found at
[https://github.com/Hintzelab/Entropy-Detecting-Information-Relays-in-Deep-Neural-Networks](https://github.com/Hintzelab/Entropy-Detecting-Information-Relays-in-Deep-Neural-Networks)
DOI:10.5281/zenodo.7660142
**Acknowledgements**
We thank Clifford Bohm for extensive discussions. This research was supported by Uppsala Multidisciplinary Center for Advanced Computational Science SNIC 2020-15-48, and the National Science Foundation No. DBI-0939454 BEACON Center for the Study of Evolution in Action.
## Appendix A Proof of non-deceptive removal of nodes
Here, we show that, as long as information is not redundantly encoded, it is possible to remove nodes one by one in a greedy fashion so that the minimal information reduction by single-node removal is not deceptive. In a deceptive removal, removing a pair of nodes reduces the information by a smaller amount than each of the individuals would have removed.
Say that the information to predict feature \(X_{\rm out}\) is stored in \(n\) variables \(Y_{1}\cdots Y_{n}\). This information is
\[I(X_{\rm out};Y_{1}\cdots Y_{n})\;. \tag{9}\]
The general rule to study node removal is the identity
\[I(X_{\rm out};Y_{1}\cdots Y_{n})=I(X_{\rm out};Y_{1}\cdots Y_{n-1})+H(X_{\rm out};Y_{n}|Y_{1}\cdots Y_{n-1})\;. \tag{10}\]
We can easily convince ourselves of this, by imagining \(Y=Y_{1}\cdots Y_{n-1}\) and \(Y_{n}=Z\); then, this equation is just
\[I(X;YZ)=I(X;Y)+H(X;Z|Y)\;, \tag{11}\]
which is easily seen by writing the Venn diagram between \(X\), \(Y\), and \(Z\).
Let us study the simplest case of three nodes. The three possible information reductions are
\[\Delta I_{1} = H(X_{\rm out};Y_{1}|Y_{2}Y_{3})\;, \tag{12}\] \[\Delta I_{2} = H(X_{\rm out};Y_{2}|Y_{1}Y_{3})\;, \tag{13}\] \[\Delta I_{3} = H(X_{\rm out};Y_{3}|Y_{1}Y_{2})\;. \tag{14}\]
We will now prove that, if \(\Delta I_{3}<\Delta I_{1}\) and _at the same time_\(\Delta I_{3}<\Delta I_{2}\) (implying that node 3 should be removed first), then it is _not_ possible that the information reduction due to the removal of nodes 1 and 2 at the same time is smaller than the information loss coming from the removal of node 3 (i.e., that \(\Delta I_{12}<\Delta I_{3}\) is impossible). If the latter was possible, we should have removed nodes 1 and 2 at the same time instead of node 3 (making the removal of node 3 deceptive).
Let us first write down \(\Delta I_{12}\). Since
\[I(X_{\rm out};Y_{1}Y_{2}Y_{3})=I(X_{\rm out};Y_{3})+H(X_{\rm out};Y_{1}Y_{2}|Y_{3}), \tag{15}\]
we know that
\[\Delta I_{12}=H(X_{\rm out};Y_{1}Y_{2}|Y_{3})\;. \tag{16}\]
We can rewrite this as
\[H(X_{\rm out};Y_{1}Y_{2}|Y_{3})=H(X_{\rm out};Y_{2}|Y_{3})+H(X _{\rm out};Y_{1}|Y_{2}Y_{3})\;. \tag{17}\]
This is the same rule as (11), just conditioned on a variable (all information-theoretic equalities remain true if the left-hand side and the right-hand side are conditioned on the same variable). Equation (17) implies that
\[\Delta I_{12}=H(X_{\text{out}};Y_{2}|Y_{3})+\Delta I_{1}\;. \tag{18}\]
Now since \(H(X_{\text{out}};Y_{2}|Y_{3})\geq 0\), we know that \(\Delta I_{12}\geq\Delta I_{1}\). However, since \(\Delta I_{1}>\Delta I_{3}\) by assumption, it follows immediately that
\[\Delta I_{12}>\Delta I_{3} \tag{19}\]
contradicting the claim that it is possible that \(\Delta I_{12}<\Delta I_{3}\).
Clearly, the same argument will apply if we ask whether larger groups are removed first: they can never remove less information than the smallest information removed by a single node in that group.
If information is redundantly encoded, the greedy algorithm can fail. Suppose two nodes are copies of each other \(Y_{1}=Y_{2}\), making them perfectly correlated: they carry the same exact information about \(X_{\text{out}}\). In that case, we can remove any of the two nodes, and it will not change the information, that is, \(\Delta I_{1}=\Delta I_{2}=0\):
\[I(X_{\text{out}};Y_{1}Y_{2}Y_{3})=I(X_{\text{out}};Y_{1}Y_{3})=I(X_{\text{out} };Y_{2}Y_{3})\;. \tag{20}\]
However, once we removed one (say we removed \(Y_{1}\)), then removing \(Y_{2}\) potentially removes information, as
\[I(X_{\text{out}};Y_{2}Y_{3})=I(X_{\text{out}};Y_{3})+H(X_{\text{ out}};Y_{2}|Y_{3})\;. \tag{21}\]
Now that \(Y_{1}\) is removed, the redundancy is gone, and \(H(X_{\text{out}};Y_{2}|Y_{3})\) could be large even though \(\Delta I_{1}=H(X_{\text{out}};Y_{1}|Y_{2}Y_{3})=0\). This failure of the greedy algorithm is at the origin of the discrepancies between the true minimal set of nodes (obtained by exhaustive enumeration) and the set identified by the greedy algorithm, but the failure is clearly rare.
## Appendix B Sampling Large State Spaces
In this work, we identify relay information by calculating shared conditional entropies such as those shown in Equation (1). Calculating those entropies, in turn, relies on estimating the entropy of the hidden layer neurons \(H(Y)\), which can become cumbersome if the number of neurons in the hidden layer is large. To calculate a quantity such as \(I(X_{\text{in}};X_{\text{out}};Y)\) (the center of the diagram in Figure 2), we must estimate probabilities such as \(p(y)\), the probability to find the hidden layer in any of its \(2^{20}\) states, assuming a binary state for each node after binning. To obtain an accurate maximum-likelihood estimate of \(p(y)\), the sample size has to be large. In order to estimate entropies, for example, a good rule of thumb is that the finite sample size bias (Basharin, 1959)
\[\Delta H=\frac{M-1}{2N\ln k}\ll 1\;, \tag{22}\]
where \(M\) is the number of states in \(Y\), \(N\) is the sample size, and \(k\) is the dimension of the alphabet (\(k=2\) for a binary random variable). Evidently, for \(M=2^{20}\) and \(N=60,000\), this condition is not fulfilled. However, because the output entropy \(H(X_{\text{out}})\) is bounded by \(\log_{2}(10)\approx 3.32\) bits, a trained ANN should never end up with a hidden layer entropy near \(\log_{2}M\), but rather have an entropy comparable to that of the output state. In this way, the channel loss \(H(Y|X_{\text{out}})\) is minimized.
To test whether hidden layer entropy estimates are adequately sampled, we measured the entropy of the hidden layer when sampling over the entire image space. If \(Y\) were uniformly distributed across the sampled images, we would expect \(H(Y)\approx\log_{2}(N)\approx 15.87\). Instead, we found the actual entropy \(H_{\text{act}}(Y)\approx 3.319\), indicating that the hidden layer probability distribution is highly skewed. Accordingly, the effective number of states \(M_{\text{eff}}=2^{H_{\text{act}}(Y)}\approx 9.98\), which easily satisfies condition (22).
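The following snippet reproduces this back-of-the-envelope check numerically, plugging the naive and the effective state counts into Equation (22); the numbers are those quoted in the text.

```python
import numpy as np

def sample_bias(M, N, k=2):
    """Finite sample size bias of Equation (22)."""
    return (M - 1) / (2 * N * np.log(k))

N = 60_000
print(sample_bias(M=2**20, N=N))       # naive state count: bias is far above 1
print(sample_bias(M=2**3.319, N=N))    # effective state count: bias is far below 1
```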
We further took the original (continuous-value) hidden states derived from the 60,000 training images and clustered them using \(k\)-means clustering. We find the optimal number of clusters, identified using the elbow method, to be 10 or close to 10 (see Figure 9A), which of course coincides with the number of image classes.
We also performed a principal component analysis (PCA) on the original (not binned) hidden states, and plot the results for the two first components, color-coding each hidden state by its affiliated class. Again, for both full and composite networks, we find the hidden states to cluster according to their assigned class (see Figure 9B,C). All of these measurements confirm that the effective space of the hidden states is significantly smaller than \(2^{20}\) because training causes the hidden states to converge to the attractors needed to classify the images into the ten classes, giving rise to \(H_{\text{act}}(Y)\approx\log_{2}(10)\).
|
2307.02250 | Stress-testing Road Networks and Access to Medical Care | This research studies how populations depend on road networks for access to
health care during crises or natural disasters. So far, most researchers have
studied the accessibility of the whole network or the cost of network
disruptions in general, rather than the accessibility of
specific priority destinations like hospitals. Even short delays in accessing
healthcare can have significant adverse consequences. We carry out a
comprehensive stress test of the entire Austrian road network from this
perspective. We simplify the whole network into one consisting of what we call
accessibility corridors, deleting single corridors to evaluate the change in
accessibility of populations to healthcare. The data created by our stress test
was used to generate an importance ranking of the corridors. The findings
suggest that certain road segments and corridors are orders of magnitude more
important in terms of access to hospitals than the typical one. Our method also
highlights vulnerable municipalities and hospitals who may experience demand
surges as populations are cut off from their usual nearest hospitals. Even
though the skewed importance of some corridors highlights vulnerabilities, they
provide policymakers with a clear agenda. | Hannah Schuster, Axel Polleres, Johannes Wachs | 2023-07-05T12:45:07Z | http://arxiv.org/abs/2307.02250v1 | # Stress-testing Road Networks and Access to Medical Care
###### Abstract
This research studies how populations depend on road networks for access to health care during crises or natural disasters. So far, most researchers rather studied the accessibility of the whole network or the cost of network disruptions in general, rather than as a function of the accessibility of specific priority destinations like hospitals. Even short delays in accessing healthcare can have significant adverse consequences. We carry out a comprehensive stress test of the entire Austrian road network from this perspective. We simplify the whole network into one consisting of what we call accessibility corridors, deleting single corridors to evaluate the change in accessibility of populations to healthcare. The data created by our stress test was used to generate an importance ranking of the corridors. The findings suggest that certain road segments and corridors are orders of magnitude more important in terms of access to hospitals than the typical one. Our method also highlights vulnerable municipalities and hospitals who may experience demand surges as populations are cut off from their usual nearest hospitals. Even though the skewed importance of some corridors highlights vulnerabilities, they provide policymakers with a clear agenda.
_Keywords:_ Road networks, Health Care, Stress-test, Simulation
## 1 Introduction
Our dependence on road networks to access emergency medical care increases in two important ways during crises. Crisis events, such as natural disasters, can create increased demands for access to medical services by causing injuries directly. Additionally, these can also disrupt the functionality of these networks themselves, increasing the time it takes to get to a hospital. In acute cases, we know that delays can cause markedly worse medical outcomes for patients (Murata and Matsuda, 2013; Jena et al., 2017). As climate change increases the frequency of severe weather events (Mukherji et al., 2023), which can extensively disrupt road networks, we need to better understand not only the abstract resilience of infrastructure such as road networks (Antoniou and Tsompa, 2008) but also the specific weaknesses of these systems in terms of access to medical care.
Indeed, we can expect such weaknesses in road networks: especially in geographically rugged terrain, transportation infrastructure is expensive and highly constrained by physical barriers (Rodrigue et al., 2020). Road networks are rightfully built with cost efficiency as a priority alongside robustness to periodic maintenance and disturbances. At the same time, these growing networks, like other complex systems, are known to be highly vulnerable to unanticipated shocks (Doyle, 2002). Much like the banking sector, in which an unexpected financial insolvency can cause cascades of bankruptcies (Battiston et al., 2016; Diem et al., 2020), local problems in road networks impact transportation through the whole system (Hackl, 2019; Goldbeck et al., 2019). Hospitals also face critical "tipping-points" - above a certain capacity they deliver significantly worse care (Kuntz et al., 2015). Likewise, macro-scale medical care systems can also break down in the face of unexpected shocks (Lo Sardo et al., 2019; Kaleta et al., 2022). Policymakers in all three domains: finance, transport infrastructure, and medical care are increasingly turning to stress tests to analyze their systems and pinpoint weaknesses.
Yet to date, little work has been done on how stresses and problems in one system can impact provision of services in another. Whatever the cause of a disruption, the complexity of these networks makes it difficult to predict the effect of one disruption on the functioning of the whole system. Given the significant potential coupling of risks in road transportation networks and access to and provision of medical care, we propose to develop a suitable stress test to examine how road network disruptions impact access to medical care. The aim of such a stress test is to highlight
critical road segments or corridors that provide access to medical care, population centers at risk of being cut off from care, and hospitals that may see sudden surges in demand during crises.
We implement this stress test by applying simulation analysis to data on road and healthcare infrastructure in Austria. Simulation analysis has proven an effective tool in modeling relative risks and the importance of components of complex systems (Liu et al., 2022; Hackl et al., 2018; van Ginkel et al., 2022). Elements of these systems highlighted by stress tests are natural candidates for resources and attention from planners.
Our stress test of road networks and their provision of access to medical care presents three novel aspects. First, we develop a measure to quantify access from population centers to medical care. Most quantitative work on the resilience of transportation systems to date focuses on the impact of disruptions by determining the cost of disruption or by measuring the change of accessibility of the whole system during specific scenarios. However, during a disaster, changes in global accessibility or costs may be of minor concern compared to the specifics of which roads are used in the provision of essential services like healthcare or fire protection. We know that small differences in travel time to emergency care can have a significant impact on mortality and other patient outcomes (Murata and Matsuda, 2013). To this end, we modify an existing measure of accessibility in road networks (Martin et al., 2021) in order to classify the importance of links in a network relating to the accessibility of municipalities to the closest hospitals.
A second challenge in stress testing the resilience of road networks at the scale of a whole country is their size, which can make an exhaustive calculation and comparison of outages and their consequences intractable. We, therefore, introduce and stress test a coarse-grained simplification of the road system: we merge road segments connecting municipalities to create a backbone representation of the Austrian road network. We can stress test this network more extensively and show that derived insights can be transferred to the more realistic fine-grained system.
A third contribution of our approach is that we quantify the impact of our stress tests along three orthogonal dimensions. We measure how road network disruptions limit people's access to hospitals, suggesting vulnerability of population centers. We quantify road importance by observing the effects of their disruption. Finally, we measure the vulnerability of hospitals to sudden surges in the population they are the first point of care for. Thus our framework provides multi-level insight. We note that our approach can be applied and generalised to both other countries (depending on data availability) or to the provision of other services in crisis situations (for instance, firefighting facilities).
In the remainder of this paper, we first review the related literature on road network resilience and access to emergency medical care (Section 2). We then introduce the case of the Austrian road network and relevant datasets, and describe the methods and measures used to study (Section 3). We present and interpret the results of our stress tests in Section 4. Finally, we conclude by discussing our method, including its limitations and avenues for future work in Section 5.
## 2 Literature Review
During crisis events impacting entire regions, the accessibility of medical care is crucial, given its potential to influence patient outcomes. Indeed, there is ample evidence of a direct effect of the travel distance to a hospital on the mortality of patients. A study using a national database in Japan concluded that the ambulance distance to hospitals significantly correlates with macro-regional mortality risks for particular acute diseases such as acute myocardial infarction and brain infarction (Murata and Matsuda, 2013). Consequently, planned road closures and infrastructure disruptions also result in worse mortality outcomes: for instance, previous work found a sharp increase in acute myocardial infarction or cardiac arrest hospitalizations among Medicare beneficiaries in 11 U.S. cities during major marathons (Jena et al., 2017).
In summary, the accessibility of emergency medical care depends crucially on road networks, which are also especially vulnerable to environmental perturbations such as extreme weather events (Bil et al., 2015). Therefore, as numerous studies have shown that events like heatwaves, heavy rainfall, droughts, and tropical weather cyclones have become more frequent and intense globally since the 1950s (Mukherji et al., 2023), the vulnerability of road networks is likely to increase. In addition, the problem of transportation networks and accessibility is especially salient in geographic areas with rugged terrain (Rodrigue et al., 2020), as is the case, for instance, in alpine regions of Austria, since such conditions limit possible cost-efficient redundancies that would make such networks more robust.
More generally, growing systems like road networks are known to be vulnerable to unanticipated shocks (Doyle, 2002). Indeed, there is a whole literature analyzing the resilience of networks that tend to function well in "normal" times but can fail catastrophically during unexpected disruptions. Researchers have begun to stress test these systems
to analyze their weak points, where simulation analysis has demonstrated its efficacy in modeling relative risks and the importance of components of complex systems (Liu et al., 2022; Mattsson and Jenelius, 2015). Particular applications of these methods include financial markets (Battiston et al., 2016), food suppliers (Schueller et al., 2022), regional economies (Toth et al., 2022), ride-sharing systems (Bokanyi and Hannak, 2020), and software systems (Schueller and Wachs, 2022). Here, the resilience of a network is generally determined by monitoring the response of systems to the cumulative elimination of sections according to random order, deterministic order of criticality, and deterministic order in areas at high risk (Martin et al., 2021). Insights gained through stress tests can be used to help guide planning resources and areas of attention, in order to improve the robustness of diverse networks while their functionality remains unchanged (Schneider et al., 2011; Lin et al., 2023).
Methodological approaches to measuring resilience of road networks, in particular, vary from quantifying the travel cost of a disruption (Jenelius et al., 2006; Xie et al., 2023) to quantifying the risk to the overall network (Hackl et al., 2018). The healthcare system has specifically been studied from this perspective, especially since the Covid-19 pandemic: the additional stress of the pandemic shed light on the various problems of healthcare systems. Stress tests of hospital networks and networks of doctors have demonstrated that macro-scale medical care systems can also break down in the face of unexpected shocks (Lo Sardo et al., 2019; Kaleta et al., 2022), manifesting, for instance, in a sudden surge in patients which may overwhelm individual hospitals (Kuntz et al., 2015).
Despite the apparent interest, little work has been done to understand how the infrastructure that provides access to care is vulnerable to shocks. Therefore in the present paper, we concentrate on how road closures change patient flows and volumes to hospitals. While previous work has studied how transport infrastructure ensures the provision of essential goods to communities (Wisniewski et al., 2020; Anderson et al., 2022), to the best of our knowledge, access to healthcare has not been considered from this perspective thus far.
## 3 Data and Methods
We now outline the data and methods we will use to stress test the Austrian road system. Our aim is to evaluate the importance of specific parts of the network in terms of the population's access to healthcare at hospitals. We first describe how we create an abstracted network representation of the road system. We call the edges or links in this resulting network _corridors_ and define measures of corridor importance. We then outline the methodology of two kinds of stress tests we will apply to this network. Finally, we describe how to measure the impact of these tests on hospitals.
### Constructing a network of corridors
There are many possible ways to represent a nationwide transport network. Our goal is to create a representation of the Austrian road network that is simplified enough so that extensive stress tests are computationally feasible and still fine-grained enough to capture important details. We begin with data from GIP (Graphenintegrations-Plattform) 1 - an extensive open data source of Austrian transportation infrastructure segments, from hiking trails to highways and railroads. As we are interested in emergency response, we focus on roads that can be accessed via automobile.
Footnote 1: [https://www.gip.gv.au/](https://www.gip.gv.au/)
We first create a network of all roads, in which nodes are intersections of road segments and edges are roads. This is a very fine-grained representation of the system: with close to 1.5 million links and about 1.3 million nodes. Obviously, such a fine-grained representation presents a computational problem for network analysis and simulation: as we aim to simulate the removal of road segments and measure the impact on shortest paths to hospitals many times over, we therefore derive a coarse-grained abstraction of this network to keep shortest path calculations tractable while maintaining its core features. In the derived network nodes are _municipalities_ connected by an edge if there is a road segment ending in both municipalities. In other words: two municipalities are connected in the network if there is a direct path between them. We call these edges **accessibility corridors** or corridors for short, as they represent an abstraction of road connections between municipalities. We also record how many real-world roads between the municipalities are combined in a single corridor. This information is later used in the analysis section to emphasize the importance of the corridor in a real-world context. The resulting network representation of Austria is visualized in Figure 1, with municipalities hosting a hospital highlighted.
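A minimal sketch of this coarse-graining step is shown below. It assumes the GIP road segments have already been reduced to a table with one row per road and (hypothetical) columns for the two municipalities it connects and its length; the rule of keeping the shortest road as the corridor length is likewise an assumption made only for illustration.

```python
import networkx as nx
import pandas as pd

# Toy stand-in for the road table; in practice this would be derived from the GIP data.
roads = pd.DataFrame({
    "municipality_a": ["Gemeinde1", "Gemeinde1", "Gemeinde2", "Gemeinde2"],
    "municipality_b": ["Gemeinde2", "Gemeinde2", "Gemeinde3", "Gemeinde3"],
    "length_km":      [4.2, 5.0, 7.5, 8.1],
})

corridors = nx.Graph()
for (a, b), group in roads.groupby(["municipality_a", "municipality_b"]):
    # One accessibility corridor per pair of municipalities; record how many
    # real-world roads the corridor bundles together.
    corridors.add_edge(a, b,
                       length=group["length_km"].min(),
                       n_roads=len(group))

print(corridors.edges(data=True))
```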
### Measures of corridor importance
Given our network representation of the Austrian road transportation network, we would like to quantify the importance of specific access corridors. The literature presents several ways to measure the importance of corridors and the impact of their closure on the movement of people, in general (Jenelius et al., 2006; Xie et al., 2023; Hackl et al., 2018; Wisniewski et al., 2020; Anderson et al., 2022). Our research introduces a new method by taking the accessibility of critical infrastructure into account when measuring importance. Specifically, we observe the impact of corridor closures on the accessibility of a municipality to its _closest_ hospital. Whether a corridor's closure causes a population to take a longer, indirect path to a hospital or forces them to go to a different hospital, we infer corridor importance from increases in travel times upon their removal weighted by the impacted population numbers. The changes in the accessibility measurement are used to implement a ranking of corridors, henceforth referred to as the Accessibility Corridor Impact Score (ACIS).
The ACIS can be used to assess the impact of the initial stress test on the accessibility corridor network by estimating how a deletion impacts the distance to the closest hospital weighted by the impacted people. To determine the ACIS, we start by calculating the integral of the cumulative population with respect to the distance for the baseline case and the stress-tested situation, using the trapezoid rule. Subsequently, the ACIS can be determined by computing the difference between the baseline integral and the stressed integral, which can be expressed by the following formula:
\[ACIS=\int_{0}^{dist_{max}}P_{base}(x)\,dx-\int_{0}^{dist_{max}}P_{stress}(x) \,dx, \tag{1}\]
where \(dist_{max}\) stands for the maximum distance between a hospital and a municipality in the original situation and the population functions \(P_{base}(x)\) and \(P_{stress}(x)\) characterize how many people have a hospital reachable within \(x\) km.
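The following sketch shows one way to evaluate Equation (1) numerically, assuming we already know each municipality's population and its distance to the closest hospital before and after a deletion; the cumulative population curves are built as step functions and integrated with the trapezoid rule. All numbers are illustrative.

```python
import numpy as np

def population_within(distances, populations, grid):
    """P(x): number of people whose closest hospital is within x km."""
    order = np.argsort(distances)
    d_sorted = distances[order]
    p_cum = np.cumsum(populations[order])
    idx = np.searchsorted(d_sorted, grid, side="right")
    return np.where(idx > 0, p_cum[np.clip(idx - 1, 0, None)], 0.0)

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def acis(dist_base, dist_stress, populations, n_grid=2000):
    """Equation (1): area between the baseline and stressed cumulative population
    curves, integrated up to the baseline maximum distance."""
    grid = np.linspace(0.0, dist_base.max(), n_grid)
    p_base = population_within(dist_base, populations, grid)
    p_stress = population_within(dist_stress, populations, grid)
    return trapezoid(p_base, grid) - trapezoid(p_stress, grid)

pop = np.array([1000.0, 5000.0, 250.0])
d_before = np.array([5.0, 12.0, 30.0])
d_after = np.array([5.0, 25.0, 55.0])   # two municipalities pushed farther from care
print(acis(d_before, d_after, pop))
```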
**Alternative Measures.** As an alternative to the Accessibility Corridor Impact Score, we also calculated a measure based on Martin et al. (2021), where the authors introduce a measurement of a municipality's access to a full network of destinations based on the population distribution and the minimal distance to each node in the network. In our case, we modify this measure by switching the target to the closest hospital instead of all other municipalities in the network, given our focus on access to healthcare. The following equation represents our version of the accessibility
Figure 1: A coarse-grained representation of the Austrian road transportation system as a network. Nodes are municipalities and edges are accessibility corridors connecting them. Municipalities are colored and marked with a green cross if they contain a hospital.
measure, which we call the Hospital Accessibility of a municipality (\(H\,A_{m}\)):
\[H\,A_{m}=max_{h\in H}\left(\frac{P_{m}}{d(m,h)}\right), \tag{2}\]
where we measure the accessibility \(H\,A_{m}\) of a municipality \(m\) to the closest hospital by finding the maximum of the ratio of the population of the municipality \(P_{m}\) divided by its distance \(d\) to hospitals \(h\) in Austria. The municipality's population is included to give greater weight to those municipalities with more people because the probability of someone needing a hospital increases with increasing population.
To calculate an overall, aggregated accessibility measure for the entire country, which we call its Hospital Accessibility \(HA\), we use the following formula:
\[H\,A=\sum_{m\in M\backslash H}\frac{(H\,A_{m}*P_{m})}{P_{M\backslash H}}, \tag{3}\]
where we calculate the sum of \(HA_{m}\) over all municipalities \(m\) in Austria, except for municipalities with a hospital, and then normalize each summand by a population factor \(\frac{P_{m}}{P_{M\backslash H}}\), which takes the population \(P_{M\backslash H}\) of all Austrian municipalities without a hospital into account. This measure captures the overall efficiency of the corridor network in terms of how well it gets people from population centers to hospitals.
To rank the importance of the different corridors, we calculated the difference between the baseline accessibility score and the overall accessibility after stress testing the network. Specifically, if we remove corridor \(k\), we define its impact \(H\,A(k)\) as follows:
\[HA(k)=\frac{H\,A_{baseline}-HA_{\backslash k}}{H\,A_{baseline}}*100, \tag{4}\]
where \(H\,A_{baseline}\) is the accessibility in the original situation and \(HA_{\backslash k}\) is the overall hospital accessibility after accessibility corridor \(k\) was deleted from the network.
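A minimal sketch of Equations (2)-(4) is given below, assuming a precomputed municipality-to-hospital distance matrix restricted to municipalities without a hospital; the toy numbers are illustrative only.

```python
import numpy as np

def hospital_accessibility(dist, pop):
    """dist: (n_municipalities, n_hospitals) distances in km, pop: populations of the
    municipalities without a hospital. Returns HA of Equation (3)."""
    ha_m = np.max(pop[:, None] / dist, axis=1)        # Equation (2)
    return float(np.sum(ha_m * pop) / pop.sum())      # Equation (3)

dist_base = np.array([[10.0, 40.0], [25.0, 15.0], [60.0, 35.0]])
pop = np.array([2000.0, 800.0, 1200.0])

ha_baseline = hospital_accessibility(dist_base, pop)
dist_after_deletion = dist_base.copy()
dist_after_deletion[0, 0] = 55.0                      # a corridor loss lengthens one route
ha_stressed = hospital_accessibility(dist_after_deletion, pop)
print(100 * (ha_baseline - ha_stressed) / ha_baseline)  # Equation (4): HA(k) in percent
```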
As a third alternative besides the \(\mathrm{ACIS}\) and \(HA\) measures of corridor importance, we also considered a popular way of ranking edges in networks called _edge betweenness centrality_. In our context, this defines the importance of a corridor as follows:
\[c_{B}(\text{corridor }e)=\sum_{s\in M\backslash H,\;t\in H:\;d(s,t)\leq 100\,km}\frac{\sigma(s,t|e)}{\sigma(s,t)}, \tag{5}\]
where the betweenness centrality \(c_{B}\) of a corridor \(e\) is the sum of fractions of all shortest paths between a municipality \(s\in M\) and a hospital \(t\in H\) that use the corridor \(e\), divided by the number of all shortest paths between them (denoted by \(\sigma(s,t)\)). In plain words, this measure calculates how often corridors appear on the shortest paths between all pairs of municipalities and hospitals in the country at most 100km apart from one another.
### Stress testing corridor networks
In an effort to establish a ranking of the accessibility corridors based on their importance to hospital accessibility, we conducted two distinct stress tests of the Austrian accessibility corridor network. The first kind of test tracks the reaction of the system to the deletion of a single accessibility corridor. Specifically, we remove one link from the network and calculate the accessibility of each municipality to its closest hospital post-deletion. The ranking of the corridors was based on the resulting \(\mathrm{ACIS}\) for each deletion: the higher the \(\mathrm{ACIS}\) score a corridor receives, the higher its ranking.
We show a concrete example of such a corridor deletion in Figure 2. On the left, we color municipalities by how long they must travel to reach a hospital when the network is functioning undisturbed. Following the removal of the focal corridor, visualized on the right, people in several municipalities must travel significantly farther to reach healthcare. This would be reflected in a large \(\mathrm{ACIS}\) score for this corridor.
While the results of the first stress test serve as a fine approximation for the topological importance of corridors, real-world events often impact roads across wider geographic areas. For instance, a weather event like a snowstorm could impact roads across entire regions. Even when only a single corridor or road is closed, the resulting congestion may cause significant delays for travelers on nearby alternatives.
The second stress test thus introduces neighborhood outages of roads. It measures the system's functionality after the deletion of a corridor and its neighboring corridors. This idea was initially sparked by the observation that severe
weather conditions often have a widespread impact across geographic space rather than being confined to a single location. To increase the potential volatility of the stress test, we first delete the focal corridor, then with a fixed probability \(p\) remove each of its immediate neighbors. This fixed probability \(p\) was chosen to simulate the decreasing severity of a weather event with increasing distance from its core, which is assumed to be at the focal corridor. In particular, we ran 100 simulations for each corridor and its neighborhood with \(p\in[0.1,0.25,0.5,0.75]\). In each simulation, the hospital accessibility of the network and the distance to the closest hospital for each municipality were calculated, and the same impact measures were calculated as for the single corridor removal stress test.
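The sketch below illustrates one round of this neighborhood stress test on a toy corridor graph: the focal corridor is removed, each adjacent corridor is removed independently with probability \(p\), and the distance from every municipality to its nearest hospital is recomputed. Graph construction, attribute names, and the toy data are assumptions for illustration.

```python
import networkx as nx
import random

def nearest_hospital_distances(G, hospitals):
    """Distance (in km) from every municipality to its closest hospital."""
    dist = nx.multi_source_dijkstra_path_length(G, hospitals, weight="length")
    return {m: dist.get(m, float("inf")) for m in G.nodes}

def neighborhood_stress_test(G, hospitals, focal_edge, p, rng):
    H = G.copy()
    H.remove_edge(*focal_edge)
    u, v = focal_edge
    # Remove each corridor adjacent to the focal one with probability p.
    for nbr_edge in list(H.edges(u)) + list(H.edges(v)):
        if rng.random() < p and H.has_edge(*nbr_edge):
            H.remove_edge(*nbr_edge)
    return nearest_hospital_distances(H, hospitals)

# Toy corridor network: a chain of municipalities with a hospital at each end.
G = nx.path_graph(6)
nx.set_edge_attributes(G, 10.0, "length")
hospitals = [0, 5]
rng = random.Random(42)
for _ in range(3):
    print(neighborhood_stress_test(G, hospitals, focal_edge=(2, 3), p=0.5, rng=rng))
```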
### Measuring hospital vulnerability
While the resilience of infrastructure networks and the accessibility changes caused by disturbances have been studied in previous research (Jenelius et al., 2006; Xie et al., 2023; Wisniewski et al., 2020; Anderson et al., 2022), less attention has been paid to how transportation infrastructure disturbances impact potential flows to healthcare centers. For instance, a key road closure could greatly increase the number of people going to a specific hospital as their closest point of care.
Therefore we also explored an alternative approach to measuring the impact of corridor deletions in terms of their impact on hospital catchment areas. In particular, we look for which hospitals become responsible for a significantly larger population as their closest point of care when specific corridors are closed. This allows us to measure the strain on hospitals resulting from corridor closures. To quantify this impact on hospitals, we assess how many people have to move from one hospital to another for each stress test, which can be mathematically written as:
\[P_{affected}=\sum_{M\in Change}P_{M}, \tag{6}\]
where we calculate the total affected population \(P_{affected}\) by summing over all \(M\in Change\), which is the set of municipalities that have a new closest hospital after the simulated deletion of a corridor, and \(P_{M}\) stands for the population of municipality \(M\). From this calculation, we are also able to calculate the new number of patients per hospital. Besides this, we use the different stress test results to calculate how often a hospital experiences a changing inflow due to a corridor deletion.
Through the application of these measurements, we can better understand how hospital catchment areas change due to the alteration of the accessibility corridor network. For instance, a corridor closure may change the closest hospital for a significant number of people. The changing size of the hospital catchment area thus either causes a growth or reduction in the patient flow to specific hospitals, straining or relaxing those hospitals' capacity, respectively. By examining the effect of corridor deletions on hospital catchment areas, we can derive a map of redundancy relationships between hospitals. This map allows us to report hospitals that could be more prone to sudden patient influx during crisis events which lead to corridor closures.
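The catchment bookkeeping of Equation (6) amounts to comparing each municipality's closest hospital before and after a deletion, as the following sketch illustrates with made-up data.

```python
from collections import Counter

population      = {"A": 4000, "B": 1500, "C": 900}
closest_before  = {"A": "H1", "B": "H1", "C": "H2"}
closest_after   = {"A": "H1", "B": "H2", "C": "H2"}   # B switches hospitals

# Equation (6): total population whose closest hospital changes.
affected = sum(population[m] for m in population
               if closest_before[m] != closest_after[m])

# New catchment size (population served as closest point of care) per hospital.
catchment_after = Counter()
for m, h in closest_after.items():
    catchment_after[h] += population[m]

print(affected)            # 1500 people change their closest hospital
print(catchment_after)     # new population per hospital
```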
## 4 Results
In this section, we present the results of our analyses. We first focus on the single corridor deletion stress test. We find that according to the ACIS measure, the impact of such deletions is highly heterogeneous: some corridors are
Figure 2: Here we observe how the deletion of a single corridor, highlighted in the white box, significantly impacts the hospital access of the surrounding municipalities (adapted from Schuster et al. [2022]).
significantly more important than the average one. We show a significant correlation between the ACIS measure and the \(HA\) measure of corridor importance in this scenario. We investigate the relationship between corridor ACIS score and the number of roads in a corridor, finding highly important corridors containing very few roads. We also analyze changes in travel times. We then analyze the results of the corridor neighborhood stress test. Finally, we present two case studies in which hospitals are often or significantly impacted by corridor closures.
### Single Corridor stress test
First, we found that most accessibility corridor closures have a low impact on the population, see Figure 2(a). In this figure we plot the complementary cumulative density function (CCDF) of the Accessibility Corridor Impact Score of each corridor. In general, the closure of any given corridor is a minor nuisance in terms of getting to a hospital. However, there are a few accessibility corridors which have a tremendous impact on the system if closed, observed in the right tail of this figure. Furthermore, the results of the neighborhood deletion stress test, see Figure 2(b), suggest that locally correlated corridor closures can have an even greater impact. This is our first important result: as a few corridors are much more important than the typical one, policymakers can focus their attention on just a few parts of the (abstracted) road network. Improvements at these key points can make a significant difference to the resilience of the whole system.
A comprehensive representation of the simulation results using the Accessibility Corridor Impact Score (ACIS) of the single corridor stress test can be found in Figure 4. To provide context for these findings, we now interpret which corridors play a crucial role according to this first stress test. In the map, we see that the highlighted corridors seem to function as connectors to otherwise isolated dead-ends of the network or as critical connectors reducing travel time between different regions. Another category of highlighted corridors consists of short-cuts directly connected to a hospital.
As corridors are abstractions that bundle together any number of roads between two neighboring municipalities, we look more closely at the relationship between the ACIS ranking and the number of roads within a corridor in the inset of Figure 4. We find that there are many examples of corridors containing just a few roads and having a high ACIS. These corridors are perhaps the most important ones to focus on: they are both systemically important and contain few local redundancies. This is especially relevant when the topography of Austria is considered, as many valleys in the
Figure 3: The CCDF of the Accessibility Corridor Impact Score scores calculated from the deletion of Accessibility Corridors under different stress tests. In general, we observe that most corridors are not critical in providing access to hospitals but that a few are orders of magnitude more important.
Alps are only connected to the rest of Austria by a single corridor. If a road like that is blocked, the access to a hospital of the municipalities in the valley is cut off.
What about the other measures of corridor importance? Under the single deletion stress test scenario, \(\mathrm{ACIS}\) and \(HA(k)\) are highly correlated (Spearman's \(\rho=0.83\)). Edge betweenness centrality, on the other hand, is not significantly correlated with the \(\mathrm{ACIS}\) measure (Spearman's \(\rho=0.09\)), nor with the \(HA(k)\) measure (Spearman's \(\rho=0.098\)). As edge betweenness centrality evaluates corridor importance in terms of access to multiple hospitals, we focus on the other measures, as they capture access to the closest point of care.
Upon closer examination, we found that \(\mathrm{ACIS}\) and \(HA(k)\) do deviate significantly from each other in the most important cases. If we consider the top 100 corridors according to either ranking, this correlation turns negative (Spearman's \(\rho=-0.26\)). This means that the two rankings significantly diverge in terms of which corridors they consider most important. In Figure 5, we plot the map of Austria with important corridors according to the \(HA(k)\) measure highlighted. We observe that the \(HA(k)\) measure tends to rank the last corridors leading directly to hospitals as the most important, while the \(\mathrm{ACIS}\) measure tends to emphasize corridors that appear to bridge regions. Among the top 100 \(HA(k)\) ranked corridors, the average corridor contains 14 roads, while the top 100 \(\mathrm{ACIS}\) ranked corridors contain on average only 11 roads. This suggests that the \(\mathrm{ACIS}\) is ranking highly those corridors that bridge regions and are highly vulnerable due to their dependence on fewer road segments. In the rest of the paper, we therefore focus on the \(\mathrm{ACIS}\) measure.
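The rank comparisons quoted above can be reproduced with standard tools; a minimal sketch using placeholder score arrays (random stand-ins, not the real results) stored in a shared corridor order:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Placeholder per-corridor scores standing in for the real results.
acis = rng.pareto(2.0, size=1000)                              # heavy-tailed, like the empirical ACIS
ha_k = 0.8 * acis + rng.normal(0.0, acis.std(), size=1000)     # correlated stand-in for HA(k)

rho_all, _ = spearmanr(acis, ha_k)            # correlation over all corridors
top = np.argsort(-acis)[:100]                 # indices of the 100 highest-ACIS corridors
rho_top, _ = spearmanr(acis[top], ha_k[top])  # correlation restricted to the top 100
print(f"all corridors: rho = {rho_all:.2f}; top 100: rho = {rho_top:.2f}")
```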
To make the analysis more concrete, we report changes in driving times experienced by people following the single deletion stress test in Figure 6(a). Specifically, we examine the number of additional individuals who would need to drive at least 15, 30, or 60 minutes, assuming a speed limit of 50 km/h, following a corridor deletion. We observe that a significant number of people would have to drive more than 15 minutes following such a deletion. As before, it is worth focusing on the extreme cases: some corridors cause thousands of people to have to drive over 60 minutes to get to a hospital. We report specific examples in Figure 6(b): the deletion of the corridor reported in the first row, which contains a single road, causes a nearly 20-minute increase in driving time for over 10,000 people. Such delays can make a significant difference in critical care outcomes. To that end, we report those corridors whose deletion increases average travel time by at least five minutes in Table 1. Such a difference has been shown to cause a statistically observable increase in the 30-day mortality rate in critical cases, cf. Jena et al. (2017). For example, the deletion of the corridor in the first row of Table 1 leads to a mean increase of approximately 7 minutes for more than \(40,000\) people.
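A small sketch of how such threshold counts can be tallied, assuming hypothetical dictionaries of per-municipality travel times (in minutes) before and after a deletion and a population table; all names are illustrative:

```python
def delay_impact(time_before, time_after, population, thresholds=(15, 30, 60)):
    """Count the additional people whose driving time to the nearest hospital
    exceeds each threshold (in minutes) after a corridor deletion."""
    counts = {}
    for t in thresholds:
        newly_over = [m for m in time_before
                      if time_before[m] <= t
                      and time_after.get(m, float("inf")) > t]
        counts[t] = sum(population[m] for m in newly_over)
    return counts
```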
Figure 4: The top 100 most important corridors based on \(\mathrm{ACIS}\) under the single corridor deletion stress test. Inset: the relationship between the number of roads in a corridor and its \(\mathrm{ACIS}\) score. We observe critically important corridors containing few roads.
### Corridor Neighborhood stress test
We now discuss the results of our second stress test, where we simulated the deletion of corridor neighborhoods to see the network's reaction to more significant alterations. To recapitulate, the main idea of the second stress test is to extend the first one by introducing local geographic correlations in road closures, reflecting for example the broader impacts of extreme weather. Besides ranking the corridor neighborhoods, we also compare the rankings to the initial stress test to see if the same areas are impacted.
Figure 5: The top 100 most important corridors based on the accessibility corridor importance factor \(HA(k)\) under the single corridor deletion stress test. Inset: the relationship between the number of roads in a corridor and its \(HA(k)\) score. We observe critically important corridors containing few roads.

Each instance of the second stress test also focuses on a single focal corridor. It additionally considers all neighboring corridors, removing them from the system with a probability \(p\). For each focal corridor we ran 100 simulations for each \(p\in\{0.1, 0.25, 0.5, 0.75\}\). This approach yields a distribution of impact scores for each corridor. For each focal corridor and \(p\) we considered the mean and the 90th percentile of the resulting \(\mathrm{ACIS}\) distribution.
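The neighborhood stress test can be sketched as a Monte Carlo loop around the single-deletion machinery. In the sketch below, `impact_score` stands in for the ACIS computation defined earlier in the paper, neighboring corridors are taken to be those sharing an endpoint with the focal corridor, and all names are illustrative assumptions rather than the authors' code:

```python
import random
import numpy as np

def neighborhood_stress_test(G, focal, impact_score, p, n_runs=100, seed=0):
    """Delete the focal corridor and, with probability p, each corridor that
    shares an endpoint with it; collect impact scores over n_runs realizations."""
    rng = random.Random(seed)
    u, v = focal
    neighbors = [e for e in G.edges() if (u in e or v in e) and set(e) != {u, v}]
    scores = []
    for _ in range(n_runs):
        H = G.copy()
        H.remove_edge(u, v)
        for e in neighbors:
            if rng.random() < p and H.has_edge(*e):
                H.remove_edge(*e)
        scores.append(impact_score(H))
    return float(np.mean(scores)), float(np.percentile(scores, 90))
```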
For low values of \(p\), i.e. 0.1, the Spearman correlation between the \(\mathrm{ACIS}\) of the single corridor deletion stress test and that of the neighborhood deletion stress test is high: 0.57 for the mean and 0.41 for the 90th percentile result. However, this correlation quickly drops as we consider higher likelihoods of correlated corridor failures. At \(p=0.75\), the correlations drop to 0.08 for the mean and 0.03 for the 90th percentile. This implies that when corridors fail across a larger geographic area, as is often the case, the impact on hospital access is very different from the situation in which a single corridor is removed.
Indeed, in Figure 7, we observe that the top 100 most important corridors according to the neighborhood deletion stress test are quite different from those under the single corridor deletion stress test when the deletion probability is increased. Comparing the top 100 most impactful corridors of the single-deletion \(\mathrm{ACIS}\) ranking with the neighborhood-deletion ranking at a deletion probability of 25% shows that only 15% of the corridors appear in both top 100 lists. This overlap is even lower for higher probabilities. Furthermore, it is apparent from this visualization that in many cases important corridors are directly connected to a hospital and form small clusters, indirectly highlighting vulnerable neighborhoods within the corridor network as a whole.
### Hospital vulnerability
In this section, we will study the vulnerability of hospitals based on the single corridor removal stress test. By closely examining the relations between accessibility corridor deletions and hospital catchment area shifts, we are able to derive a map of redundancies between hospitals, and to quantify which hospitals are at risk of suddenly becoming responsible for a significantly greater number of patients due to road closures. The resulting map provides detailed insight into the dynamics of patient flow between the hospitals caused by the alteration of the corridor network.
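A minimal sketch of this bookkeeping, reusing the hypothetical `nearest_hospital` helper from the earlier sketch together with assumed population and bed-capacity tables; it reports the relative change in people per bed at each hospital after a single corridor deletion:

```python
from collections import Counter

def people_per_bed(assignment, population, beds):
    """People per bed at each hospital, given a municipality -> hospital assignment."""
    load = Counter()
    for municipality, hospital in assignment.items():
        load[hospital] += population[municipality]
    return {h: load[h] / beds[h] for h in beds}

def bed_pressure_change(G, hospitals, population, beds, corridor):
    """Percentage change in people per bed at each hospital after deleting `corridor`."""
    before = people_per_bed(nearest_hospital(G, hospitals), population, beds)
    H = G.copy()
    H.remove_edge(*corridor)
    after = people_per_bed(nearest_hospital(H, hospitals), population, beds)
    return {h: 100.0 * (after[h] - before[h]) / before[h]
            for h in beds if before[h] > 0}
```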
| From | To | Old travel time (min) | Delta (min) | New travel time (min) | Affected population | Affected municipalities | Roads |
|---|---|---|---|---|---|---|---|
| AT41743 | AT41746 | 18.48 | 7.29 | 25.77 | 42656 | 14 | 24 |
| AT70926 | AT70921 | 31.33 | 7.89 | 39.19 | 30791 | 21 | 6 |
| AT70332 | AT70331 | 25.45 | 6.56 | 32.07 | 27080 | 15 | 7 |
| AT61045 | AT61053 | 13.98 | 8.8 | 22.78 | 26548 | 8 | 37 |
| AT70921 | AT70910 | 34.83 | 6.68 | 41.47 | 26260 | 18 | 2 |
| AT70331 | AT70350 | 27.01 | 6.5 | 33.57 | 24850 | 14 | 4 |
| AT70910 | AT70935 | 36.28 | 6.28 | 42.52 | 24824 | 17 | 2 |
| AT70935 | AT70923 | 37.58 | 9.53 | 47.13 | 22987 | 16 | 8 |
| AT70413 | AT70416 | 17.0 | 7.63 | 24.6 | 21291 | 6 | 4 |
| AT31839 | AT31818 | 19.58 | 7.82 | 27.38 | 18747 | 4 | 12 |
| AT61114 | AT61108 | 23.3 | 9.65 | 32.92 | 18027 | 4 | 9 |
| AT20923 | AT20913 | 24.25 | 10.45 | 34.7 | 17898 | 4 | 50 |
| AT10706 | AT10724 | 37.1 | 6.03 | 43.1 | 17543 | 7 | 5 |
| AT20101 | AT20402 | 27.3 | 6.18 | 33.45 | 16737 | 6 | 19 |
| AT40101 | AT41624 | 13.55 | 6.7 | 20.25 | 16731 | 4 | 9 |

Table 1: Impact of the deletion of an accessibility corridor (Top 15) on the travel time, with a focus on the time difference. We include the number of roads in each accessibility corridor and the affected population, and report the corridors with the largest affected populations, thresholded at a five-minute increase.

The map and scatter plot inset in Figure 8 offer a compelling portrayal of the frequency and magnitude of impacts that hospitals experience during the different stress tests. To delve deeper into the analysis, we have selected two illuminating cases that exemplify two kinds of vulnerable hospitals. Our first example, located in Kalwang, is a hospital that is only impacted by a few specific corridor closures. However, when those corridors close, the impact is extreme, with a 250% increase in the number of people in its catchment area per bed, see Table 3. These corridors would otherwise provide access to hospitals in Rottenmann (162 beds) or Leoben (408 beds). In other words, closures of nearby corridors can lead to dramatic surges at this hospital, which has a capacity of only 72 beds.
The second example, located in Ried im Innkreis, is instead potentially impacted by many corridor closures, but to a smaller degree. Over 20 different corridors can impact its catchment area, but they increase the population-to-bed ratio by less than 10%. In other words, this hospital will likely often see small increases in the population for which it is the first point of service. Such small increases can nevertheless be the source of significant volatility over time in hospital admissions.
Even though the former is an extreme case where the closure of accessibility corridors blocks the way to big hospitals and puts a lot of strain on a small hospital, it shows how small changes can have big effects. Both cases show how our developed method can be used to refine the analysis and offer another perspective on the stress test.
| People per bed (initial) | People per bed (after deletion) | Change (%) | Change (people per bed) | Deleted corridor | # Roads | Affected population |
|---|---|---|---|---|---|---|
| 183.56 | 203.40 | 10.81 | 19.83 | (AT41743, AT41746) | 24 | 9 787 |
| 183.56 | 196.04 | 6.80 | 12.48 | (AT41426, AT41402) | 3 | 5 203 |
| 183.56 | 195.31 | 6.40 | 11.74 | (AT40419, AT40418) | 5 | 4 897 |
| 183.56 | 195.28 | 6.38 | 11.71 | (AT41716, AT41743) | 2 | 4 884 |
| 183.56 | 195.28 | 6.38 | 11.71 | (AT41716, AT41709) | 15 | 4 884 |
| 183.56 | 193.96 | 5.66 | 10.39 | (AT40830, AT40808) | 4 | 4 334 |
| 183.56 | 191.69 | 4.42 | 8.12 | (AT41711, AT41743) | 8 | 3 387 |
| 183.56 | 191.69 | 4.42 | 8.12 | (AT41747, AT41711) | 9 | 3 387 |
| 183.56 | 190.07 | 3.54 | 6.50 | (AT41422, AT41418) | 15 | 2 712 |

Table 2: The ten cases in this table exemplify the significant impact on the hospital in Ried im Innkreis, Oberösterreich, with a focus on the change of people per bed.
Figure 7: The Top 100 most important corridors based on ACIS under the neighborhood corridor deletion stress test with a deletion probability of 25% for the neighboring corridors. We observe that there is a notable presence of corridors directly connected to hospitals among the Top 100.
## 5 Conclusion and Future Research
In this paper, we show that the resilience of a population's access to healthcare can be meaningfully analyzed and quantified based on stress tests of road-based transportation networks. These stress tests can provide meaningful insights into the dependencies between different systems, in this case the transportation system and hospitals. By ranking corridors based on their importance in terms of hospital access, we can identify corridors of interest for policymakers seeking to allocate limited resources. In particular, we show that there are high-impact corridors containing few roads which provide access to emergency care for many people. We also show that some hospitals are vulnerable to sudden surges in the number of people they are responsible for.
Based on these results, further investigations can be conducted, for instance by including the area around the corridors of interest. This would lead to guidelines for improving the underlying network's resilience and general access to health care. Analyzing the corridors' neighborhoods is another step from the abstract model toward real-world scenarios where natural disasters impact whole regions.
Our results show that certain road segments and corridors play a pivotal role in access to hospitals in Austria. The disruption of these roads during crisis scenarios can have a significant impact on travel time to hospitals for large numbers of people. Specific municipalities are especially vulnerable to the closure of specific road corridors. Hospitals can also be affected: a road closure can change which hospital is closest for large numbers of people. The skewed importance of some corridors highlights vulnerabilities but also gives policymakers something to focus on.
| People per bed (initial) | People per bed (after deletion) | Change (%) | Change (people per bed) | Deleted corridor | # Roads | Affected population |
|---|---|---|---|---|---|---|
| 83.32 | 298.23 | 257.96 | 214.92 | (AT61114, AT61108) | 9 | 15 689 |
| 83.32 | 298.23 | 257.96 | 214.92 | (AT61114, AT61120) | 16 | 15 689 |
| 83.32 | 134.27 | 61.16 | 50.96 | (AT61263, AT61247) | 10 | 3 720 |

Table 3: The following cases exemplify the significant impact on the hospital in Kalwang, Steiermark, with a focus on the change of people per bed.

Figure 8: Exploring the impact of corridor deletions on the hospitals in the network: comparing the probability that a hospital is affected to the stress tests' impact on the hospital. In this illustration, a comparison is made between a high-probability case with a moderate impact and a low-probability case with a high impact.

Compared to previous work, we focus on the specific problem of accessibility of hospitals in our stress test, using a newly derived measure as well as adapting an appropriate accessibility measure [14]. As we are interested in hospital accessibility, which is a local network problem, this derived measure of road importance is more appropriate than a global measure such as edge betweenness centrality. Further comparison shows that the measure introduced in this paper is a better fit for our problem, since the accessibility measure from [14] over-emphasizes city size for our application. In the scenario described in this paper, the size of a hospital does not add to its attractiveness, and the focus is solely on the accessibility of medical care. If the size of the target is important to the simulation, for example the size of a city or the number of hospital beds, an adaptation of [14] is more practical. However, in this simulation, we assume that fast access to health care is crucial and not influenced by the size of the hospital.
Our work also has policy implications for the healthcare sector. Past research has demonstrated that hospital capacity has a tipping point in terms of care quality: when occupancy exceeds a critical level of capacity, mortality outcomes worsen significantly [14]. Our work shows that such surges can occur due to the outcomes in another complex system. Flexible staffing and pooled capacity across hospitals, effective policies recommended in this previous work, should take into account how exogenous shocks influencing transport networks can create surges and limit the effectiveness of these interventions.
Rather than vulnerability to specific events (e.g., floods [13]), we consider abstract road closures. By coarse-graining the Austrian road network, we can run a more thorough stress test on a country-wide network. Both of these simplifications enable us to easily identify network sections of interest. More fine-grained versions of these sections can be further investigated in future work using more realistic stress tests. This would be especially tractable if policymakers wished to zoom in on a specific region or part of the country.
Our study has several limitations. Roads in different parts of the country may be more or less vulnerable to closure, given factors like local weather and altitude. Future work should consider historical weather patterns and their correlation with road closures. More realistic stress tests can be developed in this way. Some but not all of the road corridors we highlight pass through extremely rugged terrain (e.g., the Alps). Creating redundancies in this context may be very expensive. In such areas it may be more useful to create redundancies at highly impacted hospitals.
Furthermore, our results show that overlaying the \(\mathrm{ACIS}\) measurement with the number of real-world roads in a corridor can help to better understand the implications of a corridor closure. Therefore, we propose to update the introduced \(\mathrm{ACIS}\) method by including a factor that takes the number of real-world roads into account.
More generally, our approach to stress-testing road networks can be applied in a variety of contexts. For example, rather than measuring the accessibility of hospitals from population centers, we may measure accessibility of population centers from firefighting stations. Indeed, many critical services rely on the functionality of transportation systems like the road network. Social, demographic, and environmental factors suggest that these systems will only experience greater strain in the coming decades. Whether due to climate change, large-scale migration, or the aging of the population, the resilience and robustness of these services as a function of the systems they depend on will merit increasing scrutiny. Integrating various forms of data, for example in a knowledge graph designed for crisis management [15], can greatly expand the potential scope of our simulations. The knowledge graph can be used in future work to determine additional risk factors for roads due to overlapping networks, e.g., river networks, which can increase the risk to roads located nearby or crossing a river. By adding these risk factors to the simplified network, the simulation can be adapted by updating the probabilities, and consequently the relevance of the findings to real-world situations can be improved.
|
2305.13524 | Upper extremity kinematics: Development of a quantitative measure of
impairment severity and dissimilarity after stroke | Strokes are a leading cause of disability, with many experiencing difficulty
in recovering arm movement, particularly hand function and grasping ability.
There is currently no objective measure of movement quality, and without it,
rehabilitative interventions remain at best estimations of the underlying
neural structures response to produce movement. In this paper, we utilize a
novel modification to Procrustean distance to quantify curve dissimilarity and
propose the Reach Severity and Dissimilarity Index (RSDI) as an objective
measure of motor deficits. All experiments took place at the Medstar National
Rehabilitation Hospital; persons with stroke were recruited from the hospital
patient population. Using Fugl-Meyer (FM) scores and reach capacities, stroke
survivors were placed in mild or severe impairment groups. Individuals
completed sets of reach-to-target tasks to extrapolate kinematic metrics
describing motor performance. The Procrustes method of statistical shape
analysis was modified to identify reaching sub-movements that were congruous to
able-bodied sub-movements. Movement initiation proceeds comparably to the
reference curve in two- and three-dimensional representations of mild
impairment movement. There were significant effects of the location of
congruent segments between subject and reference curves, mean velocities, peak
roll angle, and target error. These metrics were used to calculate a
preliminary RSDI score with severity and dissimilarity sub-scores, and subjects
were reclassified in terms of rehabilitation goals as Speed Emphasis, Strength
Emphasis, and Combined Emphasis. The Modified Procrustes method shows promise
in identifying disruptions in movement and monitoring recovery without adding
to patient burden. The proposed RSDI score, while limited in scope, can be
adapted and expanded to other functional movements and used as an objective
clinical tool. | Khadija F. Zaidi, Michelle Harris-Love | 2023-05-22T22:40:21Z | http://arxiv.org/abs/2305.13524v1 | # Upper extremity kinematics: Development of a quantitative measure of impairment severity and dissimilarity after stroke
###### Abstract
_Background_ Strokes are a leading cause of disability worldwide, with many survivors experiencing difficulty in recovering upper extremity movement, particularly hand function and grasping ability. There is currently no objective measure of movement quality, and without it, rehabilitative interventions remain at best informed estimations of the underlying neural structures' response to produce movement. In this paper, we utilize a novel modification to Procrustean distance to quantify curve dissimilarity and propose the Reach Severity and Dissimilarity Index (RSDI) as an objective measure of motor deficits.
_Methods_ All experiments took place at the Medstar National Rehabilitation Hospital; persons with stroke were recruited from the hospital patient population. Using Fugl-Meyer (FM) scores and reach capacities, stroke survivors were placed in either mild or severe impairment groups. Individuals completed sets of reach-to-target tasks to extrapolate kinematic metrics describing motor performance. The Procrustes method of statistical shape analysis was modified to identify reaching sub-movements that were congruous to able-bodied sub-movements.
_Findings_ Movement initiation proceeds comparably to the reference curve in both two- and three-dimensional representations of mild impairment movement. There were significant effects of the location of congruent segments between subject and reference curves, mean velocities, peak roll angle, and target error. These metrics were used to calculate a preliminary RSDI score with severity and dissimilarity sub-scores, and subjects were reclassified in terms of rehabilitation goals as "Speed Emphasis", "Strength Emphasis", and "Combined Emphasis".
_Interpretation_ The Modified Procrustes method shows promise in identifying disruptions in movement and monitoring recovery without adding to patient or clinician burden. The proposed RSDI score, while limited in scope, can be adapted and expanded to other functional movements and used as an objective clinical tool. By reducing the impact of stroke on disability, there is a significant potential to improve quality of life through individualized rehabilitation.
## Introduction
Strokes represent one of the leading causes of disability worldwide. 65% of stroke survivors experience some difficulty in recovering the ability to reach [12, 14, 15], with more severe impairments featuring a loss of hand function and ability to grasp [22, 21, 17]. At 6 months post stroke, many continue to experience some degree of upper extremity hemiparesis. This unilateral impairment of the paretic limb impacts functional reaching, and is a major contributor to stroke-related disability [52].
Early signs of motor control interruption include paralysis, reduced reflexes, and inability to produce resistance to perturbations [36, 49]. Symptoms arising during the chronic post-stroke recovery phase may include increased reflex activity or spasticity [5, 8]. Compensatory movements may also arise in lieu of true recovery, such as extending the trunk to reach a target at arm's length due to decreased joint range of motion [25]. Stroke severity can significantly impact the type and amount of deficits experienced by an individual and the efficacy of particular rehabilitative strategies [37]. While mechanisms of arm recovery have been studied after mild functional impairments [16, 29], there are few effective treatments for the large portion of the stroke population with more severe impairments. An objective measure of severity and the nature of deficits is of interest in creating individualized rehabilitation plans [78, 35].
Three-dimensional kinematic analyses provide objective methods to characterize movement subsequent to stroke [2, 9, 28, 63]. Kinematics of the upper extremity obtained through motion capture and 3D positional data can provide more sensitive tools to objectively assess individual motor function after stroke [26, 18, 10]. Active and passive visual markers, electromagnetic sensors, and inertial sensors have been used extensively for human movement analysis and can provide metrics such as movement speed, movement smoothness, joint angles, and limb orientation from position data [3, 31].
Currently, there is no consensus on the most appropriate tasks or variables to provide a global description of upper extremity movement [38, 39, 1]. With significant variability between individuals, clinicians use measures such as the Upper Extremity Fugl-Meyer scale to subjectively describe movement capability [30, 40]. Without an objective measure of movement quality, rehabilitative interventions are at best informed estimations of how the underlying neural structures will respond and produce movement. Subjective clinical scores cannot identify where in the movement a deficit occurs and what that might suggest as the best rehabilitative plan [24, 23, 20]. Subjective scales also cannot efficiently monitor changes in impairment severity and dissimilarity over time.
In this paper, we propose a modified Procrustes analysis method applied to groups of persons with stroke, differentiated by movement severity. Utilizing upper extremity endpoint data from these two groups, this method was used to identify movement behaviors and metrics that differentiate
the mild and severe impairment groups. Finally, this study includes a preliminary severity and dissimilarity score of upper extremity movement that draws inspiration from scores such as the Gait Profile Score (GPS) [44] or the Gait Deviation Index (GDI) [43]. The GPS evaluates overall gait pathology severity based solely on kinematic data for a given individual, while the GDI identifies how much an individual's gait features deviate from a reference set of able-bodied data. A single measure of the overall quality of an upper extremity movement, overall severity, and dissimilarity from reference data would be of interest in informing clinical decisions.
The paper is organized as follows:
* Validity of utilizing Procrustean Distance in Upper Extremity Analysis,
* Objectives and hypotheses of applying a modified Procrustes analysis to endpoint data,
* This study's inclusion and exclusion criteria for persons with stroke,
* Clinical measures used to classify patients into mild and severe impairment groups,
* Description of the experimental protocol and study methodology,
* Definitions of kinematic metrics included in the data analysis,
* Statistical tests performed to identify significant differences between subject groups,
* Resulting quantitative measures of severity and dissimilarity that inform the proposed "Reach Severity and Dissimilarity Index" (RSDI)
## Background
In mathematics, the Euclidean distance between two points is the length of a line drawn between them. Root-Mean-Square Error (RMSE) is another method of quantifying how much one set of data differs from a reference set. Both Euclidean distance and RMSE have been used to construct measures of movement quality in the lower limb [44, 68] and the upper limb [64, 65, 66]. Additionally, Principal Component Analysis (PCA) is commonly used to simplify the interdependent data that is necessary to represent participating limb segments and joints, task requirements, and environmental constraints that produce any particular movement [61]. Clinical decisions can then be based on an interpretation of the complex data. The validity of scores generated by quantifying differences between mean reference data and paretic movement data has been established in the field of rehabilitation [11, 62].
Procrustes Analysis is another such psychometric method of quantifying difference or dissimilarity between two sets of data [76]. Procrustes distance has recently garnered attention as a metric in both gait [69, 72, 73] and upper extremity studies [79, 67, 70, 71]. Procrustes Analysis quantifies similarity of shape between two matrix sets and provides the linear transformation that would allow one curve to best conform to the other. More specifically, the Procrustes method compares each ith element of the subject curve to the ith element of the reference curve. This method generates a scaling factor b, an orthogonal rotation and reflection matrix T, a translation matrix C, and a Procrustes distance d. Computing the Procrustes distance presents an interesting advantage in quantifying subject performance. Additionally, the scaling factor b can indicate a prolonged or truncated movement, while the ability to compare a reflected curve can allow comparison of right and left limb movements to the same reference curve [74, 75]. In addition to discrete kinematic landmarks, the variability across an entire movement can be assessed in order to extrapolate an objective and sensitive representation of upper limb movement.
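For reference, the standard (scaled) Procrustes comparison is available off the shelf in SciPy; a minimal sketch comparing two synthetic stand-in trajectories sampled at the same number of time points (these curves are illustrative, not experimental data):

```python
import numpy as np
from scipy.spatial import procrustes

# Synthetic stand-ins for a reference (able-bodied) path and a subject path,
# each sampled at 200 time points in 3-D.
t = np.linspace(0.0, 1.0, 200)
reference = np.column_stack([t, np.sin(np.pi * t), 0.10 * t])
subject = np.column_stack([t, 0.8 * np.sin(np.pi * t), 0.15 * t])

ref_std, subj_std, disparity = procrustes(reference, subject)
# `disparity` is the sum of squared pointwise differences after optimal
# translation, scaling, and rotation; 0 would mean identical shapes.
print(f"Procrustes disparity: {disparity:.4f}")
```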
In order to support the proposed RSDI score, we quantitatively identified segments of the forward reaching movement that showed the least deviation when compared to a reference curve representing stereotypical able-bodied reaching behavior. These segments of movement were characterized not by the magnitude of discrete kinematic metrics but rather by when they occur relative to those metrics and when they occur during the overall movement. We hypothesize that subjects with mild impairment will exhibit initial acceleration behaviors that are analogous to healthy movement, while subjects with more severe impairment will not exhibit any congruous segments of movement and will therefore receive higher severity and deviation scores. Further, it is expected that subjects with severe impairment will demonstrate a diminished ability to refine movement, shown through less variability in endpoint orientation. This study suggests that the specific sub-movements that remain congruous to healthy movement, in cases of mild and severe impairment, can allow quantification of impairment severity and inform targets for rehabilitation.
## Methods
### Participants
Participants were recruited from the Medstar National Rehabilitation Hospital stroke patient population. Patients' stroke diagnoses were confirmed via Magnetic Resonance Imaging (MRI). All subjects completed written informed consent forms. This protocol was approved by the Medstar Rehabilitation Research Institutional Review Board under protocol number [947339-3].
Persons with stroke that were (1) at least eighteen years of age, (2) able to complete a reach-to-target task, (3) able to consent to the study and experienced no significant cognitive deficits (Mini-Mental State Examination score \(>24\)), and (4) six or more months post thromboembolic non-hemorrhagic hemispheric or hemorrhagic hemispheric stroke were recruited for this study.
Potential subjects were excluded if (1) they were less than 18 years of age, (2) stroke occurred less than 6 months before participation or affected both hemispheres, (3) stroke involved the cerebellum, brainstem, or did not spare primary motor and dorsal premotor cortices, (4) there was a history of craniotomy, neurological disorders (other than stroke), cardiovascular disease, or active cancer or renal disease, (5) there was a history of orthopedic injury or disorder affecting shoulder or elbow function, or (6) they had had a seizure or taken anti-seizure medication in the past 2 years.
### Clinical Measures
Demographics for participants with mild and severe impairments after stroke are detailed in Tables I and II.
Subjects underwent a Mini-Mental State Examination [48] to ensure ability to consent to all sections of the study and complete tasks as instructed. Since this study features a functional reaching task for the upper extremity only, assessment of recovery was limited to the Upper Extremity Motor Function section of the Fugl-Meyer Assessment. The Upper Extremity Fugl-Meyer (UEFM) test was used as a criterion for classifying post-stroke impairment as either mild or severe upper limb impairment. Classifications for impairment severity have been proposed in prior literature based on a range of Motor Function scores, [45, 46]. The Motor Function domain is divided into the following: Upper Extremity (scored out of 36), Hand (scored out of 10), Wrist (scored out of 14), and Coordination/Speed (scored out of 6) for a total of 66 indicating full performance of expected motor function for the upper limb [45]. Subjects that retain partial arm function and voluntary hand function, defined by an ability to grasp and release, were classified as mild (UEFM score: 38 - 66, n = 15). Subjects that (1) could not complete the Hand (/10) and Wrist (/14) sections, (2) could not display at least one finger response to upper extremity reflex tests (/4), and (3) demonstrated an inability to actively extend the paretic wrist and fingers at least 20 degrees past neutral, were classified in the severe impairment group (UEFM score: 0 - 37, n = 14).
### Experimental Setup
Prior to the first data collection session, subjects were familiarized with the reaching task and measurements of the chair height and distance of the chair from the table were recorded. These measurements were adjusted to ensure the subject sat as close to the table as was comfortable and
maintained a 90 degree resting angle at the elbow. The subject was fitted with trunk restraints to reduce appreciable trunk involvement in the forward reaching movement [47].
7 mm diameter IRED optical markers were placed at the dorsal surface of each hand as appropriate given each subject's movement capability and resting hand position. A single target sensor was placed at 80% of the maximum reach of each individual subject. Placing the target within arm's reach rather than at maximum reach capacity ensured the subject would experience typical and moderate shoulder and elbow contribution and minimize uncomfortable or compensatory movements [51]. Hand path kinematics were recorded using the Optotrak Certus motion capture system (Northern Digital Inc., Waterloo, Ontario, Canada) at a sampling frequency of 300 Hz, and the origin was calibrated at the front edge and center of the table at the beginning of each set of ten reaches. Optical tracking of upper extremity movement allows the collection of limb trajectory in terms of 3D Cartesian coordinates. Optotrak software was used to digitize the x-y plane in front of the subject, and all movements were recorded with six-degree-of-freedom Optotrak cameras mounted around and above the work-space. The relative position of the subject and the reaching workspace is depicted in Figure 1.
Each subject completed a passive ideal hand path test in which the hand was passively moved to the target and back to represent movement without muscle activity. This measurement was used to verify and troubleshoot the collection of all positional data between the starting position and the target. Each subject completed two sets of the simple reaching test on two separate days with both the paretic and nonparetic arms. The forward reaching task was initiated after a "Go" signal was indicated either in text on a screen or a light box placed within sight of the subject. Subjects were prompted with "When the 'Go' signal appears, quickly reach out to touch the target" to encourage rapid forward movement. Each testing session consisted of ten "Go" signals delivered at random intervals to ensure subjects did not anticipate movement initiation.
### Data Analysis
Four reference curves were created from reach-to-target movements performed by two able-bodied volunteers. Able-bodied persons were recruited from the Medstar Rehabilitation Hospital volunteer population. Volunteers were asked to verify they had no diagnosis of a neurological or musculoskeletal disorder that could potentially influence movement control or reaching. In order to reduce effects of hand dominance on the reference curves, one right-hand dominant and one left-hand dominant volunteer were selected. Three-dimensional position data was collected from both right and left limbs, first at a steady pace and then at a rapid pace. The reference data set was used to create a mean healthy movement stereotype against which to analyze movements in the mild and severe impairment groups. The reference curves were compared against prior research to ensure the curves were an appropriate representation of able-bodied movement. The values of the mean velocity, peak velocity, and time to peak velocity of our reference curves and values from other studies are compiled in Table III. Reference curves were used only for the Procrustes trajectory analysis; each subject's velocity, target accuracy, and orientation variability were compared between the individual's paretic and non-paretic limbs.
Individual trials of reach-to-target movements were extrapolated from raw kinematic data. The beginning of a movement was defined by displacement from the starting position and a non-zero positive velocity. The completion of a movement was defined by a local maximum in position immediately followed by a non-zero negative velocity. Reach detection was confirmed by visual inspection of each trial. Two sets of consecutive reaches were averaged to create a composite curve for each individual consisting of 20 trials. The reaching trajectory data was filtered by applying a low-pass fourth order Butterworth filter with a cut-off frequency of 50 Hz to account for minor variations in individual movement.

Fig. 1: **Experimental Protocol and Reaching Workspace** All data collection was conducted at the Mechanisms of Therapeutic Rehabilitation (MORR) Lab at Medstar National Rehabilitation Hospital in Washington, DC. Markers placed on the hand dorsum are indicated in red. Produced 3D positional data was evaluated with custom-written MATLAB scripts to extract individual curves and kinematic metrics such as movement variability, peak velocity, time to peak velocity, and target error.
The reference trajectories were down-sampled to create ten fractions of the overall movement that were the same length as fractions of trajectories with motor impairments, in order to produce a dissimilarity profile of the overall reaching movement. Next, curve fragments composed of 35 time-points across the reference and subject curves were compared. This required a novel modified Procrustes analysis that advanced point for point along the length of the subject and reference curves to identify segments that were congruent between both. In this particular application, the curves were not scaled, since capacity to reach is specific to each subject. The index of dissimilarity, the sum of the squared Procrustes distances between corresponding elements in both curves, represents how incongruous the two segments may be, and was scaled to produce a value between 0 and 1, where 0 represents congruence between curves and 1 represents complete dissimilarity.
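A sketch of the sliding-window comparison described above, using an unscaled orthogonal Procrustes fit on 35-point windows (here in Python rather than the authors' MATLAB). The window bookkeeping and the normalization toward the interval [0, 1] are plausible illustrative choices and may differ from the authors' implementation:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def dissimilarity_map(subject, reference, width=35):
    """Unscaled Procrustes dissimilarity between every pair of `width`-point
    windows of the subject and reference curves (each an N x 3 array)."""
    def residual(a, b):
        a = a - a.mean(axis=0)  # translate only; the curves are not rescaled,
        b = b - b.mean(axis=0)  # since reach capacity is subject-specific
        R, _ = orthogonal_procrustes(a, b)
        return np.sum((a @ R - b) ** 2)

    ns = len(subject) - width + 1
    nr = len(reference) - width + 1
    D = np.array([[residual(subject[i:i + width], reference[j:j + width])
                   for j in range(nr)] for i in range(ns)])
    return D / D.max() if D.max() > 0 else D  # crude rescaling; 0 = congruent windows
```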
#### Kinematic Analysis
The discrete kinematic metrics of interest for this study are (1) peak velocity and time to peak velocity and (2) target accuracy. The continuous metrics of interest for this study are (1) endpoint orientation and (2) curve shape. The variables were chosen to represent movement strategy and performance, as they are often reported in relation to therapy outcomes. The temporal location of the discrete kinematic landmarks during reach duration was used to characterize curve shapes highlighted by the Procrustes Analysis. The captured position data were transferred to MATLAB (The MathWorks Inc) software for analysis with custom-written scripts.
For the purposes of representing online movement refinement, we utilized the recommended method of a fixed local coordinate system with respect to the work-space [50, 33, 60]. The y axis extends directly forward and represents the primary distance covered during a reaching task. The x axis extends laterally from the subject and the z axis extends inferior to superior relative to the subject (Figure 2).
The rotations of the distal coordinate system are described in terms of the proximal coordinate system. The first rotation is about the z-axis (yaw), the second about the intermediate x-axis (pitch), and the third about the longitudinal axis, i.e. the y-axis of the moving coordinate system (roll). The rotation matrix in Figure 2 describes the yaw-pitch-roll sequence of rotations; this was computed using consecutive data points for each incremental change in mean position during the forward reach. \(\psi\) represents the yaw angle, \(\theta\) represents the pitch angle, and \(\phi\) represents the roll angle [58]. The definitions of these angles as well as other kinematic metrics of interest are listed in Table IV.
Velocity was extrapolated from the raw mean position data using the forward/backward/central differences in position data. Missing marker data were found for less than 10% of individual trials; missing data were corrected by extrapolating from adjacent position values. A low-pass fourth order Butterworth filter with a cutoff frequency of 5 Hz was applied to the velocity data to reduce noise and distortion. The values of mean velocity, peak velocity and the time-point where peak velocity was achieved were recorded for all subject data.
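A minimal sketch of this velocity pipeline (again in Python rather than the authors' MATLAB), assuming hand positions sampled at 300 Hz; differentiation uses central differences via `np.gradient`, and the smoothing follows the stated fourth-order, 5 Hz low-pass Butterworth filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def speed_profile(position, fs=300.0, cutoff=5.0, order=4):
    """Smoothed speed profile from an N x 3 array of hand positions sampled at fs Hz."""
    dt = 1.0 / fs
    velocity = np.gradient(position, dt, axis=0)   # central differences
    speed = np.linalg.norm(velocity, axis=1)
    b, a = butter(order, cutoff / (fs / 2.0))      # low-pass Butterworth, 5 Hz cutoff
    smooth = filtfilt(b, a, speed)                 # zero-phase filtering
    peak = int(np.argmax(smooth))
    return smooth, smooth[peak], peak * dt         # profile, peak speed, time of peak
```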
Finally, each trial of forward reaching was compared to the actual location of the target as recorded for each subject. The error tolerance was adjusted to account for reaches landing within the 4 square inches of surface area of the target pad. Positive values of target error indicate when a subject stopped movement (identified by a local maximum in displacement and subsequent movement in the negative y direction) before or at the target sensor. Negative values of target error represent when the subject has overshot or moved past the target.
_Statistical Analysis:_ Because there were fewer than 50 subjects in either impairment group, an Anderson-Darling test for normality was performed on the kinematic metrics calculated from endpoint data [59]. For the purposes of consistency in this paper, all statistical analyses were performed using independent t-tests and N-way ANOVA. The one-way ANOVA is mathematically equivalent to an independent t-test when applied to only two groups [77]. Kinematic metrics related to target error, peak velocity, and the time point where peak velocity occurred were analyzed independently for differences due to impairment severity with a one-way ANOVA. Individual discrete kinematic measurements were compared in a two-way ANOVA against severity, whether the paretic limb is also the dominant limb, and which axis primarily contributed to the rotation. Separate two-way ANOVAs were performed to analyze results of the modified Procrustes Analysis to interpret the significance of dissimilarity indices between mild and severe impairment groups. Kinematic measurements that appeared significantly different between the mild and severe impairment groups were then used to compute preliminary RSDI scores.
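The normality checks and group comparisons can be reproduced with SciPy; a short sketch with placeholder samples standing in for the per-group kinematic metrics (the values below are random stand-ins, not study data):

```python
import numpy as np
from scipy.stats import anderson, f_oneway

rng = np.random.default_rng(1)
mild = rng.normal(0.45, 0.10, size=15)    # placeholder peak velocities, mild group
severe = rng.normal(0.30, 0.10, size=14)  # placeholder peak velocities, severe group

# Anderson-Darling normality check for each group
for label, sample in (("mild", mild), ("severe", severe)):
    result = anderson(sample, dist="norm")
    print(label, result.statistic, result.critical_values[2])  # vs. the 5% critical value

# One-way ANOVA, equivalent to an independent t-test for two groups
F, p = f_oneway(mild, severe)
print(f"F = {F:.2f}, p = {p:.3f}")
```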
## Results
All individual velocity profiles and Procrustean plots show subject exemplars from both the mild and severe impairment groups. Discrete kinematic metrics related to velocity, orientation, and target error are reported in Tables V and VI, and rotation/reflection, scaling, and translation vector quantities are reported in Tables VIII and IX.
### Kinematic Findings
The Anderson-Darling test for normality indicated that both the peak velocity [Mild: \(p=0.25\), Severe: \(p=0.73\)] and the time location of peak velocity [Mild: \(p=0.30\), Severe: \(p=0.16\)] were normally distributed. For subjects with mild impairment, peak velocities occurred later in the movement, beyond the first third of reach progression. Some velocity profiles of subject exemplars are depicted in Figure 3, along with the four reference curves.
There was no significant influence of severity on subject ability to complete each set of ten reaches, nor on the time needed to reach the peak velocities. However, a One-Way ANOVA indicated a significant effect of group on the tendency to undershoot the target (\(p=0.0214\)). The severe group tended to undershoot the target with greater frequency than the mild impairment group. For subjects with severe impairment, peak velocities were lower in magnitude and occurred during movement extremes (Figure 3). Results of statistical analyses of kinematic metrics related to accuracy and velocity are reported in Table VII. The Anderson-Darling test for normality indicated that the data collected on target error [Mild: \(p=0.69\), Severe: \(p=0.36\)] was also normally distributed.
The difference between the mild and severe impairment groups in the time location of the peak velocity was analyzed with a One-Way ANOVA with a single degree of freedom, resulting in a p-value of 0.5928. In contrast, the difference between the mean velocities of the mild and severe impairment groups was found to be significant, with a p-value of 0.0173. The severe impairment group tended toward more angular variability in the hand's orientation about the roll (y) axis toward movement completion. The control reach curve shows some rotation in orientation occurring about all three axes throughout the movement, as shown in Figure 4. In contrast, in both the mild and severe impairment groups, rotation could not be adequately decomposed into the yaw and pitch angles. The peak roll angles achieved during movement [Mild: 86.34, Severe: 111.22] differed significantly between the groups, with a p-value of 0.0202.
Fig. 2: **Orientation of limb endpoint in 3-D space** An intrinsic coordinate system centered at the hand was used to quantify movement refinement through the reach.
### Modified Procrustes Analysis Findings
Dissimilarity indices were calculated across ten equally sized segments of each subject curve and compared with both the steady paced and rapid paced reference curves. Visual inspection of these heatmaps indicates higher dissimilarity as movement ends in both mild and severe groups. When the groups are compared to the rapid paced reference, there is greater dissimilarity in the last three segments of movement. Though the rapid reference curve resulted from reference subjects being given the same prompt as the stroke subjects (i.e. to move as quickly as they can), this does not result in lower dissimilarities (Figure 5).

Fig. 3: **Subject Exemplars of Velocity Profiles.** Left: The neurally intact reference velocity curves at a steady pace and a rapid pace, collected for both left-handed and right-handed movement. Middle: Three subject exemplars with mild impairment, demonstrating a delayed peak velocity. Right: Three subject exemplars with severe impairment, demonstrating peak velocities at the extremes of movement. (Note the extremely different scale of the mild impairment case in the bottom row.)
When the complete subject reach path was compared to the complete control reach path with a One-Way ANOVA, there was no significant effect of severity on curve dissimilarity between the mild and severe groups (\(p=0.62\)). The Procrustes Method was then modified to compare segments, defined as 35 consecutive time-points, by advancing along the mean individual and control reach curve point for point. In all cases of mild impairment, some of which are depicted in Figure 6, the initial subject kinematic behavior appears most congruous to the initial control kinematic behavior. Regardless of overall response time, in both two- and three-dimensional representations of movement, movement initiation proceeds comparably to the healthy control curve. The modified Procrustes analysis showed the initial impulse control phase to be evident and preserved in stroke survivors with mild functional impairment but not with severe impairment. The portion of movement in the mild impairment group that replicated the control movement not only occurred in the initial phase of movement, but also occurred before the peak velocity was achieved.
Table X details the analysis of variance in the rotation, scaling, and translation transformation variables found through Procrustes analysis of the most congruent subject and reference segments. The mean scaling factors when compared to the smooth reference curve [Mild: 3.62, Severe: 2.49], and rapid reference curve [Mild: 2.04, Severe: 1.27] all indicate that the impairment groups demonstrated stretched movement, i.e. the subjects took longer amounts of time than the reference to complete the specific segment of movement. Subjects demonstrated an ability to prioritize and modulate speed of movement by decreasing the time required to complete the specific segment of movement. The difference between the mild and severe impairment groups produced a p-value of 0.0397.
N-Way ANOVA tests were performed to analyze the influence of severity and dominance on the time-location of the congruent segments in the subject and reference, and on the time-duration of the subject movement that appeared congruent to the reference. The complete analysis of the main effects and interaction effects on the location of congruent subject and reference segments is detailed in Tables XI and XII. A three-way ANOVA was performed assessing the significance of dissimilarity indices for the following factors: impairment severity, the location in the subject behavior where the curve dissimilarity occurs, and the location of the control behavior that is most likely preserved in subject behavior. While none of the main effects reached significance, the two-way interactions of the three factors showed significance, as detailed in Table XII. Severity and the preservation of movement initiation do not show significant interaction effects (\(p=0.4656\)). While there was no independent effect of the paretic limb also being the dominant limb (\(p=0.6753\)) when the rapid reference curve is used for comparison, hand dominance contributes to a significant difference between populations when the reference motivation is to produce steady and smooth movement (\(p=0.0107\)). The population marginal means of the groups of mild impairment with a paretic non-dominant limb and severe impairment with a paretic non-dominant limb are significantly different. The population marginal means for both groups of impairment where the paretic limb is the dominant limb did not have any significant differences.
The analysis of the main effects and interaction effects on the length of the congruent subject segment is detailed in Table XIII. A three-way ANOVA was performed assessing the significance of dissimilarity indices for the following factors: impairment severity, the location in the subject behavior where the curve dissimilarity occurs, and whether the paretic arm was also the dominant arm. The impairment severity classification of the subject had a significant main effect on the length of congruence of the subject segment, with a p-value of 0.0342. While the other main effects showed no significance, the two-way interaction of severity and arm dominance had a p-value of 0.0364.
also included in the computation of the dissimilarity sub-score.
The preliminary RSDI sub-scores computed using these metrics were classified in terms of likely rehabilitation goals. Subjects with higher severity indices and lower dissimilarity indices due to low mean velocities, low peak angular values, and high target error may benefit from a classification that prioritizes speed-focused goals. Such subjects were given a "Speed Emphasis" classification. Alternatively, subjects with lower severity indices and higher dissimilarity indices were scored as such due to high dissimilarity to the reference movement, or elongated movement behaviors, implying a need for "Strength Emphasis" to produce stable movements. Subjects with comparable severity and dissimilarity indices were classified as "Combined Emphasis". These classifications compared with the UEFM mild and severe classifications are cross tabulated in Table XIV.

Fig. 5: **Dissimilarity Indices** Heatmaps show mild and severe impairment groups compared to the steady and rapid reference curves. All groups show increased dissimilarity during movement completion, more so in the severe groups for both reference cases.
The first row in Table XIV shows that of the 15 subjects classified as mildly impaired according to the UEFM test, 10 received a Strength Emphasis and 5 received a Combined emphasis. This is consistent with the clinical observation that persons with mild impairment continue to be able to reach forward quickly while compensating for muscle weakness and loss of agility. The second row indicates that of the 14 subjects classified as severely impaired by the UEFM test, 5 can be reclassified as Speed Emphasis, 7 as Strength
Emphasis, and 2 as Combined Emphasis.
## Discussion
During the reach-to-target movement performed in this study, the hand moves radially as well as forward to reach the target, which is centered in front of the subject. The position of the hand as it moves through space was captured as endpoint data representing the movement of the arm. We can extrapolate kinematic metrics such as mean velocity, peak velocity, the time required to achieve peak velocity, and target accuracy from this endpoint data. Additionally, the position of the hand over time can be compared to reference datasets in order to quantify deviation of the arm during a reach-to-target movement.
The subjects classified as mildly impaired in this study achieved higher mean velocities than their severely impaired counterparts. Another quantity found to differ significantly between impairment groups was the target accuracy. Higher target error may be correlated with a diminished ability to sub-correct movements during the final phase of movement, where precision and accuracy are prioritized. Earlier motor control decision-making prioritizes speed and minimization principles. The data thus lends some support to the observation that response time and target accuracy are disrupted after stroke but not the physical capability of ballistic movement. Movement is modulated differently during reaching, with every particular functional limitation requiring an investigation of which kinematic metrics should be incorporated when deciding on the best therapeutic interventions.
We found the range of roll angles achieved by the arm to also differ significantly between impairment groups. Mildly impaired individuals demonstrated higher peak roll angles, whereas individuals with severe impairments had much lower rotation around the y-axis to achieve a lateral-to-medial movement in front of the subject. This could potentially imply a phase of movement where movement is constrained by maladaptive joint coupling, such as a compensatory adaptation between the elbow and shoulder, with joint movement becoming inflexible during the forward movement. This may be due to range of motion being constrained while speed is prioritized over accuracy or online movement correction. Clinically, these findings could translate to the development of tasks where the target is placed elsewhere in the three-dimensional space in front of the subject for more effective reaching practice, e.g. a ball suspended in the air, targets placed radially equidistant, etc. A particular subject may need to be motivated not by reaction time, but by following a pre-drawn path as precisely as possible.
The dissimilarity indices of specific events within the reaching task are of particular interest, and imply that some movement behavior is preserved in mild impairment that is disrupted with severe impairment. A most interesting finding of the modified Procrustes analysis is that severity has a significant interaction effect, along with hand dominance, on whether a subject replicates reference behavior while initiating reach or at some point during the reach task. Individuals with mild impairments replicated reference behavior when beginning movement. The relative timing of the peak velocity within the first phase of movement follows prior literature describing the initiation of movement being based on anticipation of the task and not sensory feedback. Applying dissimilarity indices to the overall movement may represent an overall effect of impairment severity. The modified Procrustes method, alternatively, allowed dissimilarity indices to be computed across segments of the entire movement. Both subjects with mild and severe impairment showed that completion movements were not similar to the reference data, though they deviated more from the reference in the case of severe impairment. In the clinical setting, a subject demonstrating congruous movement initiation may focus on precision exercises and visual feedback incorporation, while a subject demonstrating congruous movement completion may practice speed exercises and need not emphasize target accuracy.
## Conclusions
While rehabilitation efforts can be effectively informed by clinical observation in the case of individuals with mild functional impairments, individuals exhibiting severe impairments require a deeper investigation of when and how deficits emerge. The tri-phasic activation pattern of upper extremity movement and the behavioral model of rapid movement, error correction, and precision control imply that movement may be disrupted in different ways in different parts of the reach-to-target task. The use of endpoint kinematic data does not allow for decomposition of rotation matrices to identify specific joint contributions; however, it can be used to identify differences in velocity, accuracy, smoothness, and deviation from reference movements. Although the upper extremity is neither cyclical nor stereotyped in its movement like the lower extremity, measurements of gait deviation can guide analogous measures of severity and dissimilarity for the arm during functional sub-movements such as the reach and grasp cycle.
The Modified Procrustes method produced intriguing results that are supported by clinical observations; namely, that mild impairment does not exhibit a disruption in the ability to initiate rapid movement. By comparing curved paths point by point, clinicians may pinpoint when a disruption in movement occurs. Taking into account how the overall limb is oriented when this disruption occurs could then allow specific joint measurements to be taken only at that point rather than throughout the movement. This creates the possibility for movement tracking to remain simple yet effective, so that it can be incorporated into the clinical setting without increasing patient burden.
The RSDI score proposed in this paper can be applied to any patient position data, provided the clinician also has access to reference datasets. The RSDI can thus also be expanded to other movements, if such movements have also been recorded from healthy volunteers. In this way, the RSDI score can easily be adapted and modified to a given clinician's protocol, and provide insight when creating rehabilitation goals. It would also be worthwhile to expand the methods explored in this paper to multi-joint models of the arm to objectively identify the presence of synergies or compensatory movements that may then be incorporated into rehabilitative practice. Collecting multi-joint data by centering visual markers on each limb segment would allow for characterization of joint contributions to movement deficits. Although the RSDI preliminary results only include a few metrics of upper extremity movement, we hope in future studies it can continue to be refined and expanded to include other functional movements. This study did not account for differences in limb dominance, an important consideration for future studies as limb dominance certainly has an impact on rehabilitation and quality of life. Another limitation of the current study
The upper extremity presents a rich platform for studying the motor system and how it is affected by the physical world around it and the internal world that controls and communicates through it. Through advancing the kinematic questions explored in this study and understanding the specific control parameters and factors that constrain and alter function, we hope that the impairment and functional limitations correlated with stroke may be minimized and thus prevented from translating to disability in social functioning. By creating a comprehensive and objective clinical tool to select rehabilitative strategies that can serve each individual's specific needs, we anticipate the impact of stroke on disability and quality of life may be appreciably reduced.
## Acknowledgments
We are indebted to all who participated in this study. We would like to thank Drs. Rachael Harrington and Evan Chan for guidance with experiment design and assistance with data collection, Dr. Kathryn Laskey for guiding and reviewing the statistical analyses, and Dr. Qi Wei and Dr. Joseph Majdi for editing and reviewing this paper. Finally, we are grateful to the MedStar National Rehabilitation Hospital research department for facilitating recruitment for this project.
|
2307.12043 | Euler and the Duplication Formula for the Gamma-Function | We show how the formulas in paper Variae considerationes circa series
hypergeometricas written by Euler imply the duplication formula for the
Gamma-function. This paper can be seen as an Addendum to a previous paper by
the author. | Alexander Aycock | 2023-07-22T10:26:52Z | http://arxiv.org/abs/2307.12043v1 | # Euler and the Duplication Formula for the Gamma-Function
###### Abstract
We show how the formulas in Euler's paper "Variae considerations circa series hypergeometrics" [4] imply Legendre's duplication formula for the \(\Gamma\)-function. This paper can be seen as an Addendum to [2].
## 1 Introduction
In [2], we focused on a function defined by Euler in [4] as:
\[\Gamma_{E}(x):=a\cdot(a+b)\cdot(a+2b)\cdot(a+3b)\cdot\dots\cdot(a+(x-1)b)\quad \text{for}\quad a,b>0, \tag{1}\]
which we showed to be continuable to non-integer values of \(x\) via the expression:
\[\Gamma_{E}(x)=\frac{b^{x}}{\Gamma\left(\frac{a}{b}\right)}\cdot\Gamma\left(x +\frac{a}{b}\right). \tag{2}\]
Here, \(\Gamma(x)\) means the ordinary \(\Gamma\)-function defined as:
\[\Gamma(x):=\int\limits_{0}^{\infty}e^{-t}t^{x-1}dt\quad\text{for}\quad\text{ Re}(x)>0. \tag{3}\]
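As a quick numerical sanity check of equation (2) (our own addition, not part of Euler's argument), one can compare the product definition (1) with the \(\Gamma\)-function expression for a few integer values of \(x\); the sketch below uses Python's standard math module with the illustrative values \(a=3\), \(b=2\).

```python
from math import gamma, isclose

a, b = 3.0, 2.0  # illustrative choices with a, b > 0

def gamma_e_product(x):
    """Equation (1): a*(a+b)*...*(a+(x-1)b) for integer x >= 1."""
    result = 1.0
    for k in range(x):
        result *= a + k * b
    return result

def gamma_e_continued(x):
    """Equation (2): b**x * Gamma(x + a/b) / Gamma(a/b)."""
    return b**x * gamma(x + a / b) / gamma(a / b)

for x in range(1, 6):
    assert isclose(gamma_e_product(x), gamma_e_continued(x), rel_tol=1e-9)
```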
Equation (1) enabled us to determine the constant \(A\) in the asymptotic expansion for the function \(\Gamma_{E}\) found by Euler via the Euler-Maclaurin summation formula. The asymptotic expansion reads:
\[\Gamma_{E}(x)\sim A\cdot e^{-x}\cdot(a-b+bx)^{\frac{a}{b}+x-\frac{1}{2}}\quad \text{for}\quad x\to\infty. \tag{4}\]
We found the constant \(A\) to be
\[A=\frac{\sqrt{2\pi}}{\Gamma\left(\frac{a}{b}\right)}\cdot e^{1-\frac{a}{b}} \cdot b^{\frac{1}{2}-\frac{a}{b}}. \tag{5}\]
In this paper, we intend to use this result and more of Euler's formulas from the same paper to show that they imply the Legendre duplication formula for the \(\Gamma\)-function, i.e., the relation
\[\Gamma(x)=\frac{2^{x-1}}{\sqrt{\pi}}\cdot\Gamma\left(\frac{x}{2}\right)\cdot \Gamma\left(\frac{x}{2}+\frac{1}{2}\right). \tag{6}\]
## 2 Euler's other Functions
### Euler's Definition
Aside from the function \(\Gamma_{E}\), in his paper [4], Euler introduced two other related functions:
\[\begin{split}\Delta(x)&=a\cdot(a+2b)\cdot(a+4b) \cdot(a+6b)\cdot\dots\cdot(a+(2x-2)b),\\ \Theta(x)&=(a+b)\cdot(a+3b)\cdot(a+5b)\cdot\dots \cdot(a+(2x-1)b).\end{split} \tag{7}\]
As was the case for \(\Gamma_{E}\) (equation (1)), Euler's definition is only valid for integer values of \(x\), but by using the ideas from [2], we could extend the definition to real numbers.
### Asymptotic Expansions of these Functions
Furthermore, Euler also found asymptotic expansions for his functions \(\Delta\) and \(\Theta\). They are:
\[\begin{split}\Delta(x)&\sim B\cdot e^{-x}\cdot(a-2b+2bx)^{\frac{a}{2b}+x-\frac{1}{2}}\\ \Theta(x)&\sim C\cdot e^{-x}\cdot(a-b+2bx)^{\frac{a }{2b}+x},\end{split} \tag{8}\]
where \(B\) and \(C\) are constants resulting from the application of the Euler-Maclaurin summation formula and the asymptotic expansions are valid for \(x\to\infty\).
### Relation among the Constants
Euler was not able to find any of the constants \(A\), \(B\) and \(C\). But, using the general relations among his functions \(\Gamma_{E}\), \(\Delta\) and \(\Theta\) and the respective corresponding asymptotic expansions, he found the following relations:
\[A=\frac{B\cdot C}{\sqrt{e}} \tag{9}\]
and
\[B=C\cdot k\cdot\sqrt{e} \tag{10}\]
with \(k=\Delta\left(\frac{1}{2}\right)\). As we will show in the next section, these relations imply the Legendre duplication formula (equation (6)).
## 3 Derivation of the Legendre Duplication Formula from Euler's Formulas
As Euler remarked himself in [4], equations (9) and (10) tell us that we only need to find one of the constants \(A\), \(B\) and \(C\), since the remaining two can then be calculated from it. Since we found the value of \(A\) (equation (5)), we could do precisely that. But for our task at hand, we need to find the value of \(k\) first.
### Evaluation of the Constant \(k\)
To evaluate \(k=\Delta\left(\frac{1}{2}\right)\), we note that we just have to make the substitution \(b\mapsto 2b\) in equation (1) such that the expression for \(\Gamma_{E}\) goes over into the expression for \(\Delta\) in equation (7). Making the same substitution in equation (2), we arrive at the following expression for \(\Delta(x)\):
\[\Delta(x)=\frac{(2b)^{x}}{\Gamma\left(\frac{a}{2b}\right)}\cdot\Gamma\left(x+ \frac{a}{2b}\right).\]
Therefore, for \(x=\frac{1}{2}\)
\[k=\Delta\left(\frac{1}{2}\right)=\frac{(2b)^{\frac{1}{2}}}{\Gamma\left(\frac{ a}{2b}\right)}\cdot\Gamma\left(\frac{1}{2}+\frac{a}{2b}\right). \tag{11}\]
### The Legendre Duplication Formula
Having found \(k\), let us use equations (9) and (10) to derive the Legendre duplication formula (equation (6)). Substituting the value of \(C\) obtained from (10) into (9), we arrive at the equation:
\[A=\frac{B^{2}}{\Delta\left(\frac{1}{2}\right)}e^{-1}. \tag{12}\]
Next, we note that since \(\Delta(x)\) is obtained from \(\Gamma_{E}(x)\) by the substitution \(b\mapsto 2b\), the value of the constant \(B\) is obtained in the same way from \(A\) and reads:
\[B=\frac{\sqrt{2\pi}}{\Gamma\left(\frac{a}{2b}\right)}\cdot(2b)^{\frac{1}{2}- \frac{a}{2b}}\cdot e^{1-\frac{a}{2b}}. \tag{13}\]
Thus, substituting the respective values for \(A\) (equation (5)), \(B\) (equation (13)) and \(k\) (equation (11)), equation (12) becomes:
\[\frac{\sqrt{2\pi}}{\Gamma\left(\frac{a}{b}\right)}\cdot e^{1-\frac{a}{b}}\cdot b ^{\frac{1}{2}-\frac{a}{b}}=\frac{\left(\frac{\sqrt{2\pi}}{\Gamma\left(\frac{ a}{2b}\right)}\cdot(2b)^{\frac{1}{2}-\frac{a}{2b}}\cdot e^{1-\frac{a}{2b}} \right)^{2}}{\frac{(2b)^{\frac{1}{2}}}{\Gamma\left(\frac{a}{2b}\right)}\cdot \Gamma\left(\frac{1}{2}+\frac{a}{2b}\right)}\cdot e^{-1}.\]
Most terms cancel each other, and the equation simplifies to:
\[\frac{1}{\Gamma\left(\frac{a}{b}\right)}=\frac{\sqrt{2\pi}\cdot 2^{\frac{1}{2}- \frac{a}{b}}}{\Gamma\left(\frac{a}{2b}\right)\cdot\Gamma\left(\frac{1}{2}+\frac {a}{2b}\right)}.\]
Finally, writing \(x\) instead of \(\frac{a}{b}\) and solving this equation for \(\Gamma(x)\), after a little simplification, we arrive at the relation:
\[\Gamma(x)=\frac{2^{x-1}}{\sqrt{\pi}}\cdot\Gamma\left(\frac{x}{2}\right)\cdot \Gamma\left(\frac{x+1}{2}\right),\]
which is the Legendre duplication formula for the \(\Gamma\)-function (equation (6)), as we wanted to show.
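Although the derivation above is purely analytic, the result is easy to confirm numerically; the following minimal check (our own addition, using Python's standard math module) verifies the duplication formula at a few sample points.

```python
from math import gamma, sqrt, pi, isclose

def duplication_rhs(x):
    """Right-hand side of the Legendre duplication formula (equation (6))."""
    return 2**(x - 1) / sqrt(pi) * gamma(x / 2) * gamma(x / 2 + 0.5)

for x in (0.5, 1.0, 2.3, 7.0):
    assert isclose(gamma(x), duplication_rhs(x), rel_tol=1e-9)
```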
## 4 Conclusion
In this note we showed that Legendre's duplication formula, i.e., equation (6), follows from Euler's formulas found in his paper [4]. Indeed, the Legendre duplication formula could also have been shown by Euler himself, if he had set this task for himself, as we argued in more detail in [2]. Furthermore, Euler's ideas that we explained in this and the before-mentioned paper can be generalized to show the multiplication formula for the \(\Gamma\)-function, i.e., the formula
\[\Gamma(x)=\sqrt{\frac{n}{(2\pi)^{n-1}}}\cdot n^{x-1}\cdot\Gamma\left(\frac{x }{n}\right)\Gamma\left(\frac{x+1}{n}\right)\Gamma\left(\frac{x+2}{n}\right) \cdot\ldots\cdot\Gamma\left(\frac{x+n-1}{n}\right).\]
This formula is attributed to Gauss, who stated and proved it in [5]. But it was given by Euler (in a different form, expressed via Beta functions) in [3], as we demonstrated in [1].
|
2306.13074 | Iterative Scale-Up ExpansionIoU and Deep Features Association for
Multi-Object Tracking in Sports | Deep learning-based object detectors have driven notable progress in
multi-object tracking algorithms. Yet, current tracking methods mainly focus on
simple, regular motion patterns in pedestrians or vehicles. This leaves a gap
in tracking algorithms for targets with nonlinear, irregular motion, like
athletes. Additionally, relying on the Kalman filter in recent tracking
algorithms falls short when object motion defies its linear assumption. To
overcome these issues, we propose a novel online and robust multi-object
tracking approach named deep ExpansionIoU (Deep-EIoU), which focuses on
multi-object tracking for sports scenarios. Unlike conventional methods, we
abandon the use of the Kalman filter and leverage the iterative scale-up
ExpansionIoU and deep features for robust tracking in sports scenarios. This
approach achieves superior tracking performance without adopting a more robust
detector, all while keeping the tracking process in an online fashion. Our
proposed method demonstrates remarkable effectiveness in tracking irregular
motion objects, achieving a score of 77.2% HOTA on the SportsMOT dataset and
85.4% HOTA on the SoccerNet-Tracking dataset. It outperforms all previous
state-of-the-art trackers on various large-scale multi-object tracking
benchmarks, covering various kinds of sports scenarios. The code and models are
available at https://github.com/hsiangwei0903/Deep-EIoU. | Hsiang-Wei Huang, Cheng-Yen Yang, Jiacheng Sun, Pyong-Kun Kim, Kwang-Ju Kim, Kyoungoh Lee, Chung-I Huang, Jenq-Neng Hwang | 2023-06-22T17:47:08Z | http://arxiv.org/abs/2306.13074v5 | # Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports
###### Abstract
Deep learning-based object detectors have driven notable progress in multi-object tracking algorithms. Yet, current tracking methods mainly focus on simple, regular motion patterns in pedestrians or vehicles. This leaves a gap in tracking algorithms for targets with nonlinear, irregular motion, like athletes. Additionally, relying on the Kalman filter in recent tracking algorithms falls short when object motion defies its linear assumption. To overcome these issues, we propose a novel online and robust multi-object tracking approach named deep ExpansionIoU (Deep-EIoU), which focuses on multi-object tracking for sports scenarios. Unlike conventional methods, we abandon the use of the Kalman filter and leverage the iterative scale-up ExpansionIoU and deep features for robust tracking in sports scenarios. This approach achieves superior tracking performance without adopting a more robust detector, all while keeping the tracking process in an online fashion. Our proposed method demonstrates remarkable effectiveness in tracking irregular motion objects, achieving a score of 77.2% HOTA on the SportsMOT dataset and 85.4% HOTA on the SoccerNet-Tracking dataset. It outperforms all previous state-of-the-art trackers on various large-scale multi-object tracking benchmarks, covering various kinds of sports scenarios.
## 1 Introduction
Multi-Object Tracking (MOT) is a fundamental computer vision task that aims to track multiple objects in a video and localize them in each frame. Most recent tracking algorithms [33, 1, 29, 4], which mainly focus on pedestrian or vehicle tracking, have achieved tremendous progress on public benchmarks [20, 8, 11]. However, these state-of-the-art algorithms fail to perform well on datasets with higher difficulties, especially those with sports scenarios [7, 6]. Given the growing demand for sports analytics in applications like automatic tactical analysis and athletes' movement statistics, including running distance and moving speed, the field of multi-object tracking for sports requires more attention.
Different from multi-object tracking for pedestrians or vehicles, MOT in sports scenarios poses higher difficulties for several reasons, including severe occlusion caused by the high intensity of sports scenes as illustrated in Figure 2, similar appearance between players on the same team due to identical jerseys as shown in Figure 3, and unpredictable motion due to sport-specific movements such as a crossover in basketball, a sliding tackle in football, or a spike in volleyball. Due to the above reasons, the previous trackers,
Figure 1: HOTA comparison of different trackers on the test sets of SoccerNet-Tracking and SportsMOT dataset. Deep-EIoU achieves 77.2% HOTA on the SportsMOT test set and 85.4% HOTA on the SoccerNet-Tracking test set. These results surpass the performance of all previous trackers on these large-scale multi-object tracking benchmarks. More comparisons between different trackers can be found in table 2 and table 3
which utilize appearance-motion fusion [34, 29] or simply motion-based [33, 5, 4] methods struggle to conduct robust tracking on several major MOT benchmarks in sports scenarios [6, 7].
To address these issues, in this paper we propose a novel and robust online multi-object tracking algorithm specifically designed for objects with irregular and unpredictable motion. Our experimental results demonstrate that our algorithm effectively handles the irregular and unpredictable motion of athletes during the tracking process. It outperforms all tracking algorithms on two large-scale public benchmarks [7] without introducing extra computational cost, while keeping the algorithm online. Therefore, in this paper, we make three main contributions:
* We present a novel association method to specifically address the challenges in sports tracking, named ExpansionIoU, which is a simple yet effective method for tracking objects with irregular movement and similar appearances.
* Our proposed iterative scale-up ExpansionIoU further leverages with deep features association for robust multi-object tracking for sports scenarios.
* The proposed method achieves **77.2** HOTA on the SportsMOT [7] dataset, and **85.4** HOTA on the SoccerNet-Tracking dataset [6], outperforming all the other previous tracking algorithms by a large margin.
## 2 Related Work
### Multi-Object Tracking using Kalman Filter
Most of the existing tracking algorithms [33, 4, 5, 29, 35, 30, 15, 12, 14, 13] incorporate the Kalman filter [16] as a method for object motion modeling. The Kalman filter formulates object motion as a linear dynamic system and can be used to predict an object's next-frame location according to its motion in previous frames. The Kalman filter has shown effectiveness in multi-object tracking across several public benchmarks [20, 8, 24]. However, due to the Kalman filter's linear motion and Gaussian noise assumptions, it might fail to track an object with non-linear motion. For this reason, OC-SORT [5] proposes several methods, including observation-centric re-update, to modify the Kalman filter's parameters during the tracking process and prevent error accumulation when an object is not tracked. The approach has shown effectiveness for tracking objects with irregular motion on several public datasets [24, 7].
### Location-based Multi-Object Tracking
Tracking can also be conducted based on position information. Given a high frame rate input video sequence, an object's position shift between frames is relatively small, making position information a reliable clue for association between frames. Several methods [23, 15] utilize the distance between bounding boxes as the association cost, while some recent work [31] utilizes different IoU calculation methods, including GIoU [21], DIoU [37], and BIoU [31], to conduct bounding box association between frames, which also demonstrates effectiveness in multi-object tracking.
### Appearance-based Multi-Object Tracking
With the recent development and improvement of object ReID models [38] and training tricks [18], many tracking algorithms incorporate ReID into the association process. Some methods use a joint detection and embedding architecture [35, 28] to produce detections and object embeddings at the same time to achieve real-time tracking, while other methods [29, 1] apply a stand-alone ReID model to extract detection embedding features for association. Appearance-based tracking methods improve tracking robustness with an extra appearance clue, although the appearance can be unreliable for several reasons, including occlusion, similar appearance among tracked objects, appearance variation caused by object rotation, or lighting conditions.
Figure 3: Example of similar appearances between the players from the SportsMOT dataset, which can cause confusion towards the tracker and decrease the tracking accuracy. Each column represents two different players with similar appearance.
Figure 2: An example of the occlusion problem encountered during multi-athlete tracking. Occlusion can significantly hinder detection and tracking performance, and the occlusion issue in athlete tracking is particularly severe when compared to pedestrian tracking due to the high intensity of sports characteristics.
### Multi-Object Tracking in Sports
Numerous studies have been conducted to monitor players' movements in team sports during games. This monitoring serves not only to automate the recording of game statistics but also enables sports analysts to obtain comprehensive information from a video scene understanding perspective. Different from MOT of pedestrians [20], MOT in sports scenarios is much more challenging for several reasons, including targets' faster and irregular motions, similar appearance among players on the same team, and a more severe occlusion problem due to the sport's intense characteristics. The majority of recent methods for MOT in sports utilize the tracking-by-detection paradigm and integrate a re-identification network to generate an embedding feature for association.
Vats et al. [26] combine team classification and player identification approaches to improve tracking performance in hockey. Similarly, Yang et al. [32] and Maglo et al. [19] demonstrate that by localizing the field and players, the tracking results in football can be made more accurate. Additionally, Sanguesa et al. [22] utilize human pose information and actions as embedding features to enhance basketball player tracking. Huang et al. [15] combine OC-SORT [5] and appearance-based post-processing to conduct tracking in multiple sports scenarios, including basketball, volleyball, and football [7].
## 3 Proposed Methods
Our proposed method follows the classic tracking-by-detection paradigm, which also enables online tracking without using future information. We first apply the YOLOX object detector on each input frame, and then conduct association based on several clues, including the similarity between extracted appearance features and the ExpansionIoU between tracklets and detections. After the association cost is obtained, the Hungarian algorithm is applied to obtain the best matching between tracklets and detections.
### Appearance-based Association
The appearance similarity is a strong clue for object association between frames. The similarity can be calculated as the cosine similarity between appearance features, and it can also be used to filter out some impossible associations. The cost for appearance association \(Cost_{A}\) can be directly obtained from the cosine similarity with the following formula:
\[Cost_{A}=1-\text{Cosine Similarity}=1-\frac{a\cdot b}{\|a\|\|b\|} \tag{1}\]
Here, \(a\) and \(b\) are the tracklet's appearance feature and the detection's appearance feature, respectively. A higher cosine similarity denotes a higher similarity in appearance, while a lower cosine similarity means the tracklet's appearance and the detection's appearance are different.
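A minimal sketch of this appearance cost is shown below, assuming the embedding vectors are stored as NumPy arrays; this is our own illustrative rendering rather than the released implementation.

```python
import numpy as np

def appearance_cost(track_feat, det_feat):
    """Cost_A = 1 - cosine similarity between two embedding vectors (Eq. 1)."""
    cos_sim = np.dot(track_feat, det_feat) / (
        np.linalg.norm(track_feat) * np.linalg.norm(det_feat) + 1e-12
    )
    return 1.0 - cos_sim
```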
### Association with ExpansionIoU
To deal with the fast and irregular movement of sports players, we propose ExpansionIoU (EIoU), a robust association method for tracking under large and nonlinear motion. Traditional IoU has been a cornerstone of location-based tracking methods, but it often lacks the flexibility to account for an object's large movement, when tracklet and detection bounding boxes share small or no IoU between adjacent frames. EIoU addresses this limitation by modifying the dimensions of bounding boxes, expanding their width and height to consider a wider range of object relationships, thus recovering the association for objects with large movement in sports scenarios. The expansion of the bounding box is controlled by the expansion scale \(E\); given an original bounding box with height \(h\) and width \(w\), we can calculate the expansion lengths \(h^{\star}\) and \(w^{\star}\) as follows:
\[h^{\star}=(2E+1)h \tag{2}\]
Figure 4: The proposed iterative scale-up ExpansionIoU tracking pipeline. The pseudo code of the proposed pipeline can be found in supplementary material.
\[w^{*}=(2E+1)w \tag{3}\]
The original bounding box is expanded based on the expansion lengths. Denoting the original bounding box's top-left and bottom-right coordinates as \((t,l)\) and \((b,r)\), we can derive the expanded bounding box's coordinates as \((t-\frac{h^{*}}{2},l-\frac{w^{*}}{2})\) and \((b+\frac{h^{*}}{2},r+\frac{w^{*}}{2})\).
The expanded bounding boxes are then used for IoU calculation between tracklet and detection pairs. Note that the expansion is applied both to the tracklets' last-frame detections and to the newly arrived detections from the detector, and the calculated EIoU is used for Hungarian association between adjacent frames. The operation of expanding the bounding box does not change important object properties such as the bounding box center, aspect ratio, or appearance features. By simply expanding the search space, we can associate those tracklets and detections with small or no IoU, which is a common situation when the target's movement is fast, especially in sports games.
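For concreteness, the sketch below shows one possible implementation of the ExpansionIoU computation for a pair of axis-aligned boxes given as (top, left, bottom, right) tuples; the function and variable names are our own and not taken from the released code.

```python
def expand_box(box, expansion_scale):
    """Expand a (top, left, bottom, right) box following Eqs. (2)-(3)."""
    t, l, b, r = box
    h, w = b - t, r - l
    h_star = (2 * expansion_scale + 1) * h  # expansion length along the height
    w_star = (2 * expansion_scale + 1) * w  # expansion length along the width
    return (t - h_star / 2, l - w_star / 2, b + h_star / 2, r + w_star / 2)

def expansion_iou(box_a, box_b, expansion_scale):
    """ExpansionIoU: plain IoU evaluated on the two expanded boxes."""
    a = expand_box(box_a, expansion_scale)
    b = expand_box(box_b, expansion_scale)
    it, il = max(a[0], b[0]), max(a[1], b[1])   # intersection rectangle
    ib, ir = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ib - it) * max(0.0, ir - il)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```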
### Confidence Score Aware Matching
Following ByteTrack [33], we give high confidence score detections higher priority during the matching process. High score detections usually imply less occlusion and hence a higher chance of preserving reliable appearance features. For this reason, the first-stage matching with high score detections is based on an association cost combining appearance and ExpansionIoU, denoted as \(C_{stage1}\). The first stage of matching is built upon several rounds of iterative association with a gradually scaled-up expansion scale, addressed in Section 3.4. In the second round of matching with low score detections, only ExpansionIoU is used; the cost is denoted as \(C_{stage2}\).
In our first matching stage, we abandon the IoU-ReID weighted cost method used in several previous works [34, 29], where the cost is a weighted sum of the appearance cost \(C_{A}\) and IoU cost \(C_{IoU}\):
\[C=\lambda C_{A}+(1-\lambda)C_{IoU} \tag{4}\]
Instead, we adopt a strategy similar to that of BoT-SORT [1] for appearance-based association. More specifically, we first filter out some impossible associations by setting cost thresholds for both appearance and ExpansionIoU (EIoU). The adjusted appearance cost \(C_{\hat{A}}\) is set to 1 if either cost is larger than its corresponding threshold; otherwise, \(C_{\hat{A}}\) is set to half of the appearance cost \(C_{A}\). Finally, the first stage's final association cost \(C_{stage1}\) is set as the minimum of the appearance cost \(C_{\hat{A}}\) and the EIoU cost \(C_{EIoU}\). With \(\tau_{A}\) and \(\tau_{EIoU}\) denoting the thresholds for the cost filter, we can write the appearance cost \(C_{\hat{A}}\) as:
\[C_{\hat{A}}=\begin{cases}1,&\text{if }C_{A}>\tau_{A}\text{ or }C_{EIoU}>\tau_{EIoU}\\ 0.5C_{A},&\text{otherwise}\end{cases} \tag{5}\]
The final cost in the first stage of matching \(C_{stage1}\) will be the minimum between adjusted appearance cost \(C_{\hat{A}}\) and EIoU cost \(C_{EIoU}\).
\[C_{stage1}=\min(C_{\hat{A}},C_{EIoU}) \tag{6}\]
The association cost in the second matching stage, \(C_{stage2}\), uses only the EIoU cost \(C_{EIoU}\).
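The following sketch illustrates the first-stage cost construction of Eqs. (5)-(6) for a single tracklet-detection pair; the threshold values are those listed in the tracking settings of Section 4, and the code is our own illustrative rendering rather than the released implementation.

```python
def first_stage_cost(cost_a, cost_eiou, tau_a=0.25, tau_eiou=0.5):
    """Fused cost for high score detections (Eqs. 5-6).

    cost_a    : appearance cost from Eq. (1)
    cost_eiou : EIoU-based cost (e.g., 1 - ExpansionIoU)
    """
    if cost_a > tau_a or cost_eiou > tau_eiou:
        adjusted_a = 1.0           # reject unreliable appearance matches
    else:
        adjusted_a = 0.5 * cost_a  # trust appearance and halve its cost
    return min(adjusted_a, cost_eiou)
```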
### Iterative Scale-Up ExpansionIoU
As illustrated by previous work using expanded bounding boxes for association [31], the amount of bounding box expansion is a crucial and sensitive hyperparameter in the tracking process, and the performance of the tracker can be largely affected by its choice. In real-world scenarios, several factors might prevent us from tuning the expansion scale to improve tracking performance, including: 1) the online tracking requirement. A common requirement for an athlete tracking system is that it operate in an online manner; tuning the expansion scale through repeated experiments is not possible in such cases. 2) No access to the testing data. For real-world scenarios, the testing data's ground truth is often not available, which makes finding the perfect expansion scale for association impossible. For these reasons, we propose a novel iterative scale-up ExpansionIoU association stage for robust tracking; the experimental results show that, without any parameter tuning, our algorithm maintains state-of-the-art performance on public benchmarks. Instead of doing hyperparameter tuning for the best expansion scale \(E\), we iteratively conduct EIoU association with a gradually increasing \(E_{t}\) during the tracking process. In each scale-up iteration, the expansion scale of the current iteration \(E_{t}\) is given by:
\[E_{t}=E_{initial}+\lambda t, \tag{7}\]
where \(E_{initial}\) is the initial expansion scale, \(\lambda\) denotes the step size for the iterative scale-up process, and \(t\) stands for the iteration count, which starts from 0. Using this approach, we first perform association for those tracklet and detection pairs with higher ExpansionIoU, and gradually search for pairs with lower overlapping area, which enhances the robustness of our association process. Note that the iterative scale-up process is only applied to high score detection association; once the iteration count reaches the total number of iterations \(t_{total}\), the association for high score detections stops and the tracker moves on to the low score detection association stage.
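A schematic outline of this iterative scale-up association is sketched below, assuming a helper `match_fn` that computes the fused cost of Eqs. (5)-(6) and runs the Hungarian algorithm (for example via scipy's linear_sum_assignment); this is our own illustrative outline rather than the released implementation.

```python
def iterative_scaleup_association(tracks, detections, match_fn,
                                  e_initial=0.7, step=0.1, total_iters=2):
    """Associate high score detections over gradually expanded boxes (Eq. 7).

    match_fn(tracks, detections, expansion_scale) is assumed to return
    (new_matches, remaining_tracks, remaining_detections).
    """
    matches = []
    for t in range(total_iters):           # iteration count t = 0, 1, ...
        expansion_scale = e_initial + step * t
        new_matches, tracks, detections = match_fn(tracks, detections,
                                                   expansion_scale)
        matches.extend(new_matches)
        if not tracks or not detections:    # nothing left to associate
            break
    return matches, tracks, detections
```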
## 4 Experiments and Results
### Dataset
We evaluate our tracking algorithm on two large-scale multi-sports player tracking datasets, i.e., SportsMOT [7] and SoccerNet-Tracking [6].
**SportsMOT** consists of 240 video sequences with over 150K frames and over 1.6M bounding boxes collected from 3 different sports, including basketball, football, and volleyball. Different from the MOT dataset [20, 8], SportsMOT possesses higher difficulties including: 1) targets' fast and irregular motions, 2) larger camera movements, and 3) similar appearance among players in the same team.
**SoccerNet-Tracking** is a large-scale dataset for multiple object tracking composed of 201 soccer game sequences, each 30 seconds long. The dataset consists of 225,375 frames, 3,645,661 annotated bounding boxes, and 5,009 trajectories. Unlike SportsMOT, which only focuses on the tracking of sports players on the court, the tracking targets of SoccerNet include multiple object classes: regular players, goalkeepers, referees, and the soccer ball.
### Detector
We choose YOLOX [10] as our object detector to achieve real-time and highly accurate detection. Several existing trackers [33, 5, 1, 31] also incorporate YOLOX as the detector, which leads to a fairer comparison between these trackers and ours. We use the COCO-pretrained YOLOX-X model provided by the official GitHub repository of YOLOX [10] and further fine-tune it on the SportsMOT training and validation sets for 80 epochs. The input image size is 1440 \(\times\) 800, with data augmentation including Mosaic and Mixup. We use the SGD optimizer with a weight decay of \(5\times 10^{-4}\) and momentum of 0.9. The initial learning rate is \(10^{-3}\) with a 1-epoch warmup and a cosine annealing schedule, following the same training procedure as ByteTrack [33]. As for the SoccerNet-Tracking dataset, since oracle detections are provided, to make a fair comparison and focus on tracking we directly use the oracle detections provided by the dataset for the evaluation of all trackers.
### ReID Model
For player re-identification (ReID), we use the omniscale feature learning proposed in OSNet [38]. The unified aggregation gate fuses the features from different scales and enhances the ability of human ReID.
**SportsMOT** The ReID training data for experiments on SportsMOT dataset is constructed based on the original SportsMOT dataset where we crop out each player according to its ground truth annotation of the bounding boxes. The sampled dataset includes 31,279 training images, 133 query images, and 1,025 gallery images.
**SoccerNet-Tracking** We sample the ReID training data from the SoccerNet-Tracking training set, we randomly select 100 ground truth bounding boxes for each player from randomly sampled videos, with 65 used as training images, 10 used as query images, and 25 used as gallery images. The sampled ReID data contains 7,085 training images, 1,090 query images, and 2,725 gallery images, with a total of 109 randomly selected identities.
**Training Details** We use the pre-trained model from the Market-1501 dataset [36] and further fine-tune it on each of the above-mentioned sampled sports ReID datasets, resulting in two ReID models, one for each dataset. Each model is trained for 60 epochs using the Adam optimizer with cross-entropy loss and an initial learning rate of \(3\times 10^{-4}\). All experiments are conducted on a single Nvidia RTX 4080 GPU.
### Tracking Settings
The confidence threshold for a detection to be treated as a high score detection is 0.6; detections with confidence scores between 0.1 and 0.6 are treated as low score detections, and the remaining detections with confidence scores lower than 0.1 are filtered out. The cost filter thresholds \(\tau_{A}\) and \(\tau_{EIoU}\) are set to 0.25 and 0.5, respectively. We also remove the constraint on the aspect ratio of the detection bounding box, since sports scenarios can include situations where a player is lying on the ground, unlike the MOT datasets where most pedestrians are standing or walking. For high score detection association, the initial value of the expansion scale \(E_{initial}\) is set to 0.7 with a step size \(\lambda\) of 0.1, and the total number of iterations \(t_{total}\) is 2. The expansion scale \(E\) for low score detection association is 0.7, and for unmatched detections it is 0.5. The maximum number of frames for keeping lost tracks is 60. After tracking is finished, linear interpolation is applied to boost the final tracking performance.
### Evaluation Metrics
MOTA [2] is often used as an evaluation metric for the multi-object tracking task; however, MOTA mainly reflects detection performance rather than association accuracy. Recently, in order to balance detection and
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Sport Type & \# of tracks & \# of frames & Track Len & Density \\ \hline Basketball & 10 & 845.4 & 767.9 & 9.1 \\ Football & 22 & 673.9 & 422.1 & 12.8 \\ Volleyball & 12 & 360.4 & 335.9 & 11.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the SportsMOT dataset split by the type of sport. The number of tracks, number of frames, track length, and track density are average numbers across all videos of the sports.
association performance, more and more public benchmarks have started to use HOTA [17] as the main evaluation metric. For evaluation on the SportsMOT dataset, we adopt HOTA, MOTA, IDF1, and other associated metrics [3] for comparison. For SoccerNet, we adopt the HOTA metric, along with the associated DetA and AssA metrics, since only these metrics are provided by the evaluation server.
### Performance
We compare our tracking algorithm with previous existing trackers on two large-scale multi-object tracking datasets in sports scenarios, the SportsMOT and SoccerNet-Tracking datasets. All the experiments are run on one Nvidia RTX 4080 GPU, and the tracking results are evaluated on the datasets' official evaluation server.
**SportsMOT** As shown in table 2, our proposed Deep-EIoU achieves **77.2** HOTA, **79.8** IDF1, and **67.7** AssA. Our method achieves state-of-the-art results and outperforms all other previous trackers while keeping the tracking process in an online fashion, showing the effectiveness of our algorithm for multi-object tracking in sports scenarios.
**SoccerNet** To focus on the tracking performance and make a fair comparison, all evaluated methods use the oracle detections provided by the SoccerNet-Tracking dataset [6]. The performance of our proposed method is reported in table 3. Our method achieves **85.443** in HOTA, **73.567** in AssA, and **99.236** in DetA, outperforming several state-of-the-art online tracking algorithms by a large margin. The results of DeepSORT and ByteTrack are taken from the original SoccerNet-Tracking paper [6]. The competitive performance of Deep-EIoU on multiple large-scale sports player tracking datasets demonstrates the effectiveness of our algorithm for multi-object tracking in sports.
After incorporating the ReID model for appearance-based association, the HOTA of Deep-EIoU is boosted by 3.8, showing that although athletes often share similar appearances, it is still important to use appearance as a clue for tracking in sports scenarios. With the iterative scale-up process (ISU), the gradually scaled-up bounding boxes first establish associations between tracklets and detections with higher EIoU, which further increases the tracking performance. Finally, following most online tracking algorithms [33, 5], we also include linear interpolation (LI) as a strategy to boost the final tracking performance.
### Robustness to initial expansion scale
To demonstrate the effectiveness and robustness of our approach, we conduct experiments with different initial expansion scales in the iterative scale-up process, varying the initial expansion scale from 0.2 to 0.8. The experimental results in Figure 5 show that we still achieve state-of-the-art performance across different initial expansion scales, because the iterative scale-up process enhances robustness and does not require any parameter tuning. This supports our method's applicability in real-world scenarios, where ground truth is often not available and the tracking parameters cannot be tuned.
### ExpansionIoU on Kalman filter-based tracker
To test the effect of ExpansionIoU on Kalman filter-based trackers, we also implement several versions of our method that directly combine the Kalman filter with ExpansionIoU. In our implementation, the Kalman filter's predictions and the detections are expanded during the tracking process following ExpansionIoU. The experimental results in Table 5 demonstrate that after directly replacing IoU with EIoU, these two classic Kalman filter-based trackers improve their performance by a large margin in HOTA, AssA, and DetA. This demonstrates that ExpansionIoU can also be applied as a plug-and-play trick to boost the performance of Kalman filter-based trackers.
### Limitations
While our algorithm provides a robust and practical solution for online multi-object tracking in sports scenarios, it does have its limitations, including the absence of an offline post-processing trajectory refinement method. Such methods could involve a post-processing approach [13] or a strong memory buffer [27], which would be valuable in handling edge cases where sports players temporarily exit and re-enter the camera's field of view. However, our method still outperforms all previous online tracking algorithms on two major large-scale sports player tracking benchmarks, and it remains a potent choice for robust short-term tracking during continuous play. It is worth noting that exploring and integrating offline refinement techniques in the future could potentially enhance the overall performance and extend the applicability of our approach beyond short-term tracking scenarios.
Another concern with Deep-EIoU is its relatively slower running speed compared with motion-based trackers. Despite delivering significantly better performance, the appearance-based tracking-by-detection framework, which involves a detector and a ReID model, introduces additional computational cost. The current Deep-EIoU pipeline achieves around 14.6 FPS on a single Nvidia RTX 4080 GPU, which is slower than motion-based methods. It is worth noting that transitioning to a more lightweight detector and ReID model has the potential to significantly boost operational speed, as the YOLOX detector and the OSNet ReID model account for the majority of the processing time (75.6% and 23.3%, respectively), while the tracking stage accounts for only around 1.1% of our pipeline.
## 5 Conclusions
In this paper, we proposed Deep-EIoU, an iterative scale-up ExpansionIoU and deep features association method for multi-object tracking in sports scenarios, which achieves competitive performance on two large-scale
Figure 5: Performance comparison of Deep-EIoU under different initial expansion scales on the SportsMOT test set.
\begin{table}
\begin{tabular}{l c c c c} \hline Tracker & w/ EIoU & HOTA & AssA & DetA \\ \hline ByteTrack & & 62.8 & 51.2 & 77.1 \\ ByteTrack & ✓ & 67.5 & 54.4 & 83.9 \\ BoT-SORT & & 68.7 & 55.9 & 84.4 \\ BoT-SORT & ✓ & 71.3 & 60.2 & 84.5 \\ \hline \end{tabular}
\end{table}
Table 5: We evaluate two classic Kalman filter-based tracking algorithms including ByteTrack [33] and BoTSORT [1] on the SportsMOT test set. Experiment results show that the Kalman filter-based tracker can also be benefited from incorporating ExpansionIoU during the tracking process.
multi-object sports player tracking datasets including SportsMOT and SoccerNet-Tracking. Our method successfully tackles the challenges of irregular movement during multi-object tracking in sports scenarios and outperforms the previous tracking algorithms by a large margin.
|
2307.04621 | Recipes for Jet Feedback and Spin Evolution of Black Holes with
Strongly-Magnetized Super-Eddington Accretion Disks | A spinning black hole accreting from a disk of strongly magnetized plasma via
a magnetically arrested disk is known to produce an efficient electromagnetic
jet powered by the black hole's spin energy. We present general relativistic
radiative magnetohydrodynamic simulations of magnetically arrested systems
covering a range of sub- to super-Eddington accretion rates. Using the
numerical results from these simulations, we develop formulae to describe the
magnetization, jet efficiency, and spin evolution of an accreting black hole as
a function of its spin and accretion rate. A black hole with near-Eddington
accretion experiences a mild degree of spin-down because of angular momentum
loss through the jet, leading to an equilibrium spin of 0.8 rather than 1.0 at
the Eddington limit. As the accretion rate increases above Eddington, the
spin-down effect becomes progressively stronger, ultimately converging on
previous predictions based on non-radiative simulations. In particular, spin
evolution drives highly super-Eddington systems toward a black hole spin near
zero. The formulae developed in this letter may be applied to galaxy and
cosmological scale simulations that include black holes. If magnetically
arrested disk accretion is common among supermassive black holes, the present
results have broad implications for active galactic nucleus feedback and
cosmological spin evolution. | Angelo Ricarte, Ramesh Narayan, Brandon Curd | 2023-07-10T15:08:16Z | http://arxiv.org/abs/2307.04621v2 | Recipes for Jet Feedback and Spin Evolution of Black Holes with Strongly-Magnetized Super-Eddington Accretion Disks
###### Abstract
A spinning black hole accreting from a disk of strongly magnetized plasma via a magnetically arrested disk is known to produce an efficient electromagnetic jet powered by the black hole's spin energy. We present general relativistic radiative magnetohydrodynamic simulations of magnetically arrested systems covering a range of sub- to super-Eddington accretion rates. Using the numerical results from these simulations, we develop formulae to describe the magnetization, jet efficiency, and spin evolution of an accreting black hole as a function of its spin and accretion rate. A black hole with near-Eddington accretion experiences a mild degree of spin-down because of angular momentum loss through the jet, leading to an equilibrium spin of 0.8 rather than 1.0 at the Eddington limit. As the accretion rate increases above Eddington, the spin-down effect becomes progressively stronger, ultimately converging on previous predictions based on non-radiative simulations. In particular, spin evolution drives highly super-Eddington systems toward a black hole spin near zero. The formulae developed in this letter may be applied to galaxy and cosmological scale simulations that include black holes. If magnetically arrested disk accretion is common among supermassive black holes, the present results have broad implications for active galactic nucleus feedback and cosmological spin evolution.
accretion -- active galactic nuclei -- black hole physics -- magnetohydrodynamics (MHD) -- relativistic disks -- relativistic jets +
Footnote †: journal: ApJ
## 1 Introduction
Astrophysical black holes (BHs) accreting from disks of plasma are known to launch relativistic jets and outflows (Fabian, 2012; Heckman & Best, 2014). Such energy injection from supermassive BHs (SMBHs) at the centers of galaxies, a process referred to as active galactic nucleus (AGN) feedback, is believed to be essential for stopping runaway gas cooling and star formation in massive galaxies and dark matter halos (Di Matteo et al., 2005; Springel et al., 2005; Croton et al., 2006; Sijacki et al., 2007; Kormendy & Ho, 2013; Harrison, 2017). In this paradigm, accretion and feedback processes are critical for a complete picture of SMBH growth and galaxy co-evolution. However, the details remain poorly understood.
For magnetized accretion disks, an electromagnetic analogue of the Penrose (1969) process known as the Blandford & Znajek (1977) (BZ) mechanism provides the most widely accepted model for jet launching. The power of a jet launched by the BZ mechanism scales approximately in proportion to both the square of the BH spin and the square of the magnetic flux threading the horizon. In systems with high enough spin and with maximal magnetic field strength, corresponding to a so-called magnetically arrested disk (MAD) (Bisnovatyi-Kogan & Ruzmaikin, 1974; Igumenshchev et al., 2003; Narayan et al., 2003), more jet power can be launched than the entire rest-mass energy of the material flowing into the BH (Tchekhovskoy et al., 2011). The extra energy is supplied by the spin kinetic energy of the BH, which thereby may cause the BH to spin down with
time. In this way, jets that travel through dark matter halos for hundreds of kiloparsecs are ultimately linked to the evolution of BH spin and the transport of magnetic fields on event horizon scales.
Since the BZ mechanism powers a jet by extracting BH spin energy, if the process continues long enough a BH could continuously spin down and equilibrate near a spin value \(a_{*}\approx 0\). This has been explicitly demonstrated via general relativistic magnetohydrodynamic (GRMHD) simulations of radiatively inefficient, geometrically thick, MAD models (McKinney et al., 2012; Tchekhovskoy et al., 2012; Narayan et al., 2022; Lowell et al., 2023). Several recent publications have begun to study the implications of this spin-down effect for BH populations over cosmic time. The systems simulated so far largely belong to the regime of advection-dominated accretion (Narayan and Yi, 1994, 1995), or hot accretion (Yuan and Narayan, 2014), which corresponds to highly sub-Eddington accretion. Spin-down is relatively slow for such low Eddington-ratio systems simply because the mass accretion rate is very small; nevertheless, continuous jet feedback from such BHs is implicated in maintaining low star formation for Gyrs in some galaxies (e.g., Hlavacek-Larrondo et al., 2015), which can lead to cosmologically significant BH spin evolution (Narayan et al., 2022).
Super-Eddington accretion disks are geometrically thick and advection-dominated, just like low-Eddington ratio hot accretion flows, and can also reach the MAD state (McKinney et al., 2015; Narayan et al., 2017; Curd and Narayan, 2019). Such systems can produce extremely powerful jets (e.g., Curd and Narayan, 2019), and because of the very large accretion rate their BHs could spin-down very rapidly. Lowell et al. (2023) developed a physical semi-analytic model for this spin-down phenomenon. Using this model, Jacquemin-Ide et al. (2023) predicted rapidly decreasing collapsar BH spins to \(a_{*}\lesssim 0.2\) near birth.
Self-consistent BH spin evolution is now being implemented in some galaxy and cosmological-scale simulations, which may then be used to model radiative efficiency and jet power (Dubois et al., 2014; Fiacconi et al., 2018; Bustamante and Springel, 2019; Beckmann et al., 2019; Dubois et al., 2021; Talbot et al., 2021; Massonneau et al., 2023; Dong-Paez et al., 2023). Although galaxy-scale simulations cannot possibly resolve accretion disk scales, such an approach still represents a substantial improvement over most contemporary work, by linking SMBH spin evolution to the angular momentum of the resolved gas on scales of parsecs. Dubois et al. (2021) and Massonneau et al. (2023) implement spin-down during periods of thick disk accretion, employing fitting functions for the magnetic flux as a function of spin from GRMHD simulations. Again assuming that the same results demonstrated for very low Eddington ratio disks also hold for super-Eddington disks, Massonneau et al. (2023) consider super-Eddington growth in high-redshift galaxies. While spin-down is noticeable in this simulation, it is counteracted by periods of thin disk accretion.
All such calculations require some a priori knowledge or assumptions about the magnetic field strength. For magnetized geometrically thick disks in the low-Eddington rate limit, the MAD model offers one well-studied solution. In contrast to the weak-field "Standard and Normal Evolution" (SANE) model (Narayan et al., 2012; Sadowski et al., 2013), a MAD system is characterized by such strong magnetic fields that magnetic pressure and tension are comparable to the gas pressure near the horizon (Bisnovatyi-Kogan and Ruzmaikin, 1974; Igumenshchev et al., 2003; Narayan et al., 2003). MAD models are characterized by a dimensionless magnetic flux parameter \(\phi\) (defined in Equation 2) saturating at a spin-dependent maximum value (Tchekhovskoy et al., 2012; Narayan et al., 2022), as well as "flux eruption events" that occur when the BH expels magnetic flux (e.g., Tchekhovskoy et al., 2011; Dexter et al., 2020; Ripperda et al., 2022; Chatterjee and Narayan, 2022). The saturated fields that characterize the MAD state lead to highly efficient jets powered by the BZ mechanism.
Spatially resolved and polarimetric observations of the nearby low-luminosity AGN, M87* and Sgr A*, currently favor MAD models over their SANE counterparts (Event Horizon Telescope Collaboration et al., 2021, 2022; Wielgus et al., 2022), suggesting that the saturated values of \(\phi\) characteristic of MAD models are easily achieved in low Eddington-ratio geometrically thick hot accretion disks. However, it remains to be confirmed that the same saturation values found for hot accretion flows at low Eddington ratios also hold for super-Eddington accretion flows where radiation plays an important role. It is also unknown whether the BZ mechanism operates efficiently in such systems and how efficiently BH spin-down proceeds. We explore these questions here.
In this letter, we introduce and analyze a suite of super-Eddington general relativistic radiative magnetohydrodynamic (GRRMHD) simulations in the MAD regime to explicitly calculate the magnetization \(\phi\), jet power \(P_{\rm jet}\), and spinup parameter \(s\) (defined in Equation 7), as a function of the dimensionless BH spin parameter \(a_{*}\) and the Eddington ratio \(f_{\rm Edd}\) (defined in Equation 1) of the accretion flow. As we shall show, highly super-Eddington accretion disks (\(f_{\rm Edd}\gg 1\)) behave similarly to their very low Eddington-ratio (\(f_{\rm Edd}\ll 1\)) counterparts. However, we find reduced magnetization and spin-down for Eddington ratios \(f_{\rm Edd}\lesssim 10\). Based on this behavior, we devise fitting functions for jet power and spin evolution that can be adapted into cosmological and galaxy-scale simulations.
## 2 GRRMHD Simulations
Radiation plays a critical role in the dynamics of BH accretion disks for Eddington ratios \(f_{\rm Edd}\gtrsim 0.01\). In these systems, radiative cooling acts to thin the disk at lower Eddington ratios, while radiative pressure puffs up the disk vertically as the mass accretion rate approaches or exceeds Eddington (Abramowicz et al., 1988). In super-Eddington systems, winds and jets driven purely by radiation can also occur (Sadowski and Narayan, 2015; Coughlin and Begelman, 2020).
The numerical treatment of radiation in BH accretion problems is quite difficult as the algorithm must treat both optically thin and thick regions in a curved spacetime. Ohsuga et al. (2005); Ohsuga and Mineshige (2011) pioneered global, non-relativistic, radiation hydrodynamics (RHD) simulations of super-Eddington accretion disks using flux-limited diffusion. Following this work, radiation was first included in the fully general relativistic radiation magnetohydrodynamics (GRRMHD) code, koral, by Sadowski et al. (2013, 2014) using the M1 closure scheme and a semi-implicit method to handle the radiation terms. Since then, the M1 closure scheme has been applied in other GRRMHD codes (McKinney et al., 2014; Takahashi et al., 2016; Asahina and Ohsuga, 2022; Utsumi et al., 2022) as well as a GPU accelerated GRRMHD code (Liska et al., 2023). Alternative methods of treating radiation in GRRMHD include directly solving the radiative transfer equations to obtain the Eddington tensor (Asahina and Ohsuga, 2022), Monte Carlo methods (Ryan et al., 2015), or using a discretized radiation tensor (White et al., 2023). The M1 closure scheme allows limited treatment of anisotropic radiation fields. It is superior to the Eddington approximation in optically thin regions, and is well suited for global GRRMHD simulations of super-Eddington disks. However, for complicated radiation fields, it cannot match methods based on the full Eddington tensor.
Utsumi et al. (2022) explored the role of BH spin in super-Eddington accretion by running a suite of 2D GRRMHD simulations for different spin values. They considered the SANE regime of accretion, for which 2D simulations are sufficient. The MAD accretion regime, however, requires 3D simulations, and this is the focus of our work. We present a suite of 38 3D numerical simulations of near-Eddington to super-Eddington MAD systems carried out with the GRRMHD code, koral (Sadowski et al., 2013, 2014, 2015; Sadowski and Narayan, 2015). We include 2 BH masses, \(M=10,10^{4}M_{\odot}\), 6 BH spin values, \(a_{*}\) = -0.9, -0.68, 0, 0.68, 0.9, and 0.97 (where a minus sign denotes retrograde accretion), and a range of Eddington ratios, \(0.4\lesssim f_{\rm Edd}\lesssim 40\). Since prolonged super-Eddington accretion is often invoked for the growth of BH seeds in the early universe, as we will later explore in section 4, these two masses are loosely motivated by exploring both "light" and "heavy" seeding scenarios (see e.g., Natarajan, 2014, for a review). We define \(f_{\rm Edd}\) as follows,
\[f_{\rm Edd}=\dot{M}/\dot{M}_{\rm Edd}, \tag{1}\]
where \(\dot{M}\) is the mass accretion rate through the BH horizon (Equation B17) and \(\dot{M}_{\rm Edd}\) is the Eddington mass accretion rate corresponding to the radiative efficiency of a thin disk (see Equation B32 and Equation B33). Thin disks below and near the Eddington limit are notoriously difficult to simulate, due to difficulties resolving the disk scale height. However, the additional magnetic pressure of the MAD state helps to inflate even moderately sub-Eddington disks (see Appendix C), making this problem computationally tractable (see e.g., Sadowski, 2016).
Using a mesh-based, finite-difference method in a stationary Kerr space-time, koral solves the conservation equations of GRMHD, with the addition of radiative heating, cooling, and plasma coupling. Modeled radiative processes include synchrotron radiation, opacities from electron scattering, free-free and bound-free emission/absorption from the Sutherland and Dopita (1993) model, and Compton scattering. While ideal GRMHD simulations without radiation are rescalable to different masses and accretion rates, the inclusion of radiative processes sets absolute physical scales and necessitates individual simulations for each combination of \(M\), \(a_{*}\), and \(f_{\rm Edd}\).
Each simulation is initialized as a torus of gas in hydrostatic equilibrium threaded by a large-scale poloidal magnetic field, either perfectly aligned or anti-aligned with the BH spin axis. To limit computational expense, but still allow non-axisymmetric structures that commonly arise in MAD disks, we simulate a periodic \(\pi/2\) wedge in azimuth. From the torus initial conditions, the magnetorotational instability naturally develops to allow the plasma to lose angular momentum and accrete onto the BH, advecting along with it magnetic field which saturates at the MAD state. One example is shown in Figure 1, where in the upper panels we visualize the density and magnetic field lines of the \(M=10^{4}~{}M_{\odot}\), \(a_{*}=0.9\), \(f_{\rm Edd}=9.3\) model in the disk midplane and in a perpendicular slice, respectively. The BH has accumulated a significant poloidal magnetic field, and
turbulent eddies are evident in the disk. A flux eruption event characteristic of the MAD state, the low-density bubble near the horizon, is visible during this snapshot.
Throughout this work, we use gravitational units to describe physical parameters. For distance we use the gravitational radius \(r_{g}\equiv GM/c^{2}\) and for time we use the gravitational time \(t_{g}\equiv GM/c^{3}\). We set \(G=c=1\), so the above relations would be equivalent to \(r_{g}=t_{g}=M\). We restore \(G\) and \(c\) in cases where it helps to keep track of units. Each of the 38 models was run for a total time of \(30000\,t_{g}\). Summary statistics are given in Table 1 and correspond to averages over the final \(5000\,t_{g}\) of the run when we expect each simulation to be most nearly in steady state.
## 3 Results
### Magnetization
The dimensionless magnetization parameter \(\phi(t)\) at time \(t\) is defined by (Tchekhovskoy et al., 2011),
\[\phi(t)=\frac{\sqrt{4\pi}}{2\sqrt{\dot{M}(t)}}\int_{\vartheta}\int_{\varphi} \left|B^{r}\right|_{r=r_{\rm H}}\,\sqrt{-g}\;\mathrm{d}\vartheta\;\mathrm{d}\varphi, \tag{2}\]
where \(B^{r}\) is the radial component of the magnetic field, \(g\) is the metric determinant, \(\dot{M}(t)\) is the BH accretion rate, and the integral is evaluated at the BH horizon. MAD systems are characterized by a value of \(\phi\) that has saturated at a spin-dependent value of \(\sim 30-50\)(Tchekhovskoy et al., 2011, 2012; Narayan et al., 2022), as is the case for the example plotted in Figure 1. The value of \(\phi\) tends to decrease during a flux eruption event; note that our example snapshot visualized in Figure 1 coincides with a local minimum in \(\phi\). Although both \(\dot{M}\) and \(\phi\) are time variable, we assign a single value to each simulation by averaging each quantity over the time period \(t=25000t_{g}-30000t_{g}\). These are the values listed in Table 1.
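For readers who wish to reproduce this diagnostic from simulation output, the short sketch below evaluates Equation 2 by direct quadrature on a uniform \((\vartheta,\varphi)\) grid at the horizon. The array names, the uniform-grid assumption, and the use of NumPy are our illustrative choices and do not reflect the actual koral data format.

```python
import numpy as np

def magnetic_flux_parameter(abs_Br_horizon, sqrt_neg_g, dtheta, dphi, mdot):
    """Dimensionless magnetization phi of Equation 2.

    abs_Br_horizon : 2D array of |B^r| on the (theta, phi) grid at r = r_H
    sqrt_neg_g     : 2D array of sqrt(-g) on the same grid
    dtheta, dphi   : grid spacings (assumed uniform here)
    mdot           : horizon mass accretion rate at the same instant
    """
    flux = np.sum(abs_Br_horizon * sqrt_neg_g) * dtheta * dphi
    return np.sqrt(4.0 * np.pi) * flux / (2.0 * np.sqrt(mdot))
```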
In the left panel of Figure 2, we show the values of \(\phi\) obtained from our 38 simulations, both as a function of the Eddington ratio \(f_{\rm Edd}\) and the BH spin \(a_{*}\). Different spins are encoded in different colors, and different masses are encoded by symbol size. At large Eddington ratios, the simulations approach spin-dependent values similar to those found in pure GRMHD simulations of MADs (Tchekhovskoy et al., 2012; Narayan et al., 2022). However, \(\phi\) decreases as \(f_{\rm Edd}\) decreases. Interestingly, simulations with \(f_{\rm Edd}=1\) remain substantially magnetized, with \(\phi\) values typically about a third of the limiting value for \(f_{\rm Edd}\gg 1\). As we explore in Appendix C, this trend can be explained by increased pressure scale height as Eddington ratio increases, allowing the disk to confine stronger magnetic fields.
We model the behavior shown in the simulation data by fitting the following function:
\[\phi(a_{*},f_{\rm Edd})=\phi_{\rm MAD}(a_{*})\frac{(f_{\rm Edd}/f_{c})^{ \alpha}}{1+(f_{\rm Edd}/f_{c})^{\alpha}}, \tag{3}\]
where \(f_{c}\) is a critical Eddington ratio determining the mid-point of the transition, and \(\alpha\) is a free parameter determining the rapidity of the evolution around \(f_{c}\). The function \(\phi_{\rm MAD}(a_{*})\) is the saturated value of \(\phi\) found in non-radiative MAD simulations. We use the approximation given in Narayan et al. (2022),
\[\phi_{\rm MAD}(a_{*})=52.6+34a_{*}-14.9a_{*}^{2}-20.2a_{*}^{3}. \tag{4}\]
By construction, in Equation 3, \(\phi\to 0\) as \(f_{\rm Edd}\to 0\) and \(\phi\to\phi_{\rm MAD}(a_{*})\) as \(f_{\rm Edd}\to\infty\). Via least-squares fitting, we arrive at \(\alpha=1.29\) and \(f_{c}=1.88\). The spin-dependent \(\phi(a_{*},f_{\rm Edd})\) curves are plotted in the background of Figure 2, and describe the main trends fairly well. We intentionally transition \(\phi\to 0\) as \(f_{\rm Edd}\to 0\) to connect to the thin disk solution, but we caution that the shape and rapidity of this transition may be sensitive to our poor sampling of the \(f_{\rm Edd}\lesssim 1\) regime. We note that the GRRMHD simulations of both Liska et al. (2022) and Curd & Narayan (2023) produced \(\phi\sim 30\) for \(f_{\rm Edd}\sim 0.3\), which our fitting function would underestimate.
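A minimal implementation of the fitted magnetization, Equations 3 and 4, is given below as a sketch; the function and variable names are ours.

```python
def phi_mad(a_star):
    """Saturated MAD magnetic flux from non-radiative simulations (Equation 4)."""
    return 52.6 + 34.0 * a_star - 14.9 * a_star**2 - 20.2 * a_star**3

def phi_fit(a_star, f_edd, alpha=1.29, f_c=1.88):
    """Magnetization as a function of spin and Eddington ratio (Equation 3)."""
    x = (f_edd / f_c) ** alpha
    return phi_mad(a_star) * x / (1.0 + x)
```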
### Jet Efficiency
The electromagnetic jet efficiency \(\eta_{\rm EM}=P_{\rm jet}/\dot{M}c^{2}\) can be calculated analytically given \(a_{*}\) and \(\phi\). For small to moderate values of spin, \(\eta_{\rm EM}\propto a_{*}^{2}\phi^{2}\)(Blandford & Znajek, 1977), but for spin values up to and including \(a_{*}=1\), the following expression including higher order correction factors is more accurate (Tchekhovskoy et al., 2010; Pan & Yu, 2015):
\[\eta_{\rm EM}=\frac{\kappa}{4\pi}\phi^{2}\Omega_{\rm H}^{2}\left[1+1.38\Omega_ {\rm H}^{2}-9.2\Omega_{\rm H}^{4}\right], \tag{5}\]
where
\[\Omega_{\rm H}\equiv\frac{|a_{*}|}{2r_{\rm H}}=\frac{|a_{*}|}{2(1+\sqrt{1-a_{*} ^{2}})} \tag{6}\]
is the angular velocity of the horizon and \(\kappa\) is a constant dependent on the initial field geometry, for which we adopt \(\kappa=0.05\).
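In code, Equations 5 and 6 reduce to the short functions below (again a sketch with our naming; \(\kappa=0.05\) is the value adopted in the text). Combined with `phi_fit` above, `eta_em(a_star, phi_fit(a_star, f_edd))` gives the predicted jet efficiency at any spin and Eddington ratio.

```python
import numpy as np

def omega_h(a_star):
    """Angular velocity of the horizon (Equation 6)."""
    return abs(a_star) / (2.0 * (1.0 + np.sqrt(1.0 - a_star**2)))

def eta_em(a_star, phi, kappa=0.05):
    """Electromagnetic (Blandford-Znajek) jet efficiency (Equation 5)."""
    oh = omega_h(a_star)
    return kappa / (4.0 * np.pi) * phi**2 * oh**2 * (1.0 + 1.38 * oh**2 - 9.2 * oh**4)
```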
In the right panel of Figure 2, we plot the MHD energy outflow efficiency \(\eta_{\rm MHD}\) as a function of magnetization, with spin once again encoded in color and mass encoded in symbol size. Note that unlike \(\eta_{\rm EM}\) predicted by Equation 5 this quantity also includes the hydrodynamic energy flux. The colored curves correspond to the fitting
function Equation 5 for each spin sampled by our simulation suite. The data points are from the simulations, where we have computed the mass and energy fluxes at a radius of \(5~{}r_{g}\) since numerical floors cause inaccuracies closer to the horizon (consistent with previous work Lowell et al., 2023). Radiative flux is neglected (which is again affected by floors, particularly in the jet region), but this introduces only a small error since the radiation contribution near the BH tends to be small.
Despite the wide range of mass, spin and accretion rate considered in the right panel of Figure 2, we find that the fitting function Equation 5 performs remarkably well, implying that the BZ mechanism dominates the jet physics in MAD super-Eddington accretion flows. Note that at \(a_{*}=0\), the BZ prediction is identically 0 because the BH has no spin energy. However, the simulations still give \(\eta_{\rm MHD}>0\). In these models, the outflowing energy is from the accretion disk, presumably in a hydrodynamic wind. As a point of reference, we plot the radiative efficiencies of thin disks with \(a_{*}\in\{0,0.68,0.9,0.97\}\) as colored horizontal lines. The MHD outflow from the \(a_{*}=0\) simulation is similar in energetic output to an equivalent thin disk's radiative output. Meanwhile, the radiative efficiency of a thin disk around a maximally spinning black hole can easily be exceeded with enough spin and magnetic flux.
### Spin Evolution
Since the BZ mechanism extracts spin energy from the BH, this can result in astrophysically significant spin evolution of an accreting BH, which we study here. We describe the evolution in terms of a dimensionless spin
Figure 1: Here we visualize the disk structure and time evolution of the \(M=10^{4}\,M_{\odot}\), \(a_{*}=0.9\), \(f_{\rm Edd}=9.3\) model. In the upper two panels, we plot the gas density (color) and magnetic field (black streamlines) within and perpendicular to the disk midplane respectively. On the right, we plot with a magenta curve the \(\sigma\equiv B^{2}/4\pi\rho=1\) contour, a common definition of the jet boundary. This snapshot, which corresponds to time \(t=28,500\,t_{g}\), features a flux eruption event, a transient low-density bubble near the horizon. In the lower panels, we plot the Eddington ratio \(f_{\rm Edd}=\dot{M}/\dot{M}_{\rm Edd}\) and the magnetic flux parameter \(\phi\) as a function of time for this model, demonstrating stability for our period of interest, demarcated by the red horizontal lines. The time corresponding to the snapshot in the upper panels is marked with a blue circle.
up parameter (Gammie et al., 2004; Shapiro, 2005),
\[s=\frac{da_{*}}{dt}\frac{M}{\dot{M}}=l-2a_{*}e, \tag{7}\]
where \(l\) is the inward specific angular momentum flux and \(e\) is the inward specific energy flux, each of which we measure at a radius of \(5~{}r_{g}\). Spinup as a function of \(a_{*}\) computed from our GRRMHD simulations is shown in the upper panel of Figure 3, where the color encodes different Eddington ratios and the symbol size encodes different masses. The thin disk solution, which always pushes the BH towards maximal prograde spin (\(a_{*}\to 1\)), is shown as a dotted line (Novikov and Thorne, 1973; Moderski and Sikora, 1996). A fitting function which we presented in previous work for MAD GRMHD (\(f_{\rm Edd}\ll 1\)) models (Narayan et al., 2022) is shown as a dashed line and is given by
\[\begin{split} s_{\rm MAD}(a_{*})=& 0.45-12.53a_{*}-7.80a_{*}^{2}+9.44a_{*}^{3}\\ &+5.71a_{*}^{4}-4.03a_{*}^{5}.\end{split} \tag{8}\]
The simulated GRRMHD models generally transition from the thin disk solution to the MAD GRMHD solution as the Eddington ratio increases (blue to red colors in Figure 3). This is not unexpected, since highly super-Eddington disks are geometrically very thick and are highly advection-dominated (Abramowicz et al., 1988) and therefore closely resemble the low-\(f_{\rm Edd}\) hot accretion flows studied in Narayan et al. (2022). Retrograde models do not follow this trend, however, in fact spinning up more rapidly than the thin disk solution. These models overshoot the thin disk curve because both the BZ mechanism and accretion of oppositely rotating material torque the BH towards \(a_{*}=0\).1
Footnote 1: As Eddington ratio increases, the disk dynamics evolve from the thin disk solution and the hydrodynamic torques become weaker (see Appendix B). At the same time, the magnetization increases, so the electromagnetic torque becomes _stronger_. Whether or not a retrograde disk spins up faster or slower than a thin disk depends on the balance between these effects.
Lowell et al. (2023) built a semi-analytic model to understand spin evolution in non-radiative MAD systems based on the spin evolution equations appropriate for a disk-plus-jet system introduced in Moderski and Sikora (1996). In this model, the spinup parameter is explicitly split up into hydrodynamic spinup by the accretion disk gas and spindown via a jet powered by the BZ mechanism. The spinup parameter is then expressed as
\[s=s_{\rm HD}+s_{\rm EM}, \tag{9}\]
Figure 2: _Left:_ Magnetic flux parameter \(\phi\) as a function of Eddington ratio \(f_{\rm Edd}\), where color encodes different values of the BH spin \(a_{*}\). For each spin sampled by our simulation library, we plot our fitting function (Equation 3) in the appropriate color. _Right:_ MHD energy outflow efficiency \(\eta_{\rm MHD}\) as a function of magnetic flux parameter for each of our models. For each spin sampled by our simulation library, we plot the BZ prediction \(\eta_{\rm EM}\) (Equation 5) as colored lines. The agreement is excellent, implying that a BZ-like electromagnetic jet dominates the outflow energy in most of the simulations, except for \(a_{*}=0\), which features a weaker hydrodynamic outflow. As a point of reference, we plot the radiative efficiencies of thin disks with \(a_{*}\in\{0,0.68,0.9,0.97\}\) as horizontal lines.
where
\[s_{\rm HD}=l_{\rm HD}-2a_{*}e_{\rm HD}, \tag{10}\]
and
\[s_{\rm EM}=\mathrm{sign}(a_{*})\,\eta_{\rm EM}\left(\frac{1}{k\Omega_{H}}-2a_{*} \right). \tag{11}\]
We detail the calculation and modeling of \(s_{\rm HD}\) from \(l_{\rm HD}\) (the hydrodynamic specific angular momentum flux) and \(e_{\rm HD}\) (the hydrodynamic specific energy flux) in Appendix B. As explained there, we develop a fitting function for \(s_{\rm HD}\) given by Equation B24 that smoothly interpolates between the thin disk solution as \(f_{\rm Edd}\to 0\) and non-radiative GRMHD results as \(f_{\rm Edd}\to\infty\). Meanwhile, the electromagnetic component \(s_{\rm EM}\) depends on \(\eta_{\rm EM}\) and the parameter \(k\), which is the ratio of the angular frequency of field lines relative to that of the BH. We estimate \(\eta_{\rm EM}\) as a function of \(a_{*}\) and \(f_{\rm Edd}\) by combining Equation 5 and Equation 3. For \(k\), we adopt the following fit from the non-radiative GRMHD simulations of Lowell et al. (2023):
\[k(a_{*})=\begin{cases}0.23,&a_{*}<0\\ \min(0.1+0.5a_{*},0.35),&a_{*}>0\end{cases} \tag{12}\]
This choice gives \(k\) slightly less than the Blandford & Znajek (1977) monopole value of 0.5, which broadly agrees with other simulations in the literature (McKinney et al., 2012; Penna et al., 2013; Chael et al., 2023).
Figure 3: Spinup parameter \(s\) as a function of BH spin \(a_{*}\), with Eddington ratio \(f_{\rm Edd}\) encoded in the color. Values computed directly from our GRRMHD simulations are plotted in the upper panel, and the predictions of our fitting functions (Equation 13) are shown in the lower panel. At the lowest accretion rates, models approximately match the prediction for a razor-thin disk (Equation B30), shown as the dotted line. At the highest accretion rates, prograde and zero-spin models approach the curve found for pure GRMHD models (Equation 8), plotted as a dashed line. We plot our model predictions for \(s\) for \(f_{\rm Edd}=1\) and \(f_{\rm Edd}\to\infty\) with light blue and dark red curves respectively.
As one final modification to allow our model to support hot accretion flows, we make the following adjustment:
\[s=\begin{cases}s_{\rm HD}+s_{\rm EM}&f_{\rm Edd}>f_{c}\\ s_{\rm MAD}&f_{\rm Edd}\leq f_{c}\end{cases} \tag{13}\]
where \(f_{c}\) is a critical Eddington ratio below which the accretion flow should transition to the radiatively inefficient hot accretion mode (Narayan and Yi, 1994, 1995; Abramowicz et al., 1995). Following previous efforts to model the evolution of black hole populations, we adopt \(f_{c}=3\times 10^{-2}\)(Merloni and Heinz, 2008; Volonteri et al., 2013). The exact Eddington ratio at which this transition occurs is poorly constrained and unlikely to be a sharp transition (Cho and Narayan, 2022). Different values of \(f_{c}\) may be adopted without qualitatively changing our formulae.
Our final result for the spinup parameter \(s\) (Equation 13) can thus be obtained from just two parameters (\(a_{*}\) and \(f_{\rm Edd}\)) by inserting our fitting functions for \(\phi(a_{*},f_{\rm Edd})\) (Equation 3), \(s_{\rm HD}(a_{*},f_{\rm Edd})\) (Equation B24), and \(\eta_{\rm EM}(a_{*},\phi)\) (Equation 5). As constructed, Equation 13 can be applied to all physical values of \(a_{*}\in[-1,1]\) and \(f_{\rm Edd}\in(0,\infty)\).
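The pieces above assemble into a single function for \(s(a_{*},f_{\rm Edd})\). The sketch below is illustrative and distinct from the script released with this paper: the hydrodynamic term \(s_{\rm HD}\) is passed in as a callable because its fit (Equation B24) lives in Appendix B and is not reproduced here, and the guard at \(a_{*}=0\) simply avoids a division by zero in Equation 11, where the electromagnetic term vanishes anyway. Note also that the critical ratio \(f_{c}=3\times 10^{-2}\) of Equation 13 is a different quantity from the \(f_{c}=1.88\) appearing in the magnetization fit.

```python
import numpy as np

def k_ratio(a_star):
    """Field-line to horizon angular-frequency ratio k (Equation 12)."""
    return 0.23 if a_star < 0 else min(0.1 + 0.5 * a_star, 0.35)

def s_mad(a_star):
    """Spinup parameter of non-radiative MAD disks (Equation 8)."""
    return (0.45 - 12.53 * a_star - 7.80 * a_star**2 + 9.44 * a_star**3
            + 5.71 * a_star**4 - 4.03 * a_star**5)

def s_em(a_star, f_edd):
    """Electromagnetic spindown term (Equation 11), using phi_fit and eta_em above."""
    if a_star == 0.0:
        return 0.0
    eta = eta_em(a_star, phi_fit(a_star, f_edd))
    return np.sign(a_star) * eta * (1.0 / (k_ratio(a_star) * omega_h(a_star)) - 2.0 * a_star)

def spinup(a_star, f_edd, s_hd, f_hot=3e-2):
    """Total spinup parameter s (Equation 13); s_hd is a callable s_hd(a_star, f_edd)."""
    if f_edd <= f_hot:
        return s_mad(a_star)
    return s_hd(a_star, f_edd) + s_em(a_star, f_edd)
```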
The model predictions from Equation 13 are shown in the bottom panel of Figure 3. The model captures the behavior seen in the simulations (upper panel) exceptionally well, especially for spinning BHs. For \(a_{*}=0\), it underestimates the evolution of \(s\) with \(f_{\rm Edd}\). We speculate that this may be due to the exclusion of angular momentum loss due to hydrodynamic wind, evident in Figure 2. In light blue, we plot the model's prediction for \(s\) when \(f_{\rm Edd}=1\). It is quite similar to the thin disk solution, but has a root, which corresponds to an equilibrium value of \(a_{*}\) for fixed \(f_{\rm Edd}\), at \(a_{*,\rm eq}\approx 0.8\) instead of 1. In red, we plot the limit as \(f_{\rm Edd}\to\infty\). It follows the non-radiative GRMHD fitting function well, with minor deviations in the retrograde regime. This curve exhibits two kinks originating from the piece-wise nature of Equation 12. As \(f_{\rm Edd}\to f_{c}\), \(s\) is well-approximated by the thin disk solution (dotted black line) by construction. In any case, the key result from the red line is that, as \(f_{\rm Edd}\to\infty\), the equilibrium spin (where \(s=0\)) approaches \(a_{*,\rm eq}\approx 0\).
In Figure 4, we plot the equilibrium spin \(a_{*,\rm eq}\) as a function of Eddington ratio, found by taking Equation 13 and solving the condition \(s=0\) at fixed \(f_{\rm Edd}\). We demarcate three different physical regimes: (i) hot accretion for \(f_{\rm Edd}<f_{c}\), (ii) what is classically modeled as a thin disk for \(f_{c}<f_{\rm Edd}<1\), and (iii) super-Eddington accretion for \(f_{\rm Edd}>1\). In reality, \(s\) and \(a_{*,\rm eq}\) should evolve more gradually around \(f_{\rm Edd}\approx f_{c}\), but we lack a detailed understanding of this transition and are unable to model it more realistically in this work.
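The equilibrium spin plotted in Figure 4 follows from root finding on the function above. A sketch using SciPy's Brent solver is shown here; it assumes \(s\) changes sign between the bracketing spins, which holds for this model at fixed \(f_{\rm Edd}\).

```python
from scipy.optimize import brentq

def equilibrium_spin(f_edd, s_hd, a_lo=-0.999, a_hi=0.999):
    """Solve s(a_star, f_edd) = 0 for the equilibrium spin at fixed Eddington ratio."""
    return brentq(lambda a: spinup(a, f_edd, s_hd), a_lo, a_hi)
```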
Our model permits the existence of BHs with a stable \(a_{*,\rm eq}\approx 1\) for Eddington ratios in the range \(f_{\rm Edd}\sim 0.03-0.3\), but \(a_{*,\rm eq}\) begins to decline above \(f_{\rm Edd}\approx 0.3\) and approaches 0 as the accretion rate becomes highly super-Eddington. The limiting equilibrium spin for extremely large values of \(f_{\rm Edd}\) is \(a_{*}=0.035\), as in the hot accretion regime (Narayan et al., 2022; Lowell et al., 2023), but note that this exact value is not very accurate and depends on the details of how spin-down is modeled. On the upper \(x\)-axis, we plot the evolutionary timescale of both mass and spin for a given \(f_{\rm Edd}\), given by \(t_{\rm Sal}/f_{\rm Edd}\) where
\[t_{\rm Sal}=\frac{\epsilon\sigma_{T}c}{4\pi Gm_{p}}=\epsilon\times 450\ \rm Myr \tag{14}\]
is called the Salpeter timescale, where \(\sigma_{T}\) is the Thomson cross-section and \(m_{p}\) is the proton mass. For the convenience of defining a spin-independent \(t_{\rm Sal}\), we adopt a fiducial value of \(\epsilon=0.1\) for its definition, such that \(t_{\rm Sal}=45\) Myr. Since mass and spin evolve on the same time-scale, a BH must accrete a significant
Figure 4: Equilibrium spin \(a_{*,\rm eq}\) as a function of Eddington ratio \(f_{\rm Edd}\) using our model (Equation 13). Systems with \(f_{\rm Edd}=1\) reach equilibrium at \(a_{*,\rm eq}\approx 0.8\), while those with a factor of a few smaller \(f_{\rm Edd}\) equilibrate near \(a_{*,\rm eq}\approx 1\). Systems with both \(f_{\rm Edd}\ll 1\) and \(f_{\rm Edd}\gg 1\) reach equilibrium near \(a_{*,\rm eq}\approx 0\). In the upper \(x\)-axis, we plot \(t_{\rm Sal}/f_{\rm Edd}\), the timescale over which both mass and spin evolve and thus the minimum timescale required to reach spin equilibrium.
fraction of its own mass to reach equilibrium spin2. In the hot accretion regime, this would occur on timescales easily exceeding the age of the universe, and thus such BHs will not naturally reach the equilibrium spin value through the BZ process (although noticeable evolution is still possible; Narayan et al., 2022). However, BHs which accrete continuously near or above the Eddington limit can reach their equilibrium spins in less than (sometimes very much less than) a Hubble time. Interestingly, such continuous and rapid assembly is invoked to explain the existence of massive quasars at \(z\gtrsim 6\) (e.g., Fan et al., 2003; Banados et al., 2018; Wang et al., 2021; Bogdan et al., 2023), which have accumulated masses up to \(10^{10}~{}M_{\odot}\) when the Universe was approximately 1 Gyr old.
Footnote 2: However, note that \(s\) measures the ratio of the spin evolution rate to the mass evolution rate. Hence for values of \(|s|\) approaching 10, spin evolves 10 times faster than mass.
## 4 Discussion and Conclusions
In this letter we presented a suite of GRRMHD simulations of radiative MAD accretion disks around BHs. The simulations cover a range of BH spins \(a_{*}\) from \(+0.97\) to \(-0.9\), and Eddington ratios \(f_{\rm Edd}\) from \(0.4\) to \(40\). We find two key qualitative results.
First, radiative disks in the MAD state around spinning BHs produce powerful jets as efficiently as the better-studied non-radiative disks (which are found in systems with \(f_{\rm Edd}\ll 1\)), and the power in the jet comes similarly from the BZ mechanism (see the right panel of Figure 2).
Second, the saturated magnetic flux \(\phi\) depends not only on the BH spin (as already known for non-radiative MAD models) but also on the Eddington ratio (see the left panel of Figure 2). As a result, radiative disks with \(f_{\rm Edd}\lesssim 0.3\) behave roughly like the standard thin accretion disk model, but systems with \(f_{\rm Edd}\gg 1\) are very different and closely resemble non-radiative models (see Figure 4). In particular, when \(f_{\rm Edd}\gg 1\), the accreting BHs spin-down rapidly toward an equilibrium \(a_{*}\approx 0\).
At a quantitative level, using the above suite of MAD GRRMHD simulations we have devised fitting functions which can be used to estimate magnetization \(\phi\) (Equation 3), jet feedback efficiency \(\eta\) (Equation 5), and spin evolution \(s\) (Equation 13), as a function of spin and Eddington ratio. Spindown via the BZ mechanism grows more efficient as Eddington ratio increases, but is already noticeable at \(f_{\rm Edd}\approx 1\), where the equilibrium spin is \(a_{*}=0.8\). This has important implications for feedback and spin-evolution of BHs in the near-Eddington to super-Eddington regime, such as flux-limited samples of AGN, rapidly assembling seeds in the early universe, and collapsar BHs.
In Figure 5, we plot evolutionary tracks for a selection of cosmologically motivated scenarios, each of which results in a BH with \(M\approx 10^{9}~{}M_{\odot}\). In each case, we have integrated Equation 13 using a standard Runge-Kutta-Fehlberg 4(5) integrator with adaptive step-sizing. For these examples, we make an important assumption that the accretion disk and BH angular momentum axes are always perfectly aligned, which need not generally be the case. Variations in disk tilt over cosmic time are an uncertainty that can lead to substantial differences in spin evolution, leading to lower spins if the angular momenta of the accreted material are more randomized (King et al., 2008; Berti and Volonteri, 2008). In the left column of Figure 5, we plot evolutionary scenarios with different fixed \(f_{\rm Edd}\) values shown as different colors. For \(f_{\rm Edd}=20,~{}1,~{}0.1,~{}0.01\), we initialize our BHs with \(M=10,~{}10^{7},~{}3\times 10^{8},~{}10^{9}~{}M_{\odot}\) and \(a_{*}=0,~{}0,~{}0,~{}0.998\), respectively. In all cases, 1 Gyr is enough for each of the BHs to approach their equilibrium spin (see Figure 4). These scenarios result in very different spin evolution and feedback as a function of time.
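A simplified version of this integration is sketched below. It holds \(f_{\rm Edd}\) fixed, adopts the fiducial efficiency \(\epsilon=0.1\) in the Eddington rate (so that \(\dot{M}_{\rm Edd}=M/t_{\rm Sal}\) with \(t_{\rm Sal}=45\) Myr), and uses SciPy's RK45 stepper rather than the RKF 4(5) integrator used for the figures; the spin-dependent efficiency of Appendix B is therefore not included.

```python
from scipy.integrate import solve_ivp

T_SAL_MYR = 45.0  # Salpeter time for a fiducial efficiency of 0.1 (Equation 14)

def evolve_track(m0, a0, f_edd, s_hd, t_final_myr):
    """Integrate dM/dt = f_Edd * M / t_Sal and da*/dt = s * f_Edd / t_Sal."""
    def rhs(t, y):
        m, a = y
        a = min(max(a, -0.999), 0.999)  # keep the spin in the physical range
        s = spinup(a, f_edd, s_hd)
        return [f_edd * m / T_SAL_MYR, s * f_edd / T_SAL_MYR]

    return solve_ivp(rhs, (0.0, t_final_myr), [m0, a0], rtol=1e-6, dense_output=True)
```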
Both the \(f_{\rm Edd}=20\) and the \(f_{\rm Edd}=1\) scenarios result in the accretion of \(10^{9}~{}M_{\odot}\) of material, but the \(f_{\rm Edd}=20\) scenario releases a total of \(7.8\times 10^{53}\) erg worth of feedback compared to \(5.3\times 10^{54}\) erg in the \(f_{\rm Edd}=1\) scenario, a factor of 7 difference. The reason is that the \(f_{\rm Edd}=20\) model reaches a lower equilibrium spin, which results in less efficient jet feedback. A consequence of this interesting result is that a BH could potentially grow _more_ efficiently in a super-Eddington state before having its mass supply cut off by excessive jet feedback. We have assumed a sharp transition between thin and thick accretion flows at an Eddington ratio of \(f_{c}=3\times 10^{-2}\). Evolving in the thin disk regime, the \(f_{\rm Edd}=0.1\) model spins _up_ to maximal spin and cannot power a very efficient jet, since lower Eddington ratio sources maintain weaker magnetization. On the other hand, the \(f_{\rm Edd}=0.01\) model evolves in the hot accretion flow regime and spins _down_ to near zero spin.
In the right column of Figure 5, we plot two different fueling-limited scenarios. In the "Constant \(\dot{M}\)" model, we envision that a galaxy provides constant \(\dot{M}\) that the BH can consume, regardless of the \(f_{\rm Edd}\) implied. In this model, we suggestively tune our parameters to match the formation of the Wang et al. (2021) quasar, which is observed with \(f_{\rm Edd}=0.67\) and \(M=1.6\times 10^{9}~{}M_{\odot}\) at \(z=7.642\), when the Universe was only 670 Myr old. After being initialized at \(10^{4}~{}M_{\odot}\) and \(a_{*}=0\), the BH accumulates mass in the super-Eddington regime as spin-down from the BZ mechanism keeps its spin low. Its
spin increases only as \(f_{\rm Edd}\to 1\), and it reaches an equilibrium spin of 0.9. Qualitatively consistent with our predictions for a powerful jet, Wang et al. (2021) report a relativistic outflow while also suggesting greater incidence of such powerful outflows at high redshift.
In the second "Power-Law \(\dot{M}\)" model, a \(10^{5}\)\(M_{\odot}\)\(a_{*}=0\) seed initially accretes at \(f_{\rm Edd}=15,000\), then the accretion rate declines as \(\dot{M}\propto(1+(t/10^{7}~{}{\rm yr})^{2})^{-1}\), motivated by Hopkins et al. (2006, 2006). Over the age of the Universe, this BH traverses all three accretion regimes, starting with \(a_{*}\approx 0\) while it is super-Eddington, rising to \(a_{*}\approx 0.9\) in the thin disk regime, then finally declining to \(a_{*}\approx 0.5\) in the hot accretion regime. It runs out of fuel before it can achieve the equilibrium spin \(\approx 0\) for its final \(f_{\rm Edd}\). Ending with \(f_{\rm Edd}\sim 10^{-6}\) and \(M\sim 10^{9}\)\(M_{\odot}\), this evolutionary track could represent the history of the most massive BHs resolvable on the sky, such as Event Horizon Telescope target Messier 87.
Figure 5 illustrates how a BH's assembly history is imprinted on its final spin value, motivating observational spin constraints of supermassive BHs. For \(0.01\lesssim f_{\rm Edd}\lesssim 0.3\), X-ray reflection spectroscopy has been most successful in accumulating large spin samples. The measured spin values tend to be highly skewed towards \(a_{*}\approx 1\)(see Reynolds, 2021, for a recent review), in agreement with the equilibrium spin of a thin accretion disk, as well as the equilibrium spin value suggested by the present work for that range of \(f_{\rm Edd}\). To complement these thin disk spin constraints, the next-generation Event Horizon Telescope aims to measure spins of dozens of supermassive BHs in the hot accretion (\(f_{\rm Edd}\ll 1\)) regime (Pesce et al., 2022; Ricarte et al., 2023). Taking the "Power-Law \(\dot{M}\)" model in Figure 5 as an example, we would predict typical spin values roughly half-way between 1 and 0 (but recall that these calculations have neglected angular momentum flips and BH-BH mergers, e.g., Berti and Volonteri, 2008). It would be interesting to see what future observations show. Unfortunately, there is no known direct probe of spin in the super-Eddington regime, where we predict equilibrium spins close to 0. Current probes of spin rely on the existence of a sharp transition in the dynamics of the accreting disk at the innermost stable circular orbit. Such a feature is expected to be present in geometrically thin disks (and is the basis of the X-ray reflection method), but it is washed out in geometrically thick disks such as are found for \(f_{\rm Edd}\gg 1\) (e.g., this work).
It is worth mentioning that in the present radiative MAD models, as well as others in the literature, roughly \(\sim 60\%\) of the jet power can be transformed into radiation at large radius (Curd and Narayan, 2023). This can occur because inverse Compton scattering can transform much of the kinetic energy of the jet fluid into highly beamed radiation. However, we refrain from providing radiative efficiencies from our simulations, because we find that numerical floors in the jet region can artificially inflate the total energy in the jet at large radii. Fortunately, this artificially injected energy simply outflows from the simulation box and does not affect the region of interest.
The analytic formulae devised in this work can be applied to galactic or cosmological scale simulations, conveniently bridging the sub-Eddington and super-Eddington regimes. When placing these models in an astrophysical context, the most important caveat is the assumption that these systems are magnetically saturated in the MAD state. Event horizon scale polarimetric imaging of the largest black holes on the sky does currently favor MAD models over their SANE counterparts (Event Horizon Telescope Collaboration et al., 2021, 2022; Wielgus et al., 2022), and ab-initio simulations of gas and magnetic field transport onto Sgr A* can indeed naturally produce MAD states (Ressler et al., 2020, 2023), but this evidence pertains only to low-Eddington ratio BHs. Super-Eddington MAD disks can explain jetted tidal disruption events (Tchekhovskoy et al., 2014; Curd and Narayan, 2019), but these objects are only \(\sim\)1% of known TDEs and may not be representative of the typical super-Eddington disk. Future observational and theoretical developments to test the robustness of the MAD state would help validate the modeling performed here. Furthermore, our simulations are limited to \(M=10\)\(M_{\odot}\) and \(M=10^{4}\)\(M_{\odot}\), and Figure 2 hints at a possible trend with mass. We do not expect our results to be very sensitive to BH mass on physical grounds, but this should be verified in future work in the context of varying the metallicity as well.
## 5 Acknowledgments
This work was supported in part by NSF grants AST1816420 and OISE-1743747, and by the Black Hole Initiative at Harvard University, made possible through the support of grants from the Gordon and Betty Moore Foundation and the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Moore or Templeton Foundations. koral(Sadowski et al., 2013, 2014), Matplotlib (Hunter, 2007), SciPy (Virtanen et al., 2020), NumPy (Harris et al., 2020)
## 6 Data Availability
Most plotted values can be downloaded from data files that accompany this publication. In addition, we provide a Python script including the equations presented in this work, as well as the integrator that was used to produce Figure 4 and Figure 5.
Figure 5: Example evolutionary pathways of BH mass and spin computed using the fitting functions derived in this work. In each column, the panels show from top to bottom the Eddington ratio \(f_{\rm Edd}\), the BH mass \(M\), BH spin \(a_{*}\), and the jet power \(P_{\rm jet}\) as a function of time. Each pathway is tuned to produce \(M\approx 10^{9}\)\(M_{\odot}\) at the final time. _Left:_ Constant Eddington ratio scenarios, each of which reaches a distinct equilibrium spin from which their recent cosmically averaged \(f_{\rm Edd}\) could be inferred. _Right:_ Fueling-limited scenarios where we prescribe \(\dot{M}\) as a function of time. The Constant \(\dot{M}\) scenario is tuned to match the Wang et al. (2021) quasar found when the Universe was only 670 Myr old (\(z=7.642\)). The Power-Law \(\dot{M}\) scenario has a prescribed time-variable accretion rate, \(\dot{M}\propto\left(1+\left(t/10^{7}\ {\rm yr}\right)^{2}\right)^{-1}\) (motivated by Hopkins et al., 2006, 2006), and might represent the history of a currently low-Eddington rate SMBH at the center of a galaxy cluster such as Messier 87.
|
2306.04508 | Enhancing In-Context Learning with Answer Feedback for Multi-Span
Question Answering | Whereas the recent emergence of large language models (LLMs) like ChatGPT has
exhibited impressive general performance, it still has a large gap with
fully-supervised models on specific tasks such as multi-span question
answering. Previous researches found that in-context learning is an effective
approach to exploiting LLM, by using a few task-related labeled data as
demonstration examples to construct a few-shot prompt for answering new
questions. A popular implementation is to concatenate a few questions and their
correct answers through simple templates, informing LLM of the desired output.
In this paper, we propose a novel way of employing labeled data such that it
also informs LLM of some undesired output, by extending demonstration examples
with feedback about answers predicted by an off-the-shelf model, e.g., correct,
incorrect, or incomplete. Experiments on three multi-span question answering
datasets as well as a keyphrase extraction dataset show that our new prompting
strategy consistently improves LLM's in-context learning performance. | Zixian Huang, Jiaying Zhou, Gengyang Xiao, Gong Cheng | 2023-06-07T15:20:24Z | http://arxiv.org/abs/2306.04508v1 | # Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering
###### Abstract
Whereas the recent emergence of large language models (LLMs) like ChatGPT has exhibited impressive general performance, it still has a large gap with fully-supervised models on specific tasks such as multi-span question answering. Previous researches found that in-context learning is an effective approach to exploiting LLM, by using a few task-related labeled data as demonstration examples to construct a few-shot prompt for answering new questions. A popular implementation is to concatenate a few questions and their correct answers through simple templates, informing LLM of the desired output. In this paper, we propose a novel way of employing labeled data such that it also informs LLM of some undesired output, by extending demonstration examples with feedback about answers predicted by an off-the-shelf model, e.g., correct, incorrect, or incomplete. Experiments on three multi-span question answering datasets as well as a keyphrase extraction dataset show that our new prompting strategy consistently improves LLM's in-context learning performance.
## 1 Introduction
Recently, the rise of large language models (LLMs) [5, 22, 21] represented by ChatGPT1 provides a new paradigm for NLP research, which can perform well using only natural language instructions rather than being trained on the target dataset. Based on LLMs, many tasks are expected to be more convenient and accessible to users with different needs, including _multi-span question answering_ (MSQA). MSQA aims to automatically find one-to-many answers at the span level for a given question, which has attracted many in-depth research works [15, 26] based on pre-trained language models (PLMs), and has broad application scenarios such as medical question answering [34, 11].
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
However, compared with PLMs fine-tuned on the complete training data, LLMs still have a large gap on difficult MSQA datasets [13] such as DROP [8, 21]. To address it, _in-context learning_[7] is a promising approach to enhancing the capability of LLMs. The idea of in-context learning is to concatenate the test question with an analogous demonstration context to prompt LLMs to generate answers. As shown in the left half of Figure 1, the demonstration context consists of a few task-related demonstration examples with labeled answers, which can be retrieved from the training set of the target dataset.
**Motivation:** Although existing works have designed a range of approaches for retrieving and exploiting demonstration examples [18; 1; 20], the common practice of constructing a demonstration context is still concatenating questions and labeled answers through simple templates. We argue that only showing demonstration questions with correct answers may not guide LLMs to think deeply about demonstration examples, e.g., _lack of reflection on mistakes in problem solving_, which may lead to under-utilization of the labeled answers.
**Our Work:** In this paper, we propose to enhance in-context learning with diverse information derived from labeled answers to improve their utilization. Inspired by supervised learning which receives feedback from training loss to update model, we design a novel prompting strategy for LLM to obtain _feedback_ information in the form of _corrected answers_.
Specifically, as shown in the right part of Figure 1, this strategy first answers the demonstration question using an off-the-shelf model (e.g., based on conventional PLMs), compares its results with labeled answers, and records the corrected answers as feedback (e.g., correct, incorrect, or missing answers). Then we use both demonstration examples and corrected answers to construct an enhanced prompt for LLM. With this idea, we conducted experiments on three MSQA datasets as well as one keyphrase extraction dataset. The results show that our feedback-based prompting strategy significantly improves the capability of ChatGPT to answer multi-span questions.
Figure 1: An example of our new prompting strategy (right) compared with the conventional prompting strategy (left). Our strategy first answers the demonstration question using an off-the-shelf model (e.g., based on conventional PLMs) and records the corrected answers as feedback, and then combines demonstration examples with corrected answers to construct a prompt for LLM.
## 2 Related Work
### Large Language Models
From GPT-3 [5] to the latest GPT-4 [21], the emergence of powerful LLMs in recent years has triggered new thinkings and paradigms in NLP research. LLMs perform various downstream tasks using only text instructions, have matched state-of-the-art results in many tasks including machine translation [10] and relation extraction [30], and have influenced a range of domain applications such as education [28] and medical writing [4]. Despite the great success of LLMs, studies have also reported that it still has shortcomings in specific tasks [24, 2] and has a large gap in handling difficult tasks compared with PLM-based methods [13].
In particular, question answering (QA) is a task with long-term research and is faced with various challenges. The performance of LLMs on QA has received extensive attention. Some analytical works reported that LLMs have many limitations in QA tasks, including insufficient stability [29], poor performance on newly released datasets [17], and suffering from hallucinations [2]. Based on empirical observations, some works designed methods to improve the performance of LLMs on specific QA tasks such as commonsense QA [3], open-domain QA [16], and multi-document QA [23].
However, as an important and realistic QA task, _Multi-Span QA (MSQA) currently lacks dedicated research based on LLMs, whose performance on this task remains unclear._ In this paper, we propose and evaluate a novel strategy for effectively adapting LLMs to the MSQA task.
### In-Context Learning
With the development of LLMs, in-context learning [7] has also received extensive attention in recent years. Some research works studied it from the perspective of demonstration formatting, proposing template engineering to construct better human-written or automatically generated prompts [19, 32]. Some other methods enhanced in-context learning by selecting better demonstration examples, searching for the best ordering of demonstration examples [20], or using the KNN algorithm with lexical [1] or semantic [18] features to dynamically retrieve demonstration examples for each question.
The usage of labeled answers in the above methods is to append them to the question using some simple templates, which leads to potential under-utilization of labeled answers. The work most similar to ours is [30], which feeds demonstration examples to LLM to obtain a clue about the gold labels in a given document in a relation extraction task. However, the clue generated by LLM often contains mistakes, which also causes some loss of label information, and it is very expensive to interact every demonstration example with LLM. By contrast, in this paper, _we obtain answer feedback by comparing the prediction results on the demonstration example with the labeled answers, and use it to enrich in-context learning with more insightful information obtained from the corrected answers._
## 3 Approach
Given a question \(Q\) and a reference document \(D\), the goal of MSQA is to generate a set of \(n\) answers \(\mathcal{A}=\{A_{1},\ldots,A_{n}\}\), where \(A_{i}\) is a span-level text that may be either present in \(D\) or absent in \(D\). Let \(\mathcal{T}=\{[D_{1}^{T},Q_{1}^{T},\mathcal{A}_{1}^{T}],\ldots\}\) be a set of labeled examples, i.e., the set of all the available question-document-answ triples from which demonstration examples can be selected for in-context learning, e.g., the training set of a MSQA dataset.
Figure 2 gives an overview of our strategy, which includes a retrieval stage searching for relevant demonstration examples, an exercise stage for producing feedback, and a reasoning stage for in-context learning with feedback.
### Retrieval Stage
We first search for a few relevant demonstration examples for test question \(Q\) from the labeled examples set \(\mathcal{T}\). To this end, a question index \(\mathcal{I}\) is built for each question \(Q_{i}^{T}\) in \(\mathcal{T}\), and a retrieval module is executed to obtain the set \(\mathcal{E}\) of top-\(k\) relevant labeled examples:
\[\begin{split}\mathcal{I}&=\texttt{Index}(\mathcal{T })\\ \mathcal{E}&=\texttt{Retriever}(Q,\mathcal{I}), \text{where}\quad\mathcal{E}\subset\mathcal{T}\,,\end{split} \tag{1}\]
where \(\texttt{Index}(\cdot)\) and \(\texttt{Retriever}(\cdot,\cdot)\) are indexing and retrieval functions, respectively, and we realize them using an inverted index and BM25 in our experiments. \(\mathcal{E}=\{[D_{1}^{E},Q_{1}^{E},\mathcal{A}_{1}^{E}],\ldots,[D_{k}^{E},Q_{k }^{E},\mathcal{A}_{k}^{E}]\}\) is the selected demonstration examples set with size \(k\).
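The paper realizes \(\texttt{Index}(\cdot)\) and \(\texttt{Retriever}(\cdot,\cdot)\) with an inverted index and BM25; the short sketch below is one possible realization using the rank_bm25 package and whitespace tokenization, both of which are our illustrative choices rather than details stated in the text.

```python
import numpy as np
from rank_bm25 import BM25Okapi

def build_index(train_questions):
    """Index(T): a BM25 index over the tokenized training questions."""
    return BM25Okapi([q.lower().split() for q in train_questions])

def retrieve(index, question, k=3):
    """Retriever(Q, I): indices of the top-k most lexically similar training questions."""
    scores = index.get_scores(question.lower().split())
    return np.argsort(scores)[::-1][:k]
```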
Figure 2: An overview of our prompting strategy, which includes a retrieval stage searching for relevant demonstration examples, an exercise stage for producing feedback, and a reasoning stage for in-context learning with feedback.
### Exercise Stage
Then we regard the selected demonstration examples \(\mathcal{E}\) as exercises to predict their answers and extend them with corrected answers as feedback. The set of predicted answers \(\mathcal{A}_{i}^{P}\) for each demonstration question \(Q_{i}^{E}\) is obtained as follows:
\[\mathcal{A}_{i}^{P}=\texttt{QAModel}(D_{i}^{E},Q_{i}^{E})\,, \tag{2}\]
where \(\texttt{QAModel}(\cdot,\cdot)\) is an off-the-shelf MSQA model (e.g., a conventional MSQA method based on PLMs), and \(\mathcal{A}_{i}^{P}=\{A_{1}^{P},\ldots,A_{m}^{P}\}\) is the predicted answers set with size \(m\).
Next, the predicted answers set \(\mathcal{A}_{i}^{P}\) is compared with the labeled answers set \(\mathcal{A}_{i}^{E}\) to obtain feedback about the predicted answers. The feedback consists of three parts: the correctly predicted set \(\mathcal{A}_{i}^{C}\), the incorrectly predicted set \(\mathcal{A}_{i}^{I}\), and the unpredicted (i.e., missing) set \(\mathcal{A}_{i}^{M}\), satisfying that \(|\mathcal{A}_{i}^{C}|+|\mathcal{A}_{i}^{I}|=m\) and \(|\mathcal{A}_{i}^{C}|+|\mathcal{A}_{i}^{M}|=n\).
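A minimal sketch of this comparison is given below. It assumes answers are matched after lower-casing and whitespace stripping; the exact matching rule used in the experiments is not stated in the paper.

```python
def answer_feedback(predicted, labeled):
    """Split predictions into correct, incorrect, and missing answer sets."""
    normalize = lambda a: a.strip().lower()
    pred = {normalize(a) for a in predicted}
    gold = {normalize(a) for a in labeled}
    return pred & gold, pred - gold, gold - pred  # correct, incorrect, missing
```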
### Reasoning Stage
After obtaining the answer feedback, an extended demonstration context is constructed from \(\mathcal{E}\) and the feedback. For each demonstration example, we use
\begin{table}
\begin{tabular}{c|c|l} \hline Task & Function & Templates \\ \hline \multirow{8}{*}{MSQA \& KE} & \multirow{8}{*}{
\begin{tabular}{c} FeedbackTemp \\ (\(\cdot,\cdot,\cdot\)) \\ \end{tabular} } & Here are some **correct** answers (or present/absent keyphrases) responded by other AI model: \\ & & 1. **[CORRECT1]**; 2. **[CORRECT2]**;... \\ & & Here are some **incorrect** answers (or present/absent keyphrases) responded by other AI model: \\ & & 1. **[INCORRECT1]**; 2. **[INCORRECT2]**;... \\ & & Here are some answers (or present/absent keyphrases) **missed** by other AI model: \\ & & 1. **[MISS1]**; 2. **[MISS2]**;... \\ \hline \multirow{8}{*}{MSQA} & TaskTemp & Reading the passage: **[DOCUMENT]** \\ & & Extract spans from the above passage to answer the question: **[QUESTION]** \\ & & Answer as a list e.g. 1. answer1; 2. answer2 \\ & & Answer: 1. **[ANS1]**; 2. **[ANS2]**;... \\ \cline{1-1} & & Example1: **[DEMO CONTEXT1]** \\ & & Example2: **[DEMO CONTEXT2]** \\ & &... \\ & & Then, answer me a question like the above examples: **[TEST QUESTION]** \\ \hline \multirow{8}{*}{KE} & TaskTemp & Reading the passage: **[DOCUMENT]** \\ & & Extract present (or Generate absent) keyphrases from the above passage: **[**Response as a list e.g. 1. keyphrase1; 2. keyphrase2** \\ & & Keyphrases: 1. **[KEYPHRASE1]**; 2. **[KEYPHRASE2]**;... \\ \cline{1-1} & & Example1: **[DEMO CONTEXT1]** \\ & ConcatTemp & Example2: **[DEMO CONTEXT2]** \\ & &... \\ & & Then, extract present (or generate absent) keyphrases like the above cases: **[TEST QUESTION]** \\ \hline \end{tabular}
\end{table}
Table 1: Prompting templates used for MSQA and Keyphrase Extraction (KE)
a task description template to construct demonstration context \(\texttt{Prompt}_{i}^{\text{DEMO}}\), use a feedback template to construct feedback context \(\texttt{Prompt}_{i}^{\text{FB}}\), and the extended demonstration context \(\texttt{Prompt}_{i}^{\text{DEMO}+}\) is constructed by concatenating \(\texttt{Prompt}_{i}^{\text{DEMO}}\) and \(\texttt{Prompt}_{i}^{\text{FB}}\):
\[\texttt{Prompt}_{i}^{\text{DEMO}} =\texttt{TaskTemp}(D_{i}^{T},Q_{i}^{T},\mathcal{A}_{i}^{T}) \tag{3}\] \[\texttt{Prompt}_{i}^{\text{FB}} =\texttt{FeedbackTemp}(\mathcal{A}_{i}^{C},\mathcal{A}_{i}^{I}, \mathcal{A}_{i}^{M})\] \[\texttt{Prompt}_{i}^{\text{DEMO}+} =[\texttt{Prompt}_{i}^{\text{DEMO}};\texttt{Prompt}_{i}^{\text{FB }}]\,,\]
where \(\texttt{TaskTemp}(\cdot,\cdot,\cdot)\) and \(\texttt{FeedbackTemp}(\cdot,\cdot,\cdot)\) are two template filling functions. The details of the templates can be found in Table 1.
For the test question \(Q\), we construct test context using the same task description template but set the answers to an empty set:
\[\texttt{Prompt}_{i}^{\text{TEST}}=\texttt{TaskTemp}(D,Q,\varnothing)\,. \tag{4}\]
Finally, we use a concatenation template to construct the complete prompt and feed it into LLM:
\[\texttt{Prompt} =\texttt{ConcatTemp}(\{\texttt{Prompt}_{i}^{\text{DEMO}+},\dots \},\texttt{Prompt}_{i}^{\text{TEST}}) \tag{5}\] \[A^{\text{LLM}} =\texttt{LLM}(\texttt{Prompt})\,,\]
where \(\texttt{ConcatTemp}(\cdot,\cdot)\) is a template filling function detailed in Table 1, and \(A^{\text{LLM}}\) is a text answer returned by LLM. Since the instruction in the prompt requires LLM to answer in the form of a list, we can easily parse the text into multiple span-level answers to the test question.
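To make the template plumbing concrete, the sketch below assembles the full prompt for the MSQA task following the wording of Table 1; the helper names are ours and the exact line breaks are an assumption.

```python
def task_template(document, question, answers):
    """TaskTemp: question plus labeled answers (empty answers for the test question)."""
    prompt = (f"Reading the passage: {document}\n"
              f"Extract spans from the above passage to answer the question: {question}\n"
              "Answer as a list e.g. 1. answer1; 2. answer2\n")
    listed = "; ".join(f"{i + 1}. {a}" for i, a in enumerate(answers))
    return prompt + (f"Answer: {listed}" if answers else "Answer:")

def feedback_template(correct, incorrect, missing):
    """FeedbackTemp: verbalize the three feedback sets."""
    def listed(items):
        return "; ".join(f"{i + 1}. {a}" for i, a in enumerate(items))
    parts = []
    if correct:
        parts.append("Here are some correct answers responded by other AI model: " + listed(correct))
    if incorrect:
        parts.append("Here are some incorrect answers responded by other AI model: " + listed(incorrect))
    if missing:
        parts.append("Here are some answers missed by other AI model: " + listed(missing))
    return "\n".join(parts)

def build_prompt(demos, test_document, test_question):
    """ConcatTemp: demos is a list of (document, question, answers, feedback) tuples."""
    blocks = []
    for i, (doc, q, answers, fb) in enumerate(demos, start=1):
        blocks.append(f"Example{i}: " + task_template(doc, q, answers)
                      + "\n" + feedback_template(*fb))
    blocks.append("Then, answer me a question like the above examples: "
                  + task_template(test_document, test_question, []))
    return "\n\n".join(blocks)
```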
## 4 Experimental Setup
We refer to our approach as **FBPrompt**.
### Datasets
We compared FBPrompt with baselines on three MSQA datasets: **MultiSpanQA**[15], **QUOREF**[6], and **DROP**[8]. Since their test sets are hidden, we used the official development sets as our test sets. In addition, we used a keyphrase extraction dataset, **INSPEC**[9], which has a format similar to MSQA, with one document as input and multiple span-level outputs, but without a question. Considering the experimental cost, we randomly sampled only 500 examples for evaluation from QUOREF and DROP. Table 2 shows some statistics about these datasets.
### Baselines
We compared FBPrompt with five popular usages of LLM as follows:
**Zero-shot** prompts LLM only using handle-written instructions without demonstration examples.
**Random** Sampling randomly selects \(k\) demonstration examples from the training set for each test question to construct prompt as done in [18].
**BM25** calculates lexical similarity between questions to obtain top-\(k\) relevant demonstration examples for each test question. It can be viewed as a simplified version of our FBPrompt--without using answer feedback.
**KATE**[18] uses the KNN algorithm to select the \(k\) demonstration examples with the highest semantic similarity scores for each test question. We implemented it based on dense passage retrieval [12].
**Label-induced** Reasoning [30] feeds labeled answers, the question, and the document to LLM to obtain a clue about the relation between question and answers. We implemented it using the same BM25 results as our FBPrompt.
### Evaluation Metrics
We evaluated on each dataset using their official metrics [15, 8, 6, 31]. For MultiSpanQA, we used Exact Match F1 (**EM**) and Partial Match F1 (**PM**). For QUOREF and DROP, we used Exact Match Global (**EM\({}_{\mathbf{G}}\)**) and F1 score (**F1**). For INSPEC, we used macro-averaged **F1@5** and **F1@M**.
### Implementation Details
We used the official OpenAI API2 with the model gpt-3.5-turbo-0301 for all our experiments. We used the T5-base [25] model as our off-the-shelf model in FBPrompt. For the keyphrase extraction task, we performed extraction of present keyphrases and generation of absent keyphrases in two independent steps with two slightly different instructions, as shown in Table 1. Unless otherwise specified, we set \(k=3\), i.e., FBPrompt and all the few-shot baselines used three demonstration examples.
Footnote 2: [https://platform.openai.com/](https://platform.openai.com/)
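For completeness, a single call to the model used in the experiments might look as follows. This sketch assumes the pre-1.0 interface of the openai Python package; the decoding temperature and the answer-parsing regular expression are our choices, as the paper does not report them.

```python
import re
import openai

def query_llm(prompt, model="gpt-3.5-turbo-0301"):
    """One chat-completion call followed by parsing of the numbered answer list."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response["choices"][0]["message"]["content"]
    # Split "1. answer1; 2. answer2" style output into individual spans.
    return [s.strip(" ;.\n") for s in re.split(r"\d+\.\s*", text) if s.strip(" ;.\n")]
```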
## 5 Experimental Results
### Comparison with Baselines
In Table 3, FBPrompt significantly outperforms previous LLM-based methods on all metrics in the four datasets. In particular, compared with BM25 which
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline Dataset & Type & \# Test & \# Used & Present Labels (\%) & Avg. \# Answers \\ \hline MultiSpanQA[15] & MSQA & 653 & 653 & 100 & 2.89 \\ QUOREF[6] & MSQA & 2537 & 500 & 100 & 1.14 \\ DROP[8] & MSQA & 9,622 & 500 & 73.03 & 1.09 \\ INSPEC[9] & KP & 500 & 500 & 26.42 & 2.48 \\ \hline \end{tabular}
\end{table}
Table 2: Dataset statistics. Present Labels (%) indicates the percentage of answers in MSQA datasets or percentage of keyphrases in keyphrase extraction datasets that explicitly appear in the document.
uses the same demonstration examples as ours, FBPrompt outperforms it by a large margin, demonstrating the benefit brought by our proposed answer feedback.
We also show the state-of-the-art (SOTA) results reported by other papers using fully-supervised fine-tuned models: they are [14] for MultiSpanQA, [26] for QUOREF, [33] for DROP, and [27] for INSPEC. Although the experimental results on the three MSQA datasets are not directly comparable due to inconsistent test data, it can be found that LLM-based models are still weaker than the fully-supervised models, but performs relatively well on the keyphrase extraction dataset INSPEC. FBPrompt closes the gap to SOTA on MSQA and achieves new SOTA results on the INSPEC dataset.
### The Effectiveness of Different Feedback
We compare FBPrompt with variants using only one type of feedback to analyze whether all three types of feedback bring benefits. The results reported in Table 4 reveal that each part of the feedback has an effect in improving the performance of LLM. In particular, using only correct answers leads to the largest loss compared with using only incorrect or missing answers, which shows that negative feedback brings the largest benefit to LLM.
\begin{table}
\begin{tabular}{l c c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{MultiSpanQA} & \multicolumn{2}{c|}{QUOREF} & \multicolumn{2}{c|}{DROP} & \multicolumn{3}{c}{INSPEC} \\ \cline{2-13} & EM & PM & EM\({}_{\text{G}}\) & F1 & EM\({}_{\text{G}}\) & F1 & \multicolumn{2}{c|}{Present} & \multicolumn{2}{c}{Absent} \\ \cline{5-13} & & & & & & & F1@5 & F1@M & F1@5 & F1@M \\ \hline FBPrompt & **64.60** & **83.11** & **73.60** & **80.55** & **62.00** & **69.11** & **0.425** & **0.499** & **0.034** & **0.055** \\ - only correct & 62.70 & 82.75 & 71.40 & 79.69 & 58.40 & 65.60 & 0.401 & 0.463 & 0.027 & 0.046 \\ - only incorrect & 62.93 & 82.97 & 72.40 & 80.23 & 60.20 & 67.92 & 0.417 & 0.490 & 0.030 & 0.048 \\ - only missing & 63.48 & 82.90 & 72.80 & 79.75 & 61.20 & 68.80 & 0.416 & 0.480 & 0.027 & 0.046 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effectiveness of different feedback. The best results are in bold.
\begin{table}
\begin{tabular}{l c c|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{MultiSpanQA} & \multicolumn{2}{c|}{QUOREF} & \multicolumn{2}{c|}{DROP} & \multicolumn{3}{c}{INSPEC} \\ \cline{2-13} & EM & PM & EM\({}_{\text{G}}\) & F1 & EM\({}_{\text{G}}\) & F1 & \multicolumn{2}{c|}{Present} & \multicolumn{2}{c}{Absent} \\ \cline{5-13} & & & & & & & & F1@5 & F1@M & F1@5 & F1@M \\ \hline SOTA & **73.13\({}^{*}\)** & **83.36\({}^{*}\)** & **80.61\({}^{*}\)** & **86.70\({}^{*}\)** & **84.86\({}^{*}\)** & **87.54\({}^{*}\)** & 0.401\({}^{*}\) & 0.476\({}^{*}\) & 0.030\({}^{*}\) & 0.041\({}^{*}\) \\ \hline Zero-shot & 39.47 & 68.14 & 33.60 & 51.07 & 5.81 & 17.25 & 0.298\({}^{*}\) & 0.417\({}^{*}\) & 0.016\({}^{*}\) & 0.030\({}^{*}\) \\ Random & 58.62 & 80.62 & 71.40 & 80.25 & 47.70 & 60.53 & 0.401 & 0.472 & 0.033 & 0.051 \\ Label-induced & 54.56 & 76.99 & 64.40 & 71.96 & 12.63 & 16.47 & 0.115 & 0.135 & 0.009 & 0.013 \\ KATE & 60.78 & 81.51 & 73.00 & 79.76 & 50.90 & 60.69 & 0.399 & 0.468 & 0.026 & 0.038 \\ BM25 & 61.33 & 81.63 & 70.80 & 79.00 & 58.40 & 65.93 & 0.405 & 0.470 & 0.029 & 0.051 \\ \hline FBPrompt & 64.60 & 83.11 & 73.60 & 80.55 & 62.00 & 69.11 & **0.425** & **0.499** & **0.034** & **0.055** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Main results on MSQA. The best results are in bold. \({}^{\ddagger}\) indicates the results reported in [27]. \({}^{*}\) indicates that the results are not completely comparable due to the difference in test data.
### Comparison with Random Feedback
Then, we simulate feedback by randomly generating predicted answers to observe whether the improvement of FBPrompt is really brought about by our carefully designed feedback. For the labeled answers set \(\mathcal{A}_{i}^{E}\) from demonstration example \([D_{i}^{E},Q_{i}^{E},\mathcal{A}_{i}^{E}]\), we randomly selected a number \(\hat{n}_{1}\) in the range \([0,|\mathcal{A}_{i}^{E}|]\), and randomly sampled \(\hat{n}_{1}\) positive answers from the labeled answers set \(\mathcal{A}_{i}^{E}\) as pseudo positive predicted answers \(\mathcal{A}^{\text{Pos}}\). Similarly, we randomly selected a number \(\hat{n}_{2}\) in the range \([0,|\mathcal{A}_{i}^{E}|]\), and randomly sampled \(\hat{n}_{2}\) spans from the document \(D_{i}^{E}\) as pseudo negative predicted answers \(\mathcal{A}^{\text{Neg}}\). Then, we merged \(\mathcal{A}^{\text{Pos}}\) and \(\mathcal{A}^{\text{Neg}}\) as the pseudo predicted answers and executed FBPrompt to generate answers.
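This ablation can be reproduced with a few lines of random sampling. In the sketch below, the maximum length of a pseudo negative span is an arbitrary choice, since the paper does not specify how spans are drawn from the document.

```python
import random

def random_feedback(labeled_answers, document, max_span_words=5):
    """Pseudo predictions for the random-feedback ablation."""
    n = len(labeled_answers)
    pos = random.sample(labeled_answers, random.randint(0, n))  # pseudo positives
    tokens = document.split()
    neg = []
    for _ in range(random.randint(0, n)):                       # pseudo negatives
        start = random.randrange(len(tokens))
        neg.append(" ".join(tokens[start:start + random.randint(1, max_span_words)]))
    return pos + neg
```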
As shown in Table 5, the performance of FBPrompt drops significantly when random feedback is used, which shows that our constructed feedback is useful.
### Number of Demonstration Examples
We study whether FBPrompt exhibits consistent effectiveness when the number of demonstration examples varies. In Figure 3, we report the changing trend of FBPrompt and BM25 when the number of examples changes from 1 to 4. We observe that with a varying number of examples in the four datasets, the performance of FBPrompt is consistently higher than that of BM25. Especially in the case of one-shot, FBPrompt largely outperforms BM25.
Figure 3: Results of FBPrompt and BM25 with different numbers of examples on four datasets.
Table 5: FBPrompt with randomly generated feedback compared to FBPrompt with our constructed feedback on MultiSpanQA, QUOREF, DROP, and INSPEC (metrics as in Table 3).
### Case Study
A real case from MultiSpanQA is presented in Table 6. The left part shows a demonstration example for the test question in the right part. We can observe that the prediction of the baseline method (BM25) makes a mistake, since LLM observes 'produce' in the question and directly finds answers around 'produced', instead of analyzing the meaning of the question thoroughly. As for FBPrompt, our off-the-shelf model also observes 'produce' in the question, and mistakenly finds the answers 'liver' and 'skeletal muscle' in the original document near 'made', which is semantically close to 'produce'. But after being given the feedback, LLM learns not to be confused by such a specific word, and tries to understand the entire question. Therefore, FBPrompt finally generates correct answers.
## 6 Conclusion
In this paper, we explore the performance of LLMs on multi-span question answering, finding that existing in-context learning methods under-utilize labeled answers. To alleviate this problem, we propose a novel prompting strategy called FBPrompt, which constructs and employs answer feedback from an off-the-shelf model to enhance in-context learning. Experiments on multiple datasets show that FBPrompt using answer feedback significantly improves the performance of LLMs on MSQA tasks. In the future, we will analyze the working principle of answer feedback in greater depth, and try to integrate more useful feedback information into LLMs for various tasks.
#### Acknowledgements
This work was supported in part by the NSFC (62072224) and in part by the CAAI-Huawei MindSpore Open Fund.
|
2307.14432 | Compressed gate characterization for quantum devices with
time-correlated noise | As quantum devices make steady progress towards intermediate scale and
fault-tolerant quantum computing, it is essential to develop rigorous and
efficient measurement protocols that account for known sources of noise. Most
existing quantum characterization protocols such as gate set tomography and
randomized benchmarking assume the noise acting on the qubits is Markovian.
However, this assumption is often not valid, as for the case of 1/f charge
noise or hyperfine nuclear spin noise. Here, we present a general framework for
quantum process tomography (QPT) in the presence of time-correlated noise. We
further introduce fidelity benchmarks that quantify the relative strength of
different sources of Markovian and non-Markovian noise. As an application of
our method, we perform a comparative theoretical and experimental analysis of
silicon spin qubits. We first develop a detailed noise model that accounts for
the dominant sources of noise and validate the model against experimental data.
Applying our framework for time-correlated QPT, we find that the number of
independent parameters needed to characterize one and two-qubit gates can be
compressed by 10x and 100x, respectively, when compared to the fully generic
case. These compressions reduce the amount of tomographic measurements needed
in experiment, while also significantly speeding up numerical simulations of
noisy quantum circuit dynamics compared to time-dependent Hamiltonian
simulation. Using this compressed noise model, we find good agreement between
our theoretically predicted process fidelities and two qubit interleaved
randomized benchmarking fidelities of 99.8% measured in recent experiments on
silicon spin qubits. More broadly, our formalism can be directly extended to
develop efficient and scalable tuning protocols for high-fidelity control of
large-arrays of quantum devices with non-Markovian noise. | M. J. Gullans, M. Caranti, A. R. Mills, J. R. Petta | 2023-07-26T18:05:49Z | http://arxiv.org/abs/2307.14432v2 | # Compressed gate characterization for quantum devices with time-correlated noise
###### Abstract
As quantum devices make steady progress towards intermediate scale and fault-tolerant quantum computing, it is essential to develop rigorous and efficient measurement protocols that account for known sources of noise. Most existing quantum characterization protocols such as gate set tomography and randomized benchmarking assume the noise acting on the qubits is Markovian. However, this assumption is often not valid, as for the case of \(1/f\) charge noise or hyperfine nuclear spin noise. Here, we present a general framework for quantum process tomography (QPT) in the presence of time-correlated noise. We further introduce fidelity benchmarks that quantify the relative strength of different sources of Markovian and non-Markovian noise. As an application of our method, we perform a comparative theoretical and experimental analysis of silicon spin qubits. We first develop a detailed noise model that accounts for the dominant sources of noise and validate the model against experimental data. Applying our framework for time-correlated QPT, we find that the number of independent parameters needed to characterize one and two-qubit gates can be compressed by 10x and 100x, respectively, when compared to the fully generic case. These compressions reduce the amount of tomographic measurements needed in experiment, while also significantly speeding up numerical simulations of noisy quantum circuit dynamics compared to time-dependent Hamiltonian simulation. Using this compressed noise model, we find good agreement between our theoretically predicted process fidelities and two qubit interleaved randomized benchmarking fidelities of 99.8% measured in recent experiments on silicon spin qubits. More broadly, our formalism can be directly extended to develop efficient and scalable tuning protocols for high-fidelity control of large-arrays of quantum devices with non-Markovian noise.
## I Introduction
Accurately and efficiently characterizing noise in current quantum computing hardware is essential to realizing its long-term technological promise [1]. Many platforms suffer from non-Markovian, time-correlated noise processes such as drift and \(1/f\) noise whose variance diverges with averaging time. On the other hand, the standard assumption in quantum characterization protocols is that noise processes have short-range correlations in time with a finite variance [2]. For example, the underlying theory of randomized benchmarking (RB) and gate set tomography (GST) explicitly neglects time-dependent fluctuations in the noise model. This assumption is referred to as the Markovian approximation because it arises from modeling the bath as a memoryless system in thermal equilibrium [3].
A number of theoretical studies have made progress in extending quantum characterization methods to deal with different types of time-correlated noise, providing clear evidence of the need to account for these effects when comparing to experiment [4; 5; 6; 7; 8; 9]. However, these works have provided limited guidance on the experimental resources required to characterize non-Markovian noise. At the same time, significant effort has been devoted to speeding up standard gate characterization protocols [10; 11; 2; 2]. In the most general case, accounting for non-Markovian effects significantly increases the amount of characterization needed to describe quantum circuit dynamics [13]. As a result, compressed descriptions become especially important when dealing with correlated noise processes [14].
Non-Markovian noise is particularly prominent in solid-state systems such as superconducting qubits and spin qubits. In superconducting qubits, time-correlated noise arises primarily from electric field fluctuations in the environment [15]. As discussed below, the theoretical analysis of these effects in superconducting qubits have focused primarily on RB [4; 5; 6]. In spin qubits, where quantum information is encoded in an electron or nuclear spin, the noise has a rich variety of non-Markovian sources including lattice nuclear spins, other local magnetic field inhomogeneities, and spin-orbit coupling to electric field fluctuations [16; 17]. Moreover, the first demonstrations of high-fidelity few-qubit control in spin qubits have just recently been obtained with two-qubit gate fidelities \(>99.8\%\)[18; 19; 20]. These results motivate a more detailed investigation of non-Markovian noise effects for quantum process tomography (QPT) of spin qubits.
In this article, we introduce a method for QPT with time-correlated noise. Our basic approach to the problem is to model each quantum gate as a time-dependent quantum channel whose matrix logarithm, the so-called "error generator," follows a (potentially non-stationary) Gaussian stochastic process [21]. Our decomposition of the error channel allows us to naturally separate noise contributions into sources with different spectral characteristics. The spectrally-resolved error generators then
also lead to natural fidelity benchmarks for different sources of noise in the system.
As an application of our methods, we perform a systematic analysis of silicon spin qubits, effectively providing a case study, while also leading to a number of new insights for this platform. Of primary importance, we find that our formalism allows for a significant compression of the noise model both in terms of the degree of temporal correlations and the number of independent parameters. We analyze the problem by first introducing a detailed noise model for silicon spin qubits. We validate this model by comparing to experimental data on conventional one and two-qubit control experiments, finding excellent agreement between theory and experiment. With the microscopic noise model established, we then apply our framework for time-correlated QPT to this platform. Crucially, we find that the dominant noise contributions for one and two-qubit gates are coherent errors with fluctuations that are effectively static on the time-scale of quantum circuits that last for 1 ms or less.
The resulting simplification of the noise model allows for a large speed up in both experimental gate characterization times and numerical simulations of noisy quantum circuits. Using our compressed model, we compare gate fidelities obtained from interleaved RB [22] against exactly computed process fidelities. We find that interleaved RB accurately measures the error rate of two-qubit gates in silicon spin qubits to within a factor of two, consistent with past studies for superconducting qubits [4]. These results are in good agreement with the recent experimental measurements of two-qubit gate fidelities \(>99.8\%\) in silicon spin qubits [20].
The article is organized as follows. Section II introduces a general framework for time-correlated QPT. We use this analysis to introduce fidelity benchmarks for different noise sources. Section III describes a noise model for silicon spin qubits and compares its predictions to experimental data taken from similar devices [20]. Section IV applies our methods for time-correlated QPT to silicon spin qubits using numerically simulated time evolution in the presence of noise with the noise parameters extracted in Sec. III. We then use the tomographic results to develop a simplified error model for each gate, which is used to compare interleaved RB fidelities with directly computed process fidelities. Section V provides some concluding remarks and the outlook for future work.
## II Time correlated quantum process tomography
We now introduce our general framework that allows us to incorporate time-correlated noise in the process tensors for quantum control. We also define fidelity benchmarks for time-correlated noise that allow one to isolate different contributions to the gate infidelities from the measured data.
### Background and Setup
In the Markovian approximation, each unitary gate operation is treated as a fixed quantum channel [21; 2]
\[\mathcal{G}_{i}(\rho)=\mathcal{E}_{i}\circ\mathcal{U}_{i}(\rho), \tag{1}\]
where \(\mathcal{U}_{i}(\cdot)=U_{i}\cdot U_{i}^{\dagger}\) is the ideal gate operation and \(\mathcal{E}_{i}\) is a completely positive, trace preserving (CPTP) error channel that accounts for the noise in the gate. Similarly, state preparation and measurement (SPAM) errors are modeled through fixed quantum channels applied before and after the operation, respectively.
Here, we adapt the standard theory of quantum characterization techniques to account for broad spectrum noise fluctuations in the error channels for each gate. To simplify the analysis and avoid overparameterization, we use a model that is based on a simple extension of the Markovian case to treat time-correlated classical noise; however, it is worth noting that our framework could be further generalized to include the environment following the recently introduced process tensor tomography framework [13]. In our approach, we take each qubit to be coupled to spatially local noise fields with different spectral fluctuations. For example, consider the Hamiltonian for a single qubit
\[H=H_{0}(t)+H_{M}(t)+H_{s}(t)B(t)+H_{f}(t)V(t), \tag{2}\]
where \(H_{0}(t)\) is the control Hamiltonian and \(H_{M}(t)\) is a deterministic time-dependent error. The third and fourth terms are proportional to fixed error Hamiltonians \(H_{s}\) and \(H_{f}\) that multiply random, zero-mean noise fields \(B(t)\) and \(V(t)\) representing the different physical sources of noise. \(B(t)\) is a random field that models slowly varying magnetic field noise, while \(V(t)\) has a power spectral density of \(1/f\) noise that models electric field noise acting on the qubit. We treat the field \(B(t)\) as quasistatic, meaning that it is taken as constant on the timescale of a full run of QPT, but changes on the timescale of minutes to hours.
In practice, developing and validating a full microscopic model for the noise becomes difficult when considering the evolution of quantum circuits that involve multiple external, time-dependent drives. Instead, we opt for a simplifying approximation that is equivalent to these more microscopic models to lowest order in the noise fields. Specifically, we write the error channel for each \(K\)-qubit gate acting on qubits \(\mathbf{n}=(n_{1},\dots,n_{K})\)
\[\log\mathcal{E}_{i}^{\mathbf{n}}(t)=L_{iM}^{\mathbf{n}}+\sum_{k}[L_{is}^{\mathbf{n}k}B_{n _{k}}+L_{if}^{\mathbf{n}k}V_{n_{k}}(t)],\]
where \(t\) is the time at which the gate is applied, \(B_{n}\) and \(V_{n}(t)\) are the local noise fields acting on qubit \(n\), and \(L_{i\mu}^{\mathbf{n}k}\) are the error generators for the gate that provide a simple generalization of a Hamiltonian to quantum channels [21]. We now show how to extract these error generators using a generalization of QPT.
### Tomographic Reconstruction
To illustrate the general principles behind our approach, we first consider the case of a single qubit and neglect spatial correlations in the noise. The results generalize in a straightforward manner to larger numbers of qubits, including the possibility of spatially correlated noise. Given a tomographically complete set of initial states \(|\rho_{i}\rangle\rangle\) and measurements \(|E_{j}\rangle\rangle\), the error channel can be fully reconstructed from the measurement probabilities
\[p_{jk}^{i}(t)=\langle\langle E_{j}|\mathcal{E}_{i}^{n}(t)\circ\mathcal{U}_{i}| \rho_{k}\rangle\rangle. \tag{3}\]
In a physical experiment, we only obtain samples from these distributions. As a result, it is impossible to obtain an instantaneous estimate of \(\mathcal{E}_{i}^{n}(t)\) in a single shot with only a single copy of the system.
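For concreteness, the reconstruction of a single-qubit channel from such probabilities can be sketched by linear inversion. The particular choice of input states and measurement effects, and the column-stacking (Liouville) conventions, are assumptions made for this illustration only.

```python
import numpy as np

# Column-stacking vectorization and the superoperator of a unitary, assuming
# the convention vec(A @ rho @ B) = np.kron(B.T, A) @ vec(rho).
vec = lambda rho: rho.reshape(-1, order="F")
to_super = lambda U: np.kron(U.conj(), U)

# A tomographically complete single-qubit set: |0>, |1>, |+>, |+i>.
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
projs = [np.outer(k, np.conj(k)) for k in kets]

S = np.column_stack([vec(p) for p in projs])     # columns: input states
M = np.vstack([np.conj(vec(p)) for p in projs])  # rows: measurement effects

def reconstruct_channel(probabilities):
    """Linear-inversion estimate of the superoperator G from the matrix of
    measured probabilities p[j, k] = <<E_j| G |rho_k>>."""
    return np.linalg.pinv(M) @ probabilities @ np.linalg.pinv(S)

# Sanity check on an ideal pi/2 rotation about X.
U = np.array([[np.cos(np.pi / 4), -1j * np.sin(np.pi / 4)],
              [-1j * np.sin(np.pi / 4), np.cos(np.pi / 4)]])
G_true = to_super(U)
p_ideal = np.real(M @ G_true @ S)   # probabilities are real
print(np.allclose(reconstruct_channel(p_ideal), G_true))
```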
To model the measurement process, we let \(x_{jk}^{i}(t)\) be a random variable that is equal to 1 if the measurement outcome at time \(t\) is \(E_{j}\) and 0 otherwise. A standard estimator for \(p_{jk}^{i}\) is obtained from many sequential single-shot measurements
\[\begin{split}\tilde{p}_{jk}^{i}(t,T,\delta t)&= \frac{\delta t}{T}\sum_{n}x_{jk}^{i}(t_{n})\\ &\approx\frac{1}{T}\int_{t}^{t+T}dt^{\prime}\langle\langle E_{j}| \mathcal{E}_{i}^{n}(t^{\prime})\circ\mathcal{U}_{i}|\rho_{k}\rangle\rangle, \end{split} \tag{4}\]
where \(\delta t\) is the interval between measurements and \(T\) is the total measurement time. This averaging process effectively acts as a filter function on the time-dependent fluctuations of \(V_{\ell}(t)\) that cuts off frequencies \(\omega\gg 1/T\). As a result, we can write an effective model for \(\tilde{p}_{jk}^{i}(t,T,\delta t)\) with new random fields \(\hat{V}(t,T)\)
\[\tilde{p}_{jk}^{i}(t,T) =\langle\langle E_{j}|\hat{\mathcal{E}}_{i}^{n}(t,T)\circ \mathcal{U}_{i}|\rho_{k}\rangle\rangle, \tag{5}\] \[\log\hat{\mathcal{E}}_{i}^{n}(t,T) =L_{iM}^{n}+L_{is}^{n}B_{n}+L_{if}^{n}\hat{V}_{i}(t,T), \tag{6}\]
where \(\hat{V}_{\ell}(\omega)=o(1/\omega^{2})\) for \(\omega>1/T\) and \(\hat{V}_{\ell}(\omega)\approx V_{\ell}(\omega)\) for \(\omega\ll 1/T\).
To extract the error generators, one first performs full tomography of \(\hat{\mathcal{E}}_{i}^{n}\) on a timescale \(T\) and then takes its matrix logarithm. Denoting averages over each noise source by \(\mathbb{E}_{a}\), we have \(\mathbb{E}_{s}B_{n}=\mathbb{E}_{f}V_{n}(t,T)=0\), which allows one to extract \(L_{iM}^{n}\) by averaging many independent estimates of the error generators. The magnitude of the matrix elements of \(L_{is/f}^{n}\) can be determined by examining the sample-averaged power-spectral density of individual matrix elements of \(\log\hat{\mathcal{E}}_{i}^{n}(t)-L_{iM}^{n}\). To determine the sign of the matrix elements we can use a microscopic model to estimate the sign of one of the error generator terms, which can then be used to fix the remaining signs.
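This extraction step can be sketched as follows. The periodogram normalization and the choice of matrix element are arbitrary, the branch ambiguity of the matrix logarithm is ignored, and the sign-fixing step discussed above is not included.

```python
import numpy as np
from scipy.linalg import logm

def error_generator_statistics(error_channels, dt, element=(2, 3)):
    """Given a time series of reconstructed error-channel superoperators
    (shape [n_times, D, D]), return the mean generator L_M and a simple
    periodogram of the fluctuations of one matrix element."""
    logs = np.array([logm(E) for E in error_channels])
    L_M = logs.mean(axis=0)          # estimate of the Markovian part
    fluct = logs - L_M               # fluctuating part per time slice

    x = np.real(fluct[:, element[0], element[1]])
    freqs = np.fft.rfftfreq(len(x), d=dt)
    psd = np.abs(np.fft.rfft(x))**2 * dt / len(x)   # one-sided periodogram
    return L_M, freqs, psd
```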
### Fidelity Benchmarks for Time-Correlated Noise
Many quantum devices suffer from a variety of fluctuating noise sources. Their relative importance is usually quantified through Ramsey and spin-echo type measurements of the single qubits. However, these metrics may not be reflective of the relative importance of the different noise contributions in the specific gate implementations. Such detailed information on the individual gate performance is crucial for advanced error mitigation strategies [23; 24] and noise-tailored fault-tolerant protocols [25].
One application of our formalism from the previous section is the ability to extract fidelity benchmarks for the different noise sources in Eq. (5). We propose relative fidelity benchmarks as generalizations of the average gate fidelity
\[F_{iM}^{n} =\int d\psi\langle\langle\psi|e^{L_{iM}^{n}}|\psi\rangle\rangle, \tag{7}\] \[F_{is}^{n} =\mathbb{E}_{s}\int d\psi\langle\langle\psi|e^{L_{is}^{n}B_{n}}| \psi\rangle\rangle,\] (8) \[F_{if}^{n} =\mathbb{E}_{f}\int d\psi\langle\langle\psi|e^{L_{if}\hat{V}_{n}( t)}|\psi\rangle\rangle, \tag{9}\]
and similarly for the different combinations of noise, e.g., \(F_{iMs}^{n}=\mathbb{E}_{s}\int d\psi\langle\langle\psi|e^{L_{iM}^{n}+L_{is}^{n}B_{n}}|\psi\rangle\rangle\). The average gate fidelity is given by \(F_{iMsf}^{n}\) in this notation. These metrics serve as benchmarks for the relative importance of different noise sources for each gate, providing detailed information on device improvements needed to enhance the performance of quantum circuits.
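These benchmarks can be estimated numerically by Monte Carlo averaging over Haar-random pure states and noise realizations, for instance as sketched below. The column-stacking superoperator convention, the quasistatic Z-error example, and the sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def haar_state(dim):
    """Haar-random pure state of dimension dim."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def fidelity_benchmark(generator_samples, dim=2, n_states=500):
    """Monte-Carlo estimate of E_noise int dpsi <<psi| exp(L) |psi>>,
    with L given as a superoperator in the column-stacking convention."""
    fids = []
    for L in generator_samples:
        E = expm(L)
        for _ in range(n_states):
            psi = haar_state(dim)
            rho = np.outer(psi, np.conj(psi))
            rho_out = (E @ rho.reshape(-1, order="F")).reshape(dim, dim, order="F")
            fids.append(np.real(np.conj(psi) @ rho_out @ psi))
    return float(np.mean(fids))

# Example: quasistatic Z over-rotations, L = L_s * B with B ~ N(0, 1).
Z = np.diag([1.0, -1.0])
H_Z = -1j * (np.kron(np.eye(2), Z) - np.kron(Z.T, np.eye(2)))  # rho -> -i[Z, rho]
samples = [0.05 * rng.normal() * H_Z for _ in range(200)]
print(fidelity_benchmark(samples))
```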
## III Silicon spin qubit noise model
In this section, we develop a detailed noise model for correlated noise processes in silicon spin qubits and compare it to experimental data from devices similar to those used in Ref. [20]. The model explicitly includes time-correlated \(1/f\)-noise and quasi-static noise in the Hamiltonian parameters arising from electrical noise and nuclear spin fluctuations. This model is used to benchmark our analysis of time-correlated QPT in the later sections. A main result of this paper is to develop a compressed noise model (see Sec. IV.3) for noisy quantum circuit dynamics that quantitatively matches the behavior of the microscopic Hamiltonian model introduced in this section.
### Model
The low-energy Hamiltonian for an array of silicon quantum dots with one electron per site takes the form [27]
\[H=\sum_{i}g\mu_{B}\mathbf{B}_{i}^{\rm tot}\cdot\mathbf{s}_{i}+\sum_{i,j}J_{ij}(\mathbf{s}_{ i}\cdot\mathbf{s}_{j}-1/4), \tag{10}\]
where \(\mathbf{B}_{i}^{\rm tot}=B_{\rm ext}\hat{z}+\mathbf{B}_{i}^{M}\) is the local magnetic field of spin \(i\), including the contributions from a global external field and local fields, \(s_{i}^{\mu}\) are spin-1/2 operators on site \(i\), and \(J_{ij}\) is the exchange interaction between dots \(i\) and \(j\), which are assumed to be in a quasi-1D spatial arrangement. Single-site quantum control is achieved in this system using electric-dipole spin resonance (EDSR), while two-qubit gates are implemented through time-dependent control of the exchange interaction. Transforming to a rotating frame, the Hamiltonian becomes
\[H =\sum_{i}\hbar\Delta_{i}s_{i}^{z}+\sum_{i}\hbar\Omega_{i}(e^{-i \theta_{i}}s_{i}^{+}+h.c.) \tag{11}\] \[+\sum_{i,j}J_{ij}(s_{i}^{z}s_{j}^{z}-1/4)+\frac{J_{ij}}{2}[e^{i( \omega_{i}-\omega_{j})t}s_{i}^{+}s_{j}^{-}+h.c.],\]
where we have moved each spin into a local reference frame rotating with the frequency of the EDSR drives \(\omega_{i}\) and neglected far-off resonant terms. The parameter \(\Delta_{i}=g\mu_{B}B_{i}^{\rm tot}/\hbar-\omega_{i}\) is the detuning between the local field and the EDSR drive, \(\Omega_{i}\) is a real EDSR Rabi frequency, and \(\theta_{i}\) sets the axis of rotation.
With sufficient control over \(\Omega_{i}(t)\), \(\theta_{i}(t)\), and \(J_{ij}(t)\), this Hamiltonian is capable of universal quantum computation. However, we also need to account for time-dependent fluctuations due to charge noise and nuclear spins. We model these noise processes using the parameterizations [28]
\[\Delta_{i}(t) =\Delta_{i}^{0}+\Delta_{i}^{n}v_{i}(t)+g\mu_{B}B_{n}/\hbar, \tag{12}\] \[\Omega_{i}(t) =\Omega_{i}^{0}[1+\delta\Omega_{i}^{n}v_{i}(t)],\] (13) \[J_{ij}(t) =J_{ij}^{0}\{1+\delta J_{ij}^{n}[v_{i}(t)+v_{j}(t)]\}, \tag{14}\]
where \(v_{i}(t)\) is a classical noise field on dot \(i\), \(\Delta_{i}^{0}\) is the target detuning of qubit \(i\) from the EDSR frequency \(\omega_{i}\), \(B_{n}\) is a random Hyperfine magnetic field from the nuclear spins that we take to be static over the course of the full QPT experiment (but fluctuating between QPT runs on the timescale of minutes), \(\Omega_{i}^{0}\) is the target EDSR Rabi frequency, and \(J_{ij}^{0}\) is the target exchange. \(\Delta_{i}^{n}\) is a noise sensitivity parameter that describes the change in the qubit frequency, \(\delta\Omega_{i}^{n}\) measures the fractional change in the EDSR Rabi frequency, and \(\delta J_{ij}^{n}\) measures the fractional change in the exchange interaction between qubits \(i\) and \(j\) in response to the noise, under the simplifying assumption that the exchange couples with equal magnitude to the noise field for each dot. Finally, we remark that in experiments the term \(\delta\Omega^{n}\) is expected to be quite small relative to the other effects. Typically, the fluctuations in the Rabi frequency are instead driven by temperature drifts that occur on a slower timescale
Figure 1: Benchmarking the noise model: (a) Pulse sequences used in experiments. The initial qubit state is spin-down. (b) Ramsey decay comparing simulation and experiment. \(P_{0}\) is the spin-down return probability. In (b)-(f), we took parameters \(T_{2}^{*}=1.9\)\(\mu\)s, \(T_{2}=40\)\(\mu\)s, \(f_{c}=10\) MHz, \(f_{\ell}=100\) Hz, and \(\sqrt{A_{0}}=0.5\)\(\mu\)eV [16; 26]. (c) Decay of Rabi amplitude for a pulsed drive of a given time \(t\). In the simulation, we took \(\Omega^{0}/2\pi=5\) MHz. \(A\) is the amplitude of the Rabi oscillations in the spin-down probability at the \(\pi\) times. (d) Spin echo decay comparing simulation and experiment. The dashed lines are fits to a Gaussian. (e) CPMG decay times as a function of sequence length \(n_{\pi}\). Inset: Individual decay curves for the CPMG sequences. (f) Extracted spectral density from the CPMG data using the methods described in Ref. [16]. Both simulation and experiment show clear signatures of \(1/f\) noise.
relative to the effects considered here.
We take the noise field \(v_{i}(t)\) to have a \(1/f\) power spectral density
\[S(f)=A_{0}/f,\ f_{\ell}<f<f_{c}, \tag{15}\]
where \(A_{0}\) is the amplitude and \(f_{c}\) and \(f_{\ell}\) are high and low-frequency cutoffs. For \(f<f_{\ell}\) we take a white noise spectrum \(S(f)=A_{0}/f_{\ell}\) and, for \(f>f_{c}\), we have a cutoff of the form \(S(f)=A_{0}f_{c}/f^{2}\). Neglecting spatial correlations in the noise, the noise sensitivity parameters can be related to the coherence times \(T_{2i}^{*}\) of the qubits and the envelope decay rates of Rabi \(\gamma_{ri}\) and exchange \(\gamma_{eij}\) oscillations [28]
\[\Delta_{i}^{n} =\sqrt{\frac{1}{A_{0}\log\left(\frac{f_{c}}{f_{\ell}}\right)}} \sqrt{\frac{1}{T_{2i}^{*2}}-\frac{(g\mu_{B}B_{n})^{2}}{2\hbar^{2}}}, \tag{16}\] \[\delta\Omega_{i}^{n} =\sqrt{\frac{1}{A_{0}\log\left(\frac{f_{c}}{f_{\ell}}\right)}} \frac{\gamma_{ri}}{\Omega_{i}^{0}},\] (17) \[\delta J_{ij}^{n} =\sqrt{\frac{2}{A_{0}\log\left(\frac{f_{c}}{f_{\ell}}\right)}} \frac{\gamma_{eij}}{J_{ij}^{0}}. \tag{18}\]
We can estimate \(\Delta B_{n}\) from the value of \(T_{2}\) assuming a large value of \(\omega_{c}\)[29]
\[(g\mu_{B}B_{n}/\hbar)^{2}=\frac{2}{T_{2}^{*2}}-\frac{\log(f_{c}/f_{\ell})}{T_{ 2}^{2}\log 4}. \tag{19}\]
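A noise trace with the spectrum of Eq. (15), white below \(f_{\ell}\), \(1/f\) between the cutoffs and \(1/f^{2}\) above \(f_{c}\), can be generated by assigning random phases to the target spectrum and inverse Fourier transforming, as sketched below. The one-sided PSD normalization used here is an assumption and is not taken from the text.

```python
import numpy as np

def one_over_f_trace(n_samples, dt, A0, f_l, f_c, rng=None):
    """Generate a zero-mean trace v(t) whose power spectral density follows
    Eq. (15): A0/f_l for f < f_l, A0/f for f_l < f < f_c, A0*f_c/f^2 above."""
    rng = rng or np.random.default_rng()
    freqs = np.fft.rfftfreq(n_samples, d=dt)
    S = np.empty_like(freqs)
    low, mid, high = freqs < f_l, (freqs >= f_l) & (freqs <= f_c), freqs > f_c
    S[low] = A0 / f_l
    S[mid] = A0 / freqs[mid]
    S[high] = A0 * f_c / freqs[high]**2

    # Amplitude fixed by the target one-sided PSD, phases drawn at random.
    amp = np.sqrt(S * n_samples / (2 * dt))
    spectrum = amp * np.exp(2j * np.pi * rng.random(len(freqs)))
    spectrum[0] = 0.0  # enforce zero mean
    return np.fft.irfft(spectrum, n=n_samples)

# Example: a 10 ms trace sampled at 100 MHz with f_l = 100 Hz, f_c = 10 MHz.
v = one_over_f_trace(n_samples=1_000_000, dt=1e-8, A0=1.0, f_l=1e2, f_c=1e7)
print(v.std())
```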
### Benchmarking the noise model with spin qubit experiments
The noise model described in the previous section is much simpler than the most general possible noise model for two-qubits with time-correlated error generators. Therefore, it is important to first test its predictions against more standard noise characterization protocols from the fields of nuclear magnetic resonance and electron spin resonance that also capture the presence of time-correlated noise processes [30; 31]. The pulse sequences we employed to benchmark the noise model are shown in Fig. 1(a).
In Figs. 1(b)-(f), we compare the predictions of our noise model with four different single-qubit experiments: (i) Ramsey decay to measure \(T_{2}^{*}\), (ii) Rabi oscillations to extract the quality factor of the Rabi oscillations, (iii) spin-echo decay to measure \(T_{2}\), and (iv) Carr-Purcell-Meiboom-Gill (CPMG) spectroscopy to extract the noise power spectral density of the qubit [32; 33; 34; 16; 34]. In all cases, we see excellent agreement between the simulations and experiment. Most notably, we see clear evidence of the \(1/f\) nature of the qubit noise in the CPMG spectroscopy [35; 16; 34].
These simulations explicitly neglect incoherent processes such as \(T_{1}\) decay and stochastic Pauli noise processes. The neglect of \(T_{1}\) decay is justified by the extremely long \(T_{1}\) times of silicon spin qubits that routinely approach 100 ms [36; 37; 38; 39; 40], including in the presence of a micromagnet [41]. This timescale should be compared to the time of a single gate, which is on the order of 50-200 ns. As a result, \(T_{1}\) decay will contribute to the error generators at the level of \(10^{-6}-10^{-7}\). The neglect of stochastic Pauli noise is based on a microscopic model for how dissipation arises in these systems. It is worth noting that our model does give rise to stochastic Pauli noise from the high-frequency contributions to the \(1/f\) charge noise. However, we find in the following section that these effects provide a negligible contribution to the error rates for the gates in experimentally relevant parameter regimes. Given the excellent agreement between this noise model and the experimental data it is a reasonable assumption to use our approximations as a baseline model. A more careful experimental investigation of the stochastic error generators for one and two-qubit gates will be needed to more definitively rule out other sources of noise.
## IV Time correlated QPT of spin qubits
In this section, we use the framework developed in Sec. II to study the behavior of the error generators for one and two-qubit gates simulated using the noise model described in Sec. III. Using the information obtained from the error generators, we then develop a compressed model for gate noise in silicon spin qubits. Using fast numerical simulations enabled by the compressed model, we then study performance of interleaved RB circuits with experimentally relevant parameters.
### Single-Qubit Gates
To illustrate the basic principles behind our methodology for dealing with time-correlated noise, we first provide a detailed analysis of the error generator statistics for single qubit gates.
We consider a universal gate set consisting of \(\pi/2\) rotations about the \(X\) and \(Y\) axes, plus \(Z\) rotations with an arbitrary phase. Since the \(Z\)-rotations are implemented in software when applying the \(X\) and \(Y\) rotations, they can be treated as having perfect fidelity [42]. In the case of the \(X\) and \(Y\) rotations, the symmetries of our noise model imply that it is sufficient to just treat one rotation axis. Therefore, we analyze the case of a single \(\pi/2\) rotation about the \(X\) axis in the Bloch sphere.
We analyze the gate characteristics using numerical simulations of the gate in the presence of quasistatic and \(1/f\)-noise with noise parameters taken from Sec. III.2. We study the process matrix for the gate at different time slices in one realization of correlated noise. To account for the average in Eq. 3, we take a mixture of the noisy realizations of each gate over several hundred consecutive gate times. We then compute the error generators over time for up to several thousand total gate times and multiple noise realizations. In this way we can numerically sample the full spectral density of the
error generators for the gate. The results for the power spectral density of the individual matrix elements of the Pauli transfer matrix for the error generators are shown in Fig. 2(a).
Interestingly, we see from Fig. 2(a) that the higher frequency fluctuations of the error generators are strongly suppressed compared to the low-frequency dynamics. These high-frequency contributions to the error generators are sufficiently weak that they can be safely neglected in analyzing the gate fidelities at the level of error rates above \(10^{-4}\). Moreover, the error generators are seen to be dominated by low-frequency fluctuations of the off-diagonal terms corresponding to coherent errors. The diagonal stochastic errors are strongly suppressed at all frequencies.
To capture the role of the Markovian noise relative to the low-frequency fluctuations, we use a model for the error generator of the gate
\[L_{\text{eff}}=L_{M}+L_{s}R, \tag{20}\]
where \(L_{M}=\mathbb{E}[L]\), \(|L_{s\mu\nu}|^{2}=\mathbb{E}[(L_{\mu\nu}-L_{M\mu\nu})^{2}]\), and \(R\) is a Gaussian random field with zero mean and unit variance. The relative signs of the matrix elements of \(L_{s}\) are set by the signs of one realization. With this approach to the parameterization, we then compute the fidelity benchmarks \(F_{M}\) and \(F_{s}\) for this gate, see Fig. 2(b-c). In the regime where the Rabi frequency suffers from weak fluctuations, \((\gamma_{r}t_{g})^{-1}\gtrsim 100\), we can see from Fig. 2(b) that the Markovian terms are negligible compared to the quasistatic effects. However, as the Rabi decay rate increases, \((\gamma_{r}t_{g})^{-1}\lesssim 10\), Fig. 2(c) shows that the Markovian contributions to the noise become comparable to the quasistatic contributions.
### Two-Qubit Gates
We now apply our formalism to the case of two-qubit exchange gates. We focus on the two-qubit CZ-gate whose implementation is
\[U_{\text{CZ}}=e^{i\pi(s_{1}^{z}+s_{2}^{z})/2}e^{-iJ_{12}s_{1}^{z}s_{2}^{z}t_{ \text{ex}}} \tag{21}\]
where \(t_{\text{ex}}=\pi/J_{12}\). In the presence of a large magnetic field gradient, this gate can be implemented by adiabatically turning the exchange on and off, followed by a \(\pi/2\)-rotation about the \(Z\) axis for both qubits [43; 27; 44].
In the case of two-qubits, the number of individual matrix elements of the error generator is much larger (256 compared with 16 for a single qubit). Therefore, to present a more compressed representation of the noise sources we use the formalism of Ref. [21] to break up the error generator into contributions from different noise types: (i) Hamiltonian noise, (ii) stochastic noise, and (iii) active noise. Hamiltonian noise gives a contribution to the error generator that we denote by \(\mathcal{H}_{J}\) for an operator \(J\). This term acts on density matrices as
\[\mathcal{H}_{J}[\rho]=-i[J,\rho]. \tag{22}\]
The stochastic error generator term \(\mathcal{S}_{J}\) for a unitary operator \(J\) acts diagonally on the density matrix as
\[\mathcal{S}_{J}[\rho]=J\rho J^{\dagger}-\rho. \tag{23}\]
There are also two other types of error generators called symmetric and anti-symmetric generators that, for two Pauli operators \(P\) and \(Q\), take the form
\[\mathcal{C}_{P,Q}[\rho] =P\rho Q+Q\rho P-\frac{1}{2}\{\{P,Q\},\rho\}, \tag{24}\] \[\mathcal{A}_{P,Q}[\rho] =i\Big{(}P\rho Q-Q\rho P-\frac{1}{2}[\{P,Q\},\rho]\Big{)}. \tag{25}\]
Figure 2: (a) Power-spectral density of the matrix elements of the error generator for \(\pi/2\) X-gate. We took \(t_{g}=100\) ns, \(f_{\ell}=100\) Hz, \(f_{c}=10\) MHz, \(\gamma_{r}=8\) kHz, \(T_{2}^{*}=3\)\(\mu\)s, \(T_{2}=30\)\(\mu\)s, and \(T_{\text{tot}}=1.6\) ms. (b,c) Infidelity of Markovian \(F_{M}\) and quasistatic \(F_{s}\) noise sources as a function of \(T_{2}^{*}\) and \(1/\gamma_{r}\), respectively. We took \(\gamma_{r}=40\) kHz in (b) and \(T_{2}^{*}=1.5\)\(\mu\)s in (c). (d) Power-spectral density of the matrix elements of the error generator for a two-qubit CZ gate. We took \(t_{g}=50\) ns, \(J_{0}=10\) MHz, \(\gamma_{e}=45\) kHz, \(T_{2}^{*}=0.5\)\(\mu\)s for both qubits, \(T_{2}=30/105\)\(\mu\)s for the two qubits, and \(T_{\text{tot}}=0.05\) ms. Inset: Infidelity of Markovian \(F_{M}\) and quasistatic \(F_{s}\) noise sources as a function of \(T_{2}^{*}\) with other parameters fixed.
For each of these types of terms, there is a corresponding error rate associated with its contribution to the error generator. To calculate the error rate we introduce a dual basis
\[\mathcal{H}^{\prime}_{P} =\mathcal{H}_{P}/d^{2}, \tag{26}\] \[\mathcal{S}^{\prime}_{P}[\rho] =P\rho P^{\dagger}/d^{2},\] (27) \[\mathcal{C}^{\prime}_{P,Q}[\rho] =(P\rho Q+Q\rho P)/2d^{2},\] (28) \[\mathcal{A}^{\prime}_{P,Q}[\rho] =i(P\rho Q-Q\rho P)/2d^{2}, \tag{29}\]
where \(d\)=4 is the Hilbert space dimension for two qubits. The dual basis was introduced because it satisfies the identity
\[\mathrm{Tr}[\mathcal{B}^{\prime\dagger}\mathcal{D}]=\delta_{BD}. \tag{30}\]
As a result, the time-dependent error rate can be extracted from the error generator \(L\) via the formula
\[H_{P}(t)=\mathrm{Tr}[\mathcal{H}^{\prime\dagger}_{P}L(t)],\ S_{P}(t)=\mathrm{ Tr}[\mathcal{S}^{\prime\dagger}_{P}L(t)], \tag{31}\]
where we represent the superoperators as \(d^{2}\times d^{2}\) matrices in the Pauli basis.
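The projection in Eq. (31) can be sketched numerically by building superoperator representations of the \(\mathcal{H}_{P}\) and \(\mathcal{S}_{P}\) generators and decomposing a given error generator onto them. A least-squares projection is used below in place of the explicit dual basis, which sidesteps normalization conventions that may differ from those of the text; the column-stacking representation is an assumption of this sketch.

```python
import numpy as np

# Two-qubit Paulis; superoperators use the column-stacking (Liouville)
# convention vec(A @ rho @ B) = np.kron(B.T, A) @ vec(rho).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singles = {"I": I2, "X": X, "Y": Y, "Z": Z}
paulis = {a + b: np.kron(singles[a], singles[b]) for a in "IXYZ" for b in "IXYZ"}
d = 4

def ham_superop(P):    # rho -> -i[P, rho]
    return -1j * (np.kron(np.eye(d), P) - np.kron(P.T, np.eye(d)))

def stoch_superop(P):  # rho -> P rho P^dagger - rho
    return np.kron(P.conj(), P) - np.eye(d * d)

labels = [s for s in paulis if s != "II"]
basis = ([ham_superop(paulis[s]) for s in labels]
         + [stoch_superop(paulis[s]) for s in labels])

def project_rates(L):
    """Decompose an error generator L (16x16 superoperator) into Hamiltonian
    and stochastic rates, cf. Eq. (31), by least-squares projection onto the
    H_P and S_P generators; weight outside their span is simply dropped."""
    A = np.column_stack([B.reshape(-1) for B in basis])
    coeffs, *_ = np.linalg.lstsq(A, L.reshape(-1), rcond=None)
    n = len(labels)
    H = dict(zip(labels, np.real(coeffs[:n])))
    S = dict(zip(labels, np.real(coeffs[n:])))
    return H, S

# Quick check: a small ZZ over-rotation should show up only in H_ZZ.
H, S = project_rates(0.02 * ham_superop(paulis["ZZ"]))
print(round(H["ZZ"], 4), max(abs(v) for v in S.values()))
```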
The advantage of changing from the standard Pauli basis to this representation is that many conventional noise sources can be described as either coherent or stochastic errors. As a result, we can compress the representation of the error generator from \(d^{4}\) terms to \(2(d^{2}-1)\). A notable exception is the case of \(T_{1}\) decay processes, but we saw in Sec. III that these contribute negligibly to the individual gate fidelities.
Similar to the matrix elements in the Pauli basis, we can compute the power-spectral density of the coefficients of each noise type. In Fig. 2(d), we show an example for a simulated CZ gate using the noise model of Sec. III.2. We plot the power-spectral density of the Hamiltonian and stochastic contributions to the error generators.
As in the single-qubit case, we see that the error generator is dominated by low-frequency fluctuations that dominate over the Markovian noise sources. This behavior is further borne out by the fidelity benchmarks shown in the inset to Fig. 2(d). Moreover, the stochastic noise is strongly suppressed compared to the coherent errors, suggesting that the noise model can be greatly simplified to only include quasi-static coherent noise. In the following subsections, we show how to use this information to develop effective models for the qubit dynamics to predict the behavior of more complicated RB experiments.
### Compressed Gate Noise Model
In this sub-section, we use our characterization studies in the previous section to simplify the noise models for quantum circuit dynamics. Based on the observed power spectral densities for the error generators, we use a quasistatic model for the error generators of each single- and two-qubit gate. As our gate set, we take (1) the identity gate on each site, (2) \(\pi/2\)-rotations about the \(X\) axis, (3) \(\pi/2\)-rotations about the \(Y\) axis, and (4) a two-qubit CZ gate. We also allow for arbitrary rotations about the \(Z\)-axis that can be implemented without noise in software [42]. We model the error generators for the one and two-qubit gates by Hamiltonian noise
\[L_{ia} =\sum_{P}h_{iaP}\mathcal{H}_{P},\] \[L_{CZ} =h_{XX}(\mathcal{H}_{XX}+\mathcal{H}_{YY})+h_{ZZ}\mathcal{H}_{ZZ}\] \[+h_{ZI}\mathcal{H}_{ZI}+h_{IZ}\mathcal{H}_{IZ},\]
where each coefficient is modeled as
\[h_{iaP} =\bar{h}_{iaP}+\delta h_{iaP}R_{a},P\neq I, \tag{32}\] \[h_{PQ} =\bar{h}_{PQ}+\delta h_{PQ}(R_{1}+R_{2})/\sqrt{2},\ P,Q\neq I,\] (33) \[h_{PI/IP} =\bar{h}_{PI/IP}+\delta h_{PI/IP}R_{1/2},\ P\neq I, \tag{34}\]
where \(R_{1/2}\) are Gaussian random fields drawn randomly at the start of each circuit implementation. For each set of parameters and gate configurations, we directly measure each coefficient from the full time-dependent simulation of the gate using the noise model described in Sec. III.2.
As an example, we take the gate parameters from Fig. 2 and, fitting vs \(\epsilon=t_{g}/T_{2}^{*}\) for varying values of \(T_{2}^{*}\), we find a compressed gate model
\[\bar{h}_{1P} =0,\ \delta h_{1Z}=1.4\epsilon,\ \delta h_{1X/Y}=0\] \[\bar{h}_{2Z} =\bar{h}_{2Y}=0,\ \delta h_{2Z}=\delta h_{2Y}=0.0034+0.86\epsilon,\] \[\bar{h}_{2X} =0.018-0.031\epsilon-0.18\epsilon^{2},\] \[\delta h_{2X} =0.018-0.088\epsilon+0.43\epsilon^{2},\] \[\bar{h}_{3Z} =\bar{h}_{3X}=0,\ \delta h_{3Z}=\delta h_{3X}=0.0034+0.86\epsilon,\] \[\bar{h}_{3Y} =0.018-0.031\epsilon-0.18\epsilon^{2},\] \[\delta h_{3Y} =0.018-0.088\epsilon+0.43\epsilon^{2},\] \[\bar{h}_{XX} =0.016,\ \bar{h}_{ZZ}=-0.006,\ \bar{h}_{IZ/ZI}=0,\] \[\delta h_{XX} =0.0007+0.15\epsilon,\ \delta h_{ZZ}=0.036,\] \[\delta h_{IZ/ZI} =0.0009+1.4\epsilon.\]
This model is quantitatively accurate as it includes the dominant contributions to the error generators for each gate, while also accounting for the dominant temporal fluctuations observed in the power-spectral density.
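As an illustration of how this compressed model can be used in a circuit-level simulation, the sketch below evaluates the fitted CZ coefficients at a given \(T_{2}^{*}\) and draws one quasistatic realization per circuit following Eqs. (32)-(34). The default gate time and the random-number handling are assumptions of the sketch.

```python
import numpy as np

def sample_cz_coefficients(T2_star, t_g=50e-9, rng=None):
    """Draw one quasistatic realization of the CZ error-generator
    coefficients from the fitted compressed model, with eps = t_g / T2*."""
    rng = rng or np.random.default_rng()
    eps = t_g / T2_star
    R1, R2 = rng.normal(size=2)     # one draw per circuit realization

    h_bar = {"XX": 0.016, "ZZ": -0.006, "IZ": 0.0, "ZI": 0.0}
    dh = {"XX": 0.0007 + 0.15 * eps, "ZZ": 0.036,
          "IZ": 0.0009 + 1.4 * eps, "ZI": 0.0009 + 1.4 * eps}

    return {
        # Two-body terms couple to both local fields, Eq. (33).
        "XX": h_bar["XX"] + dh["XX"] * (R1 + R2) / np.sqrt(2),
        "ZZ": h_bar["ZZ"] + dh["ZZ"] * (R1 + R2) / np.sqrt(2),
        # Single-qubit terms couple to one field each, Eq. (34).
        "ZI": h_bar["ZI"] + dh["ZI"] * R1,
        "IZ": h_bar["IZ"] + dh["IZ"] * R2,
    }

print(sample_cz_coefficients(T2_star=3e-6))
```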
### Interleaved Randomized Benchmarking
The compressed gate model introduced in the previous sub-section is particularly amenable to efficient numerical simulations of quantum circuit dynamics. In contrast to the full time-dependent Hamiltonian simulations, each gate is simulated as a single time step as opposed to several thousand. Furthermore, since the errors are purely coherent, it is sufficient to evolve pure states. As a result, it becomes possible to efficiently simulate complex RB circuits that consist of hundreds of individual gates per circuit and many random realizations.
In Fig. 3(a) we perform a direct simulation of interleaved RB for the two-qubit CZ gate within the compressed gate noise model. We generate sequences of random Clifford gates with an inverting gate at the end and plot the probability of returning to the two-qubit initial state \(|00\rangle\). In the interleaved sequence we insert a CZ gate in between each random Clifford gate [22]. From the relative decay rate of each curve we can then extract an estimate of the average two-qubit gate fidelity.
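The analysis step, extracting the interleaved-gate error from the two decays, follows the standard interleaved RB formula of Ref. [22]. The sketch below fits both curves to \(Ap^{m}+B\) and applies that formula; the synthetic data and fit settings are illustrative and are not the simulation behind Fig. 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard RB decay model: survival probability after m Cliffords."""
    return A * p**m + B

def interleaved_rb_fidelity(lengths, ref_data, int_data, d=4):
    """Estimate the interleaved-gate fidelity from reference and
    interleaved RB decay curves (two-qubit case, d = 4)."""
    (_, p_ref, _), _ = curve_fit(rb_decay, lengths, ref_data,
                                 p0=[0.7, 0.98, 0.25], maxfev=10000)
    (_, p_int, _), _ = curve_fit(rb_decay, lengths, int_data,
                                 p0=[0.7, 0.97, 0.25], maxfev=10000)
    r_gate = (d - 1) * (1 - p_int / p_ref) / d
    return 1 - r_gate

# Toy usage with synthetic, noiseless decays (depolarizing 0.99 / 0.985).
m = np.arange(1, 200, 10)
ref = 0.7 * 0.99**m + 0.25
itl = 0.7 * 0.985**m + 0.25
print(interleaved_rb_fidelity(m, ref, itl))
```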
To gain insight into the accuracy of interleaved RB, in Fig. 3(b), we compare the exact CZ gate fidelity against the estimate obtained from interleaved RB for different values of \(T_{2}^{*}\). Interleaved RB is simulated using the compressed gate noise model, which includes static gate-dependent coherent noise. It has been seen in previous studies that RB leads to systematic errors in the extracted gate fidelities for such noise models [4; 5; 6]. We observe similar behavior in our model, with interleaved RB displaying a systematic deviation from the true fidelity. Nevertheless, the relative error on the infidelity agrees to within a factor of two and we see that interleaved RB serves to systematically _underestimate_ the true fidelity. As a result, under these conditions we expect the interleaved RB fidelity to serve as a good proxy for the average gate fidelity, lending further theoretical support to recent experimental observations of high-fidelity two-qubit gates in silicon spin qubits based on interleaved RB measurements [20].
## V Discussion
We developed a systematic framework to address the effects of time-correlated noise in QPT. Through comparison to experiment, we validated a detailed theoretical model for noise in semiconductor spin qubits. Extending these results, we showed how to accurately account for such noise processes in QPT. Using our methods, we were then able to significantly compress the noise model needed to accurately predict noisy quantum circuits. We then compared interleaved RB fidelities with the exact process fidelities, finding good agreement that supports recent experimental observations of high-fidelity operation in silicon spin qubits [18; 19; 20].
Looking to future work, we expect that our methods for QPT will enable a significant speed-up of GST for solid-state qubits using methods based on compressed noise models [12]. The more rapid characterization of qubit operations afforded by these methods will enable more frequent device calibrations and therefore the maintenance of high-fidelity operation over long-time scales, which is crucial for fault-tolerant quantum computing. Moreover, we expect that systematic studies of fluctuation statistics of the error generators across different platforms, devices and fabrication processes will significantly deepen the collective understanding of relevant noise sources for quantum device operation.
As a further extension of this research, it is important to investigate more genuinely quantum sources of time-correlated or non-Markovian noise that arise from coherent interactions with an environment. Nearby quantum-coherent two-level fluctuators and residual nuclear spins could both serve as rich sources of time-correlated noise in spin qubit devices, although nuclear spins typically fluctuate only on timescales much longer than considered in this work [17]. Improving characterization methods to account for these quantum-coherent sources of non-Markovian noise is crucial for further validating the operation of solid-state quantum devices.
Figure 3: (a) Return probability for two-qubit interleaved randomized benchmarking circuit. We used the parameterized error generators from Sec. IV.3 with \(T_{2}^{*}=3\)\(\mu\)s. (b) Comparison between exact average gate fidelity and fidelity obtained from interleaved randomized benchmarking. The interleaved error rate comes within a factor of two of the exact error rate, but consistently underestimates the true value.
## Acknowledgments
We thank M. D. Stewart, Jr. and G. White for helpful discussions. We acknowledge support from ARO grants W911NF-15-1-0149, W911NF-23-1-0104, and W911NF-23-1-0258.
|
2310.11185 | Diagnostics of charge breeder electron cyclotron resonance ion source
plasma with consecutive transients method | The consecutive transients (CT) method is a diagnostics approach combining
experimental and computational techniques to probe the plasma parameters of
Charge Breeder Electron Cyclotron Resonance Ion Sources (CB-ECRIS). The method
is based on short pulse injection of singly charged ions into the charge
breeder plasma, and the measurement of the resulting transients of the charge
bred multiply charged ions. Estimates for plasma density, average electron
energy and characteristic times of ion confinement, electron impact ionization
and charge exchange are then computationally derived from the experimental
data. Here the CT method is applied for parametric studies of CBECRIS plasma.
Potassium ions were charge bred with hydrogen support plasma, and the effects
of varied microwave power, neutral gas pressure and magnetic field strength on
the plasma parameters and charge breeding efficiency are presented. It is shown
that the method is sufficiently sensitive to provide relevant information on
changing plasma conditions with the control parameters. The neutral gas
pressure had the strongest impact on the plasma parameters, and the results
agree with trends obtained by using other diagnostic methods, e.g. the increase
of plasma density with increased neutral gas pressure. Furthermore, the method
can provide information inaccessible with other methods, such as the
characteristic times of ion confinement, ionization and charge exchange, and
the hierarchy between them. The results show that the peak charge breeding
efficiency is obtained for the highest ion charge state for which the
ionization time remains shorter than the charge exchange and the ion
confinement times. | Julien Angot, Olli Tarvainen, Hannu Koivisto, Miha Luntinen, Thomas Thuillier, Ville Toivanen | 2023-10-17T12:03:57Z | http://arxiv.org/abs/2310.11185v1 | Diagnostics of charge breeder electron cyclotron resonance ion source plasma with consecutive transients method
###### Abstract
The consecutive transients (CT) method is a diagnostics approach combining experimental and computational techniques to probe the plasma parameters of Charge Breeder Electron Cyclotron Resonance Ion Sources (CB-ECRIS). The method is based on short pulse injection of singly charged ions into the charge breeder plasma, and the measurement of the resulting transients of the charge bred multiply charged ions. Estimates for plasma density, average electron energy and characteristic times of ion confinement, electron impact ionization and charge exchange are then computationally derived from the experimental data. Here the CT method is applied for parametric studies of CB-ECRIS plasma. Potassium ions were charge bred with hydrogen support plasma, and the effects of varied microwave power, neutral gas pressure and magnetic field strength on the plasma parameters and charge breeding efficiency are presented. It is shown that the method is sufficiently sensitive to provide relevant information on changing plasma conditions with the control parameters. The neutral gas pressure had the strongest impact on the plasma parameters, and the results agree with trends obtained by using other diagnostic methods, e.g. the increase of plasma density with increased neutral gas pressure. Furthermore, the method can provide information inaccessible with other methods, such as the characteristic times of ion confinement, ionization and charge exchange -- and the hierarchy between them. The results show that the peak charge breeding efficiency is obtained for the highest ion charge state for which the ionization time remains shorter than the charge exchange and the ion confinement times.
## I Introduction
Charge Breder Electron Cyclotron Resonance Ion Sources (CB-ECRIS) are used in Isotope Separation Online (ISOL) -facilities for post-acceleration of radioactive nuclei [1; 2]. The charge breeding process involves deceleration and capture of the incident 1+ ion beam, stepwise electron impact ionization to high charge state in the magnetically confined minimum-B ECRIS plasma, and extraction of the charge bred ions together with buffer (or support) gas ions. Optimising the charge breeding efficiency and time benefits from dedicated plasma diagnostics for the CB-ECRIS plasma parameters affecting these steps.
ECRIS plasmas are unique in many aspects. The electron energy distribution (EED) is strongly non-Maxwellian (see e.g. Ref. [3]) owing to the efficient energy transfer from the microwave electric field to the electrons on a closed magnetic isosurface where the relativistic resonance condition \(\omega_{\rm RF}=\omega_{ce}=eB/\gamma m_{e}\) is met. The resulting electron energies range from a few eV to several hundred keV with the high energy electrons being strongly confined magnetically. The high charge state ions remain relatively cold, i.e. \(5-30\) eV as indicated by the Doppler broadening of their emission lines [4], and are confined electrostatically in a local potential minimum caused by the accumulation of hot electrons in the centre of the trap [5; 6]. In fact, simulations [7; 32] allude to the presence of two (small) potential dips, along the plasma chamber axis, at the mirror points for hot electrons near the ECR-zone. Non-invasive diagnostics methods applied for studying minimum-B ECRIS plasmas (not necessarily charge breeders) include bremsstrahlung and x-ray diagnostics, microwave interferometry, plasma diamagnetism measurement, optical emission spectroscopy, measurement of the plasma potential, detection of kinetic instabilities, and escaping electron spectroscopy [9; 10; 11; 12; 13; 14; 15; 16; 17].
The Consecutive Transients (CT) method [18; 19] is a recently developed approach combining computational techniques and experiments for probing the plasma density \(n_{e}\), (warm) electron average energy \(\langle E_{e}\rangle\), and characteristic times of ion confinement \(\tau_{\rm conf}\), charge exchange \(\tau_{\rm ex}\) and electron impact ionization \(\tau_{\rm ion}\) in CB-ECRIS plasmas. The CT method is based on short pulse 1+ injection into the ECRIS plasma and the analysis of the resulting N+ ion beam transients extracted from the ECRIS. As such, the CT method has both benefits and drawbacks. The positives are: the method can be considered non-invasive, the magnitude of the perturbation it causes can
be controlled by adjusting the injected 1+ pulse width and intensity, and the equipment required for the 1+ injection and pulsing as well as N+ current measurement is readily available in all CB-ECRIS facilities. The down-sides of the method are: limited combinations of 1+ ions and plasma species (clean charge state distribution spectrum with minimum of 5 consecutive charge states without \(m/q\)-overlap is necessary for the method), the complexity of data analysis, and the large uncertainties of the characteristic times due to the lack of accurate cross section data for high charge state ionisation. As such, the method either complements existing CB-ECRIS diagnostics while requiring fewer assumptions, or provides information inaccessible through other techniques. For example, it has been shown with the CT method that the ion confinement time is not a linear function but rather increases exponentially with the charge state [18; 19], which is commensurate with electrostatic ion confinement in a local potential dip [5; 6]. It has been shown that the inherent uncertainties of the CT method can be reduced e.g. by two-component injection or overlapping the \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets of neighbouring charge states albeit with the caveat of additional assumption [20]. Furthermore, it has been demonstrated that the main contributor to the large relative uncertainty of the CT method is the lack of precise ionization cross section data whereas the presumed EED has a smaller effect [21].
In this paper we apply the CT method for parametric studies of the CB-ECRIS plasma. We do not detail the method itself but rather refer the reader to the literature [18; 19; 20; 21] for a comprehensive account of the assumptions, computational details and data analysis.
In the following sections we describe the experimental setup, and present the measured plasma energy content and characteristic times along with the charge breeding efficiency of potassium as a function of the CB-ECRIS microwave power, neutral (hydrogen) pressure and magnetic field strength. These sweeps are carried out to demonstrate that the CT method is sensitive enough to pick up trends in the above plasma parameters (observables) responding to the change of the control parameters. The results of the CT method are placed in context comparing them to the outcomes of other ECRIS plasma diagnostics.
## II Experimental setup and procedure
The experiments were carried out on the LPSC 1+\(\rightarrow\)N+ test bench, shown in Fig. 1, dedicated to the development of the PHOENIX CB-ECRIS [22] -- in particular measurements of the charge breeding efficiency, charge breeding time and \(m/q\)-contamination.
Delivering on this remit requires generating a stable 1+ beam with fine tuning of the ion injection energy, a good base vacuum on the order of \(10^{-8}\) mbar or better, the hardware for pulsing of the 1+ beam and beam diagnostics. Hence, the 1+ beam line is equipped with a surface ionisation source producing alkali metal beams, a dipole magnet for mass separation, a Faraday cup to measure the beam intensity, beam optics and deflecting plates for 1+ injection optimisation and pulsing. The 1+ source potential is typically set to HV=20 kV. The CB-ECRIS plasma chamber is then biased to HV-\(\Delta\)V with a negative supply floating at the 1+ source potential. This configuration allows fine-tuning the 1+ ion energy, which is essential for the 1+ beam capture by the CB plasma through electrostatic deceleration by the charge breeder and its plasma potential, and subsequent thermalization of the injected ions in ion-ion collisions with the buffer gas ions. The beams extracted from the charge breeder are analysed in the N+ beam line with a mass spectrometer and diagnostics including a Faraday cup for beam intensity measurement.
The current incarnation of the LPSC charge breeder is a 14.5 GHz minimum-B ECR ion source equipped with three coils to create the axial magnetic profile with two magnetic mirrors at the injection and extraction, respectively [23]. Typical operational values of the injection, minimum-B and extraction axial magnetic fields are B\({}_{\mathrm{inj}}\approx 1.6\) T, B\({}_{\mathrm{min}}\approx 0.4\) T, and B\({}_{\mathrm{ext}}\approx 0.8\) T. A permanent magnet sextupole surrounding the plasma chamber creates the radial magnetic mirror of 0.8 T at the plasma chamber wall, in front of the pole (the total radial field then being affected by the radial component of the solenoid field). A 2 kW klystron microwave amplifier for plasma (electron) heating is connected to the plasma chamber through a direct waveguide port. The vacuum pumping system assures a base pressure of approximately \(3\times 10^{-8}\) mbar at the source injection.
Here we apply the CT method to observe the influence of different charge breeder tuning parameters on the plasma characteristics i.e. \(n_{e}\), \(\left\langle E_{e}\right\rangle\) and \(\tau_{\mathrm{conf}}\), \(\tau_{\mathrm{cex}}\) and \(\tau_{\mathrm{ion}}\) of potassium (\({}^{39}\)K) ions. We chose K as the injected element because it is an alkali element, thus minimising the wall recycling, with several (consecutive) charge states from K\({}^{+}\) to K\({}^{12+}\) found in the \(m/q\) spectrum without overlap with the support or residual gas ions. Hydrogen was chosen as plasma support gas to obtain high charge breeding efficiencies of high charge state K ions. The ion source control parameters varied systematically in this study were: (i) the microwave power as it presumably influences the EED and plasma density (see e.g. Refs. [24; 11; 25]), (ii) the support gas feed rate (pressure) which acts on the neutral and electron densities (see e.g. Refs. [26; 24]), (iii) the magnetic field minimum B\({}_{\mathrm{min}}\) as it affects the tail of the EED and the occurrence of kinetic plasma instabilities [27; 28; 29], and (iv) the extraction magnetic field B\({}_{\mathrm{ext}}\), which allegedly affects the trapping of the hot electrons and the global plasma confinement [30].
bunches of injected ions at 1 Hz repetition rate allowing the ion current transients to decay before the onset of the subsequent 1+ pulse. The multi-charged K beam intensity responses (transients) were measured from the Faraday cup of the N+ beam line averaging over 64 waveforms for each charge state to improve the signal-to-noise ratio of the measurement. The microwave power and gas feed rate studies were carried out in the control parameter ranges yielding a stable CB regime i.e. the magnetic field was chosen accordingly to avoid kinetic instabilities. Only one CB-ECRIS control parameter was varied during each sweep. The B\({}_{\mathrm{min}}\) and B\({}_{\mathrm{ext}}\) values corresponding to certain combinations of coil currents were simulated with Radia3D [31]. The coil currents were then adjusted so that in each sweep only either B\({}_{\mathrm{min}}\) or B\({}_{\mathrm{ext}}\) varied while other field values remained constant. The parameter settings for each sweep are given in Section III along with the data plots. For each setting, the \(\Delta\)V value was adjusted to optimize the K\({}^{9+}\) breeding efficiency.
## III Results
In the following subsections we present the results of the CB-ECRIS parameter sweeps. We first describe the K\({}^{+}\rightarrow\) K\({}^{n+}\) charge breeding efficiencies as a function of each parameter. The \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets derived from the transients of each ion charge state are used for calculating the plasma energy content \(n_{e}\left\langle E_{e}\right\rangle\), which is then presented along with the characteristic times.
For clarity, we present two examples of the \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets for K\({}^{10+}\) in Figs. 2(a) and 2(b) highlighting the change of the calculated plasma energy content from \(3.5\times 10^{14}\)eV/cm\({}^{3}\) to \(6.9\times 10^{14}\)eV/cm\({}^{3}\) with the notable shift of the \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution space towards higher plasma density. The energy content value is taken as the median value of the product of the \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solutions from the CT-method. These examples were measured as a part of the gas pressure sweep discussed later. The \(n_{e}\) and \(\left\langle E_{e}\right\rangle\) values were restricted to \(10^{11}\,\mathrm{cm}^{-3}\leq n_{e}\leq 2.6\times 10^{12}\,\mathrm{cm}^{-3}\) and \(10\,\mathrm{eV}\leq E_{e}\leq 10\,\mathrm{keV}\). These limits are based on experimental evidence and simulations of the electron density, and electron energy as explained in
Figure 1: Schematic view of the 1+\(\rightarrow\)N+ CB-ECRIS test bench.
Ref. [18] and references therein.
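To make this procedure concrete, the short sketch below illustrates how a median plasma energy content could be computed from a set of \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solutions, using the restriction limits quoted above; the numerical arrays are placeholders for illustration only, not measured data.

```python
import numpy as np

# Hypothetical (<E_e>, n_e) solution pairs from the CT method (placeholder values;
# real solution sets come from fitting the measured N+ current transients).
E_e = np.array([0.8e3, 1.0e3, 1.2e3, 1.5e3])      # average electron energy [eV]
n_e = np.array([3.0e11, 3.8e11, 5.0e11, 7.0e11])  # electron density [cm^-3]

# Keep only solutions inside the physically motivated limits quoted in the text.
mask = (n_e >= 1e11) & (n_e <= 2.6e12) & (E_e >= 10) & (E_e <= 10e3)

# The plasma energy content is taken as the median of the product n_e * <E_e>.
energy_content = np.median(n_e[mask] * E_e[mask])   # [eV/cm^3]
print(f"median energy content = {energy_content:.2e} eV/cm^3")
```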
The plasma energy contents and characteristic times are presented in Sections III.1 - III.4 without the estimated uncertainties for the sake of illustration clarity. The uncertainties are discussed separately in Section III.5.
### Microwave power
The charge breeding efficiencies of K\({}^{4+}\)-K\({}^{12+}\) as a function of the microwave power are shown in Fig. 3. Increasing the power increases the average charge state of K ions, which causes the breeding efficiency of K\({}^{9+}\)-K\({}^{11+}\) to improve significantly with this control parameter. In contrast, the efficiency of charge states \(\leq\)K\({}^{8+}\) reaches a maximum at 350 W and then decreases as the power is ramped up. Two sweeps were made to ensure the reproducibility of the observed trends, but both are shown only for the plasma energy content plots. The increase of the high charge state breeding efficiency with the microwave power was observed in both microwave power sweeps. The other source settings in these sweeps were as follows: neutral gas pressure \(1.2\times 10^{-7}\)\(-1.3\times 10^{-7}\) mbar, \(\rm B_{inj}\)\(1.57-1.58\) T, \(\rm B_{min}\)\(0.44-0.45\) T and \(\rm B_{ext}\)\(0.84\) T.
Figure 4 shows the (median) energy content as a function of the microwave power for potassium charge states from K\({}^{8+}\) to K\({}^{10+}\) in the two microwave power sweeps. Three observations can be made; (i) the calculated energy content depends on the charge state, which is attributed to the highest charge states originating from the core of the plasma where the plasma density and electron energies can be argued to be higher, (ii) the trend of the plasma energy content is to increase with microwave power by 20-40% (ignoring a single outlier data point for K\({}^{10+}\) at 350 W), and (iii) the trend of the energy content was found to be similar for both microwave power sweeps.
Figure 5 shows \(\tau_{\rm conf}\), \(\tau_{\rm ex}\) and \(\tau_{\rm ion}\) for charge states K\({}^{6+}\)-K\({}^{10+}\) as a function of the microwave power. The confinement time of the high charge states K\({}^{8+}\)-K\({}^{10+}\) is longer than the confinement time of the charge states K\({}^{6+}\)-K\({}^{7+}\), which is commensurate with the spatial distribution of the ions found in simulations and experiments [32; 33; 34] suggesting that the highest charge states are highly collisional and electrostatically confined (as opposed to magnetically confined electrons). The trend of \(\tau_{\rm conf}\) with the microwave power is to decrease for charge states K\({}^{6+}\)-K\({}^{7+}\) and to increase for charge states K\({}^{8+}\)-K\({}^{10+}\). Admittedly there are data points deviating from the trend, and altogether the variation of \(\tau_{\rm conf}\) is not drastic. Nevertheless, we draw the attention to the fact that the confinement time of the highest charge states,
Figure 3: The charge breeding efficiency of K\({}^{4+}\) - K\({}^{12+}\) as a function of the microwave power.
Figure 2: Examples of the \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets for K\({}^{10+}\) and the corresponding plasma energy contents. The H\({}_{2}\) gas pressure was increased from \(6.8\times 10^{-8}\) mbar in (a) to \(2.3\times 10^{-7}\) mbar in (b). The energy content increases due to increased median plasma density from \(3.8\times 10^{11}\) cm\({}^{-3}\) to \(1.5\times 10^{12}\) cm\({}^{-3}\).
i.e. K\({}^{8+}\)-K\({}^{10+}\), is always longer than the corresponding ionization times. Furthermore, in the case of K\({}^{10+}\) the notable increase of \(\tau_{\rm conf}\) at high microwave power is correlated with a significant increase of the corresponding charge breeding efficiency. The charge exchange time is found longest for the high charge state ions and the trend of \(\tau_{\rm cex}\) is to decrease with the microwave power, but again, the changes are very small. Finally, \(\tau_{\rm ion}\) does not exhibit a clear trend with the microwave power, but is shorter for the lower charge states than for the higher ones. This is probably due to very low neutral gas density in the core plasma (potential dip) where the highest charge states reside as discussed in Ref. [32]. The ionization rate coefficient first increases and then typically plateaus towards high \(\langle E_{e}\rangle\) (e.g. \(>1000\) eV for K\({}^{8+}\)), so improved electron heating is not expected to affect \(\tau_{\rm ion}\) by much assuming sufficiently high \(\langle E_{e}\rangle\) as indicated by the solution sets. Generally speaking, high charge state production requires \(\tau_{\rm ion}\) to be shorter than \(\tau_{\rm cex}\), which is the case found throughout the power sweep for most of the charge states, especially those below the K\({}^{9+}\) peak charge state of the CB-efficiency. We also note that the continuous increase of the K\({}^{9+}\) efficiency with the microwave power is accompanied by a decrease of \(\tau_{\rm ion}\) of K\({}^{6+/7+}\) and an increase of \(\tau_{\rm conf}\) of the higher charge state ions. This highlights the importance of the hierarchy of the characteristic times with regard to the optimum charge state for the highest CB efficiency.
### Neutral gas pressure
The effect of the neutral H\({}_{2}\) pressure (gas feed rate) on the charge breeding efficiency and plasma parameters was studied through two sweeps with otherwise almost identical configuration, i.e. \(530-\)560 W microwave power, \(1.57\) T B\({}_{\rm inj}\), \(0.44-\)0.45 T B\({}_{\rm min}\) and \(0.83-\)0.84 T B\({}_{\rm ext}\). Fig. 6 shows the charge breeding efficiencies of K\({}^{4+}\)-K\({}^{12+}\) for one of the sweeps. The behaviour is rather complex but can be summarized as follows; the higher the charge state, the lower the optimum pressure for maximising the charge breeding efficiency.
The effect of the H\({}_{2}\) neutral pressure on the plasma energy content is shown in Fig. 7 for potassium charge states from K\({}^{8+}\) to K\({}^{10+}\). Here the results of both sweeps are displayed (either solid or dashed lines). It is seen that the trend of the plasma energy content is to increase with the gas pressure (with the exception of the highest pressure), which is attributed to higher plasma density as indicated by the histograms of the solution sets shown as projections to the axes in Fig. 2. The finding is commensurate with diamagnetic loop experiments reporting the plasma energy content to increase with the neutral gas pressure (saturating at high pressures) [11]. Figure 7 also shows the evolution of the \(\Delta\)V-value. The \(\Delta\)V appears to follow the same trend as the energy content. This is consistent as the optimum energy (tuned with \(\Delta\)V) for 1+ beam capture relies on the plasma potential [35], which presumably depends on the low energy electron density as implied by the data in Ref. [14].
Figure 8 shows \(\tau_{\rm conf}\), \(\tau_{\rm cex}\) and \(\tau_{\rm ion}\) for charge states K\({}^{6+}\)-K\({}^{10+}\) as a function of the H\({}_{2}\) neutral gas pressure. All these characteristic times tend to decrease with the neutral (buffer) gas pressure, with only the ionisation time of K\({}^{10+}\) breaking the trend. We note that obvious outlier points, e.g. \(>\)1 s \(\tau_{\rm cex}\) for K\({}^{6+}\) arising from poor fits to experimental transient data, are not shown in the figure, which explains 'missing data points' at low pressure.
### Magnetic field minimum, B\({}_{\rm min}\)
The charge breeding efficiencies of K\({}^{4+}\)-K\({}^{12+}\) at different B\({}_{\rm min}\) are shown in Fig. 9. The other source parameters, i.e. magnetic field maxima, microwave power and H\({}_{2}\) gas pressure were kept constant at B\({}_{\rm inj}\) of \(1.51\) T, B\({}_{\rm ext}\) of \(0.82\) T, \(530\) W and \(1.1\times 10^{-7}\) mbar, respectively. The CB efficiency of charge states \(\leq\)K\({}^{7+}\) decreases, and the efficiency of charge states \(\geq\)K\({}^{8+}\) increases with increasing B\({}_{\rm min}\).
The plasma energy content at three different B\({}_{\rm min}\) settings is presented for K\({}^{8+}\)-K\({}^{10+}\) in Fig. 10. The highest energy content is systematically found at the strongest B\({}_{\rm min}\).
The characteristic times \(\tau_{\rm conf}\), \(\tau_{\rm cex}\) and \(\tau_{\rm ion}\) for charge states K\({}^{6+}\)-K\({}^{10+}\) are shown in Fig. 11 as a function of B\({}_{\rm min}\). There are no systematic trends except the confinement time of the highest K\({}^{10+}\) charge state approximately doubling from the weakest to strongest B\({}_{\rm min}\) value, which together with the enhanced CB efficiency of even higher charge states implies improved (electro
Figure 4: The (local) plasma energy content of K\({}^{8+}\), K\({}^{9+}\) and K\({}^{10+}\) as a function of the microwave power. The solid and dashed lines represent two different power sweeps.
static) ion confinement.
### Extraction mirror magnetic field, \(\mathbf{B_{\mathrm{ext}}}\)
The effect of the extraction mirror field \(\mathrm{B_{ext}}\) on the charge breeding efficiencies of K\({}^{4+}\)-K\({}^{12+}\) is illustrated in Fig. 12. Here the other source parameters were as follows: \(\mathrm{B_{inj}}\) of 1.58 T, \(\mathrm{B_{min}}\) of 0.45 T, 530 W microwave power and \(1.4\times 10^{-7}\) mbar H\({}_{2}\) pressure. The CB efficiency of high charge state ions, i.e. K\({}^{9+}\) and higher, exhibits a clear optimum at 0.83 \(-\)0.84 T while the efficiency of lower charge states decreases monotonically with increasing \(\mathrm{B_{ext}}\).
Figure 13 shows the plasma energy content (for K\({}^{8+}\)-K\({}^{10+}\)) as a function of \(\mathrm{B_{ext}}\). The extraction field has very little effect on the energy content, i.e. there is no trend observed with this parameter.
The characteristic times \(\tau_{\mathrm{conf}}\), \(\tau_{\mathrm{ex}}\) and \(\tau_{\mathrm{ion}}\) of charge states K\({}^{6+}\)-K\({}^{10+}\) measured with different \(\mathrm{B_{ext}}\) are shown in Fig. 14. There are no clear trends, which is in line with \(\mathrm{B_{ext}}\) having little effect on the CB efficiency compared to e.g. the neutral gas pressure. Nevertheless, we note that the efficiency increase of the optimum charge state K\({}^{9+}\) at \(\mathrm{B_{ext}}\) between 0.822 T and 0.852 T corresponds to an increase of the confinement time.
Figure 5: The confinement, charge exchange and ionisation times (\(\tau_{\mathrm{conf}}\), \(\tau_{\mathrm{ex}}\) and \(\tau_{\mathrm{ion}}\)) of K\({}^{6+}\) - K\({}^{10+}\) ions as a function of the microwave power.
Figure 6: The charge breeding efficiency of K\({}^{4+}\) - K\({}^{12+}\) as a function of the H\({}_{2}\) (buffer) gas pressure.
Figure 7: The (local) plasma energy content of K\({}^{8+}\), K\({}^{9+}\) and K\({}^{10+}\) together with the optimum \(\Delta\)V value as a function of the H\({}_{2}\) (buffer) gas pressure. The solid and dashed lines represent two different pressure sweeps.
### On the uncertainty of \(\tau_{\text{conf}}\), \(\tau_{\text{cex}}\) and \(\tau_{\text{ion}}\)
As stated earlier, the most prominent downside of using the CT method for estimating the characteristic times \(\tau_{\text{conf}}\), \(\tau_{\text{ex}}\) and \(\tau_{\text{ion}}\) of the high charge state ions is the large uncertainty. Typical uncertainties of high charge state potassium confinement and charge exchange times are 100-200% while the ionization times can be estimated more accurately, i.e. with 40-70% relative uncertainty [18; 19; 20; 21], which raises the concern that the CT method might not be able to detect small variations of the plasma parameters. Thus, the statistical relevance of the measurement results presented above could be questioned. However, it has been shown in Ref. [21] that the large uncertainties are inherited from the ionization cross section data [36]. This allows us to argue that the CT method can reveal trends of the characteristic times as a function of a control parameter, such as microwave power, gas pressure and magnetic field strength, although the absolute values of the times are subject to systematic errors of the cross section data. In other words, the conclusions based on the trends of the characteristic time median values displayed in Figs. 5, 8, 11 and 14 are not affected by the uncertainties of the individual data points. Hence, the data are presented without the corresponding uncertainties for the clarity of the illustration.
Figure 8: The confinement, charge exchange and ionisation times of K\({}^{6+}\) - K\({}^{10+}\) as a function of the H\({}_{2}\) (buffer) gas pressure.
Figure 10: The (local) plasma energy content of K\({}^{8+}\), K\({}^{9+}\) and K\({}^{10+}\) as a function of B\({}_{\text{min}}\).
## IV Conclusions and discussion
It was found that the charge breeding efficiency of high charge state K ions and the plasma energy content in the core of the ECR discharge increases with the microwave power. Closer inspection of the solution set histograms reveals that this is most likely due to increase of the median \(\langle E_{e}\rangle\) rather than \(n_{e}\) as illustrated in Fig. 15 showing the \(\left(\langle E_{e}\rangle\right.,n_{e})\) solution sets for K\({}^{10+}\) at microwave powers of 150 W (a) and 600 W (b) as a representative example. This interpretation is commensurate with the shift of the charge state distribution (charge breeding efficiency vs. charge state) as the peak of the ionisation cross section is at higher energy for the high charge ions. The increase of the plasma energy content with the microwave power has been observed earlier with diamagnetic loop diagnostic [11]. No sound conclusions can be made from the characteristic times as a function of the microwave power.
Neutral gas pressure was found to be the control parameter producing the clearest trends of the plasma energy content and characteristic times. The increase of the plasma energy content with the neutral gas pressure is attributed to the increase of the plasma density rather than the average energy of the warm electrons (see Fig. 2). Similar conclusion, i.e. increase of \(n_{e}\) with the neutral gas pressure, has been drawn when probing ECRIS plasmas with diamagnetic loop [11], K-alpha x-ray emission [24] or 1+ in-flight ionisation in a charge breeder [34]. The behavior of the charge breeding efficiency with the buffer gas pressure can be explained as follows: as the
Figure 11: The confinement, charge exchange and ionisation times of K\({}^{6+}\) - K\({}^{10+}\) as a function of B\({}_{\rm min}\).
Figure 12: The charge breeding efficiency of K\({}^{4+}\) - K\({}^{12+}\) as a function of B\({}_{\rm ext}\).
neutral gas feed rate is increased, the enhanced charge exchange rate limits the high charge state ion production whereas ions with low or medium charge state first benefit from the increased plasma density (electron impact ionisation rate) and only at very high neutral gas pressure their production is limited by the charge exchange. This interpretation is supported by the fact that the charge exchange times have a decreasing trend with the pressure. It is worth noting that the ionisation time of the highest charge states is shorter than their charge exchange time only at the lowest pressure, which matches the observed trend of the charge breeding efficiency. The decrease of the confinement time with the pressure is attributed to higher plasma density, which increases the electron flux and ambipolar plasma potential (see e.g. [14]), thus reducing the average ion confinement time as the fluxes of negative and positive charge carriers are equal in equilibrium condition (i.e. \(n_{e}/\tau_{e}=\Sigma qn_{i}^{q}/(\tau_{\text{conf}}^{q})\) where \(q\) refers to the charge state of the ion). The decrease of the charge exchange and ionisation times with the neutral gas pressure are presumably due to increased neutral and plasma (electron) densities affecting the charge exchange and electron impact ionisation rates \(n_{n}n_{i}\left\langle\sigma_{\text{cex}}v_{i}\right\rangle\) and \(n_{e}n_{i}\left\langle\sigma_{\text{ion}}v_{e}\right\rangle\), respectively.
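As a hedged illustration of the scaling argument above, the characteristic times vary inversely with the corresponding rates, so increasing the relevant density shortens the time proportionally; the density and rate coefficient below are hypothetical and serve only to show the scaling.

```python
def char_time(density_cm3, rate_coeff_cm3_s):
    """Characteristic time [s] of a process with rate density * <sigma v>."""
    return 1.0 / (density_cm3 * rate_coeff_cm3_s)

n_n, sigv_cex = 2.0e11, 1.0e-9   # hypothetical neutral density and CEX rate coefficient
print(char_time(n_n, sigv_cex))       # baseline charge exchange time
print(char_time(2 * n_n, sigv_cex))   # doubling the neutral density halves tau_cex
```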
Examining the solution sets obtained in the extremes of the minimum-B magnetic field sweep reveals that the small increase of the plasma energy content with \(B_{\text{min}}\) is most likely due to increasing \(\left\langle E_{e}\right\rangle\) rather than \(n_{e}\) as illustrated in Fig. 16 showing the \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets for K\({}^{10+}\) at \(B_{\text{min}}\) of 0.37 T (a) and 0.40 T (b). This is consistent with \(B_{\text{min}}\) being the most influential parameter affecting the plasma bremsstrahlung spectral temperature [37] and the occurrence of kinetic instabilities [29] driven by the anisotropy of the hot electron component [38]. Finally, \(B_{\text{ext}}\) sweep yielded very similar solution sets (not shown for brevity) and median \(n_{e}\) and \(\left\langle E_{e}\right\rangle\) -values regardless of the absolute strength of the extraction mirror field, i.e. \(B_{\text{ext}}\) appears to have a smaller effect on the
Figure 14: The confinement, charge exchange and ionisation times of K\({}^{6+}\) - K\({}^{10+}\) as a function of B\({}_{\text{ext}}\).
Figure 15: The \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets for K\({}^{10+}\) and the corresponding plasma energy contents. The microwave power was increased from 150 W in (a) to 600 W in (b). The energy content increases due to increase in the median electron energy from 1.0 keV to 1.5 keV.
plasma parameters than \(B_{\rm min}\) as discussed in Ref. [30].
Besides the parametric trends of the plasma energy content and characteristic times we can make some general observations. The conditions for producing fully stripped ions in ECRIS plasma have been postulated in Ref. [39] presenting the so-called Golovanivsky plot displaying the required product \(n_{e}\tau_{\rm conf}\) at the optimum electron temperature \(T_{e}\) of a Maxwellian distribution with \(kT_{e}=\left\langle E_{e}\right\rangle2/3\) to produce various fully stripped ions. For argon, which is the neighbouring element to potassium, the triple product \(n_{e}\tau_{\rm conf}T_{e}\) required for fully stripped ions is approximately \(3.2\times 10^{15}\)eVs/cm\({}^{3}\). In this work we have found that the plasma energy content \(n_{e}\left\langle E_{e}\right\rangle\) ranges from \(0.2\times 10^{15}\)eV/cm\({}^{3}\) to \(1.0\times 10^{15}\)eV/cm\({}^{3}\) with \(10-15\) ms confinement times for the highest charge states of potassium. These values translate to triple product of \(0.1\)-\(1.0\times 10^{13}\)eVs/cm\({}^{3}\) suggesting that fully stripped K\({}^{19+}\) ions cannot be produced with the CB-ECRIS, which is commensurate with the extracted charge state distribution where the maximum detectable charge state of potassium is K\({}^{12+}\).
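The triple-product estimate quoted above follows directly from the measured ranges; a quick check, assuming \(kT_{e}=2\left\langle E_{e}\right\rangle/3\) as stated in the text:

```python
# Triple product n_e * tau_conf * kT_e with kT_e = (2/3) <E_e>,
# using the measured ranges of n_e<E_e> and tau_conf quoted above.
for energy_content, tau_conf in [(0.2e15, 10e-3), (1.0e15, 15e-3)]:
    triple = energy_content * (2.0 / 3.0) * tau_conf   # [eV s / cm^3]
    print(f"{triple:.1e} eV s/cm^3")
# -> ~1e12 to ~1e13 eV s/cm^3, well below the ~3.2e15 eV s/cm^3 required
#    on the Golovanivsky plot for fully stripped Ar-like ions.
```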
Further to the absolute scale of the triple product (plasma parameters) we note that the results reveal a hierarchy of characteristic times relevant for high charge state ion production. The peak of the charge breeding efficiency distribution is at K\({}^{9+}\), which is the highest charge state for which we consistently find \(\tau_{\rm ion}<\tau_{\rm cex}\) and \(\tau_{\rm ion}<\tau_{\rm conf}\). In other words, for charge states 9+ and lower, the ionisation time is shorter than the charge exchange time or the ion confinement time, which causes ions to "pile up" on that charge state, making its charge breeding efficiency the highest. For charge states above the peak of the breeding efficiency distribution the time hierarchy appears to change to \(\tau_{\rm cex}<\tau_{\rm conf}\) and \(\tau_{\rm cex}<\tau_{\rm ion}\), i.e. charge exchange limits the production of very high charge state ions.
Overall, the results discussed here have demonstrated that, despite the large relative uncertainty (see Ref. [21] for a thorough discussion), the CT method is sensitive enough to identify trends in the plasma parameters, e.g. the increase of the plasma density with the neutral gas pressure. Importantly, these trends are similar to those inferred from other diagnostics such as diamagnetic loop experiments and K-alpha emission [11; 24]. The advantage of the CT method over other diagnostic techniques arises from its simplicity; practically all charge breeders are readily equipped with 1+ beam pulsing and N+ beam current (transient) detection apparatus. The computational analysis tools required to translate the beam current transients into \(\left(\left\langle E_{e}\right\rangle,n_{e}\right)\) solution sets and corresponding characteristic times, \(\tau_{\rm conf}\), \(\tau_{\rm cex}\) and \(\tau_{\rm ion}\), are open source and available through GitHub [40].
###### Acknowledgements.
We acknowledge grants of computer capacity from the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533), and support of the Academy of Finland Project funding (Grant No:315855).
|
2310.16826 | Deep machine learning for meteor monitoring: advances with transfer
learning and gradient-weighted class activation mapping | In recent decades, the use of optical detection systems for meteor studies
has increased dramatically, resulting in huge amounts of data being analyzed.
Automated meteor detection tools are essential for studying the continuous
meteoroid incoming flux, recovering fresh meteorites, and achieving a better
understanding of our Solar System. Concerning meteor detection, distinguishing
false positives between meteor and non-meteor images has traditionally been
performed by hand, which is significantly time-consuming. To address this
issue, we developed a fully automated pipeline that uses Convolutional Neural
Networks (CNNs) to classify candidate meteor detections. Our new method is able
to detect meteors even in images that contain static elements such as clouds,
the Moon, and buildings. To accurately locate the meteor within each frame, we
employ the Gradient-weighted Class Activation Mapping (Grad-CAM) technique.
This method facilitates the identification of the region of interest by
multiplying the activations from the last convolutional layer with the average
of the gradients across the feature map of that layer. By combining these
findings with the activation map derived from the first convolutional layer, we
effectively pinpoint the most probable pixel location of the meteor. We trained
and evaluated our model on a large dataset collected by the Spanish Meteor
Network (SPMN) and achieved a precision of 98\%. Our new methodology presented
here has the potential to reduce the workload of meteor scientists and station
operators and improve the accuracy of meteor tracking and classification. | Eloy Peña-Asensio, Josep M. Trigo-Rodríguez, Pau Grèbol-Tomàs, David Regordosa-Avellana, Albert Rimola | 2023-10-25T17:56:28Z | http://arxiv.org/abs/2310.16826v2 | Deep machine learning for meteor monitoring: advances with transfer learning and gradient-weighted class activation mapping
###### Abstract
In recent decades, the use of optical detection systems for meteor studies has increased dramatically, resulting in huge amounts of data being analyzed. Automated meteor detection tools are essential for studying the continuous meteorod incoming flux, recovering fresh meteorites, and achieving a better understanding of our Solar System. Concerning meteor detection, distinguishing false positives between meteor and non-meteor images has traditionally been performed by hand, which is significantly time-consuming. To address this issue, we developed a fully automated pipeline that uses Convolutional Neural Networks (CNNs) to classify candidate meteor detections. Our new method is able to detect meteors even in images that contain static elements such as clouds, the Moon, and buildings. To accurately locate the meteor within each frame, we employ the Gradient-weighted Class Activation Mapping (Grad-CAM) technique. This method facilitates the identification of the region of interest by multiplying the activations from the last convolutional layer with the average of the gradients across the feature map of that layer. By combining these findings with the activation map derived from the first convolutional layer, we effectively pinpoint the most probable pixel location of the meteor. We trained and evaluated our model on a large dataset collected by the Spanish Meteor Network (SPMN) and achieved a precision of 98%. Our new methodology presented here has the potential to reduce the workload of meteor scientists and station operators and improve the accuracy of meteor tracking and classification.
keywords: meteorites, meteors, meteoroids, machine learning, convolutional neural networks, transfer learning +
Footnote †: journal: Planetary and Space Science
## 1 Introduction
Meteors, popularly known as shooting stars, particularly the most luminous ones called fireballs or bolides, are spectacular physical processes that have fascinated mankind for centuries (Trigo-Rodriguez, 2022). These dazzling streaks of light occur when a meteoroid enters the Earth's atmosphere at hypersonic velocity, causing intense heating through repeated collisions with air molecules (Ceplecha et al., 1998; Silber et al., 2018; Trigo-Rodriguez, 2019). Meteors are formed due to the extreme heat produced by the interaction with the gaseous environment and the rising atmospheric pressure, which causes the meteoroid to undergo rapid vaporization, a process known as ablation. This ablation leads to the formation of a luminous trail composed of ionized gas and fragmented debris, which can be observed and recorded from the ground using optical devices. Current digital video imagery provides a sequential recording useful for obtaining complete light curves, high temporal and spatial resolution measurements, and spectra (Hughes, 1978; Koschny et al., 2017; Subasinghe et al., 2017; Drolshagen et al., 2021).
Traditionally, two classes of meteors are considered: those that are expected to occur in a specific period of the year because they are associated with meteoroid streams, and those that are sporadic and have no discernible periodic pattern (Wiegert and Brown, 2004; Jopek and Williams, 2013; Dumitru et al., 2017; Jenniskens, 2017; Vaubaillon et al., 2019; Pena-Asensio et al., 2022, 2023). Although showers exhibit regular activity, sporadic meteors require constant sky monitoring to quantify the meteoroid flux and properties of the different sources (Trigo-Rodriguez and Blum, 2022).
Meteors can provide valuable information about the composition, dynamics, and origin of our Solar System (Koschny et al., 2019). By analyzing the physical and chemical properties of meteorites, which are pieces of meteoroids that survive their journey through the Earth's atmosphere and land on the surface, scientists can gain insight into the formation and evolution of comets and asteroids, as they provide information about the age of the Solar System, the composition of the early solar nebula, and the processes that led to the formation of the planets (Bottke et al., 2002; Lauretta and McSween, 2006).
The growing interest in meteoritics has driven an increase in the number of video meteor detection networks around the world (Ceplecha, 1987; Ceplecha et al., 1998; Koten et al., 2019; Colas et al., 2020). Comprised of strategically placed stations equipped with cameras and other sensors, these networks are designed to monitor atmospheric volumes with clear views of the sky, aiming to maximize the number of recorded meteors within common observing fields. A noteworthy trend in recent years has been the rise of pro-am collaborations in this field, involving professional scientists, amateur astronomers, and cit
zen scientists working together to collect valuable data. These collaborations have significantly expanded the reach of meteor networks, enabling the recording of events from diverse locations and perspectives.
However, with the increasing number of detection stations, the accumulation of video and image data has also surged. Consequently, this data influx has created a bottleneck in processing and analysis, as traditional manual methods prove to be excessively time-consuming and resource-intensive. To deal with these issues, many networks are embracing automation as a means to efficiently handle the significant volumes of generated data (Molau, 2001; Spurny et al., 2007; Gural and Segon, 2009; Brown et al., 2010; Gural, 2011; Weryk et al., 2013; Howie et al., 2017; Suk and Simberova, 2017; Nikolic, 2019; Pena-Asensio et al., 2021a,b; Vida et al., 2021). These automated approaches allow meteor scientists to analyze and interpret meteor data faster and more efficiently than ever before, helping to uncover new insights into meteor behavior and properties.
The detection of luminous sources moving in the sky is relatively easy to solve as the camera control software only needs to store and overwrite the last few minutes of recording, and in the event of a sudden increase in illumination, permanently save this data. However, the trigger threshold must be carefully calibrated to avoid missing any meteors while minimizing the number of false positives. Defining this cut-off is complex as it can vary depending on a number of factors, including general lighting conditions, which need to be updated periodically to consider specific dusk and dawn illumination conditions or the presence of the Moon.
The pipelines that attempt to automate the detection and tracking of meteors face a difficult task because meteors are virtually random phenomena and can occur in a variety of ways due to impact geometry, variable velocity, size, shape, composition, viewing angle, sky conditions (clouds or illumination), etc. In addition, meteors must be distinguished from false positives caused by satellites, airplanes, helicopters, drones, birds, lightning, or artificial light sources. The combination of possible characteristics that meteors can exhibit makes it difficult to define fixed parameters that work in all cases. As a result, many networks still rely on human experts to manually review the footage and identify/classify meteors. However, human operators can occasionally make errors, particularly when artificial events cause confusion or ambiguity. However, there are also networks that use fully automated approaches based on traditional computer vision techniques, such as image processing algorithms with fixed instructions, e.g. CAMS (Jenniskens et al., 2011), SonotaCo (SonotaCo, 2016), or EDMOND (Kornos et al., 2014). Some of the detection pipelines currently in use are _MetRec_, _MeteorScan_, and _UFOCapture_; an overview of their capabilities is given in Molau and Gural (2005). These automated approaches show a high percentage of events with suspicious calculated results due to their reliance on fixed parameters that may not be appropriate for all scenarios (Hajdukova et al., 2020).
Consequently, addressing the challenges of meteor monitoring requires the adoption of artificial intelligence techniques. In this paper, we delve into the utilization of new methodologies for meteor classification and processing tasks. Specifically, we investigate how Machine Learning (ML) approaches can effectively enhance the accuracy and efficiency of automated pipelines, given the massive volume of data generated by meteor networks. We present a fully automated pipeline leveraging Convolutional Neural Networks (CNNs) for meteor detection and tracking using transfer learning and the Gradient-weighted Class Activation Mapping (Grad-CAM) technique.
## 2 Artificial intelligence for meteor detection
Advances in computer technology and hardware performance have fueled the remarkable progress of ML, particularly artificial neural networks with multiple layers, which are classified as deep learning. Neural networks have become increasingly popular in various domains due to their exceptional performance in image classification and recognition. Among them, CNNs have gained popularity for their fault tolerance and self-learning capabilities through multi-layer feedforward networks with a convoluted structure (Gu et al., 2018). They can handle complex environments and unclear background problems with significantly better generalization ability compared to other methods. A typical CNN architecture includes an input layer, multiple convolutional layers, pooling layers, a fully connected layer, and an output layer. CNNs can be used for both supervised and unsupervised learning, and are applied in diverse fields such as computer vision, natural language processing, and others (Hastie et al., 2001).
In the context of meteor monitoring, current research efforts using ML focus on two main objectives. First, some studies concentrate on determining the presence of a meteor in a given event, with the goal of distinguishing meteoric events from non-meteoric phenomena. Alternatively, other works rely on ML for meteor tracking, facilitating the accurate localization and monitoring of meteoroids throughout their bright atmospheric trajectory. The primary challenge is to effectively distinguish false positives caused by non-meteor objects such as airplanes, birds, and insects, or atmospheric conditions (e.g. clouds). These innovative approaches are being increasingly employed within meteor networks worldwide, including the Global Meteor Network (GMN) (Gural, 2019), AllSky7 Fireball Network Germany (FNG1), Meteorite Orbits Reconstruction by Optical Imaging (MOROI) (Nedelcu et al., 2018), Canadian Automated Meteor Observatory (CAMO) (Weryk et al., 2013), Cameras for All-sky Meteor Surveillance (CAMS) (Jenniskens et al., 2011), EXOSS meteor network2, and Meteor Automatic Imager and Analyzer (MAIA) (Vitek et al., 2011).
Footnote 1: [https://allsky7.net/](https://allsky7.net/)
Footnote 2: [https://exoss.org/](https://exoss.org/)
In image classification, it is customary to use transfer learning techniques with pre-trained models of CNNs (Sennlaub et al., 2022; Marsola and Lorena, 2019; Galindo and Lorena, 2018). These methods allow inheriting the ability to detect objects from those pre-trained models, which need to be retrained on meteors. With this methodology, optimal results are
achieved with a smaller amount of training data compared to starting from an uninitialized model. As underlined in Galindo and Lorena (2018), the best results are achieved when the proper pre-training dataset is selected. They compared the performance of ImageNet and Fashion-MNIST (Xiao et al., 2017) with fine-tuning, concluding that the latter is the most optimal as it is already trained to work with black and white images. They also checked whether the CNN could distinguish meteors if the image was previously tweaked (slightly zoomed, rotated, or flipped). The results showed that these transformations could produce unrealistic apparent trajectories and worsen the classification. In order to solve these types of problems, Ganju et al. (2023) use a windowing technique to create new frames from existing ones. With this, all meteor detections would have the same number of frames, easing the subsequent analysis. Cecil and Campbell-Brown (2020) compare different combinations of image processing techniques, such as convolutions and max-pools.
Even though most CNN meteor detection algorithms have been trained to reach a satisfying prediction percentage (\(>\) 99%), particularly when considering large sample sizes of more than \(\sim\)10,000 events, some anomalies are still misclassified as meteors for small datasets. The next step in meteor detection algorithms is to consider the intrinsic properties of meteors on camera images to discard these misleading anomalies. Additionally, the main drawback of the current meteor tracking algorithms is the runtime required to analyze high-definition 1080p video images. However, they perform well when dealing with small, low-resolution video images.
Beyond CNNs, other ML techniques are often used. It is the case of Recurrent Neural Networks (RNN), Gradient Boost (GB), or Random Forest (RF), which can also be used as complementary analysis tools (Gural, 2019; Anghel et al., 2022). Temporal resolution can be introduced in the analysis by using other networks such as Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Temporal Convolutional Network (TCN), Time Delay Neural Network (TDNN), Support Vector Machine (SVM), or VGG16 (Siladi et al., 2015; Simonyan and Zisserman, 2015; Sennlaub et al., 2022), supporting 16 layers. A particular additional technique is the discrete pulse transform (DPT) (Rohwer and Laurie, 2006), in which image signals are decomposed in pulses. Vitek and Nasyrova (2019) introduced DPT to characterize the number of pulses related to the meteor compared to those of stars. Even though it is not specified if the results are better than other works, they do underline that using DPT is faster than other methods used in the MAIA data.
Sennlaub et al. (2022) also classified those false positives based on their origin. They did not achieve solid statements but pointed out similarities among false positive subgroups. As sketched in Le Lan and Dinh (2021) a future algorithm to classify the false positives would include prior knowledge of each subgroup. These could also be expanded to classify the re-entry of artificial space debris. They could also include a cross-matching identification between different stations as a method of improving the overall accuracy (Anghel et al., 2023).
Table 1 provides a comprehensive summary of the outcomes achieved in these works accompanied by the data source, the number of samples, the technique used, and the results such as F1 score and accuracy. The F1 score is a performance metric that combines precision and recall to measure the accuracy of a classification model. Precision represents the proportion of true positive predictions out of all positive predictions, while recall represents the proportion of true positive predictions out of all actual positive instances. The F1 score considers both precision and recall, making it useful when the dataset is imbalanced or when false positives and false negatives have different consequences. Accuracy is a common evaluation metric used to measure the overall correctness of a classification model. It calculates the proportion of correctly predicted instances (both true positives and true negatives) out of the total number of instances in the dataset.
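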
## 3 CNN, transfer learning, and Grad-CAM
Our work aims to use ML techniques to achieve two main goals: detecting the presence of meteors in images and tracking the motion of meteors in the field of view. To achieve this, and based on the review of the scientific literature, we have decided to develop a CNN model that classifies images into two groups, "Meteors" and "No-meteors", using transfer learning. In addition, we implemented a novel application of Grad-CAM to track the coordinates of the meteor's motion.
### Detection
To build our model, we chose to use ResNet-34, which is a 34-layer pre-trained CNN from the Residual Network family (He et al., 2015). This allowed us to quickly specialize the model for our specific use case using transfer learning techniques with a small dataset, rapidly inheriting all detection skills already learned by the network. ResNet-34 mainly consists of an input layer, convolutional layers, residual blocks, shortcut connections, downsampling, global average pooling, and fully connected layers. One of the key elements of this network is the residual building block, which is its basic structural unit. As shown in Figure 1, the residual building block consists of several convolutional layers (Conv), batch normalizations (BN), a rectified linear unit (ReLU) activation function, and a shortcut connection. This block is used for all 34 layers of ResNet-34, as depicted in Figure 2. The output of the residual block is given by the formula \(y=F(x)+x\), where \(F\) is the residual function and \(x\) and \(y\) are the input and output of the residual function, respectively. The entire residual network is composed of the first convolutional layer and multiple basic blocks, making it a highly effective and efficient deep learning architecture for image recognition tasks.
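As an illustration of the residual block described above, a minimal PyTorch sketch of a standard basic block with an identity shortcut is given below; this is a generic reconstruction following the ResNet design, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual building block: y = F(x) + x with F = Conv-BN-ReLU-Conv-BN."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # the shortcut connection adds the input back

# Sanity check on a dummy feature map: the spatial shape is preserved.
print(BasicBlock(64)(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```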
For training and testing the model, we used a dataset of 982 images of meteors detected by optical devices of the Spanish Meteor Network (SPMN) network stations (Trigo-Rodriguez et al., 2006), along with 56,285 images without meteor detection collected over the year 2021, particularly from the Pujalt observatory. To balance the two groups, we generated a
dataset of 982 images with meteors and 1,050 without detections for training. To ensure reliable model performance, a portion of these images, specifically 20%, is allocated for validation, which is utilized to evaluate the training progress. From this dataset, 300 images were specifically set aside for testing purposes, 150 from the meteor class and 150 from the no-meteor. The test set serves as an independent dataset to evaluate the final performance of the trained model after the completion of the training process.
The dataset consisted of grayscale long exposure (30 seconds) images that were pre-processed to enhance the meteor trail and remove static visual elements from the background by subtracting consecutive images. This included converting the images to black and white, resizing them to 400x400, and subtracting successive images to remove the background. To facilitate the generalization of the model and reduce overfitting, data augmentation techniques were used during the transfer learning process. Specifically, each batch of images received by the CNN during the 35 epochs of training was modified with geometric transformations such as randomly flipping, cropping, rotating, and translating the images, or applying lighting modifications.
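A minimal sketch of this preprocessing (grayscale loading, resizing to 400×400, and subtraction of consecutive long-exposure frames) could look as follows; the file names are placeholders and the exact implementation used in the pipeline may differ.

```python
import cv2

def preprocess_pair(path_prev, path_curr, size=(400, 400)):
    """Return a background-subtracted, resized grayscale frame."""
    prev = cv2.imread(path_prev, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(path_curr, cv2.IMREAD_GRAYSCALE)
    prev = cv2.resize(prev, size)
    curr = cv2.resize(curr, size)
    # Subtracting the previous exposure removes static elements (stars, buildings,
    # clouds), while the transient meteor trail survives in the difference image.
    return cv2.absdiff(curr, prev)

frame = preprocess_pair("exposure_0001.png", "exposure_0002.png")  # placeholder paths
```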
### Tracking
The final layer of our CNN exhibits activations corresponding to neurons that are specifically triggered when a meteor is detected in an image. While such activations provide initial utility, leveraging the subsequent classification layer's weights in conjunction with these activations further enhances their significance. It is important to note that the classification layer possesses a comprehensive understanding of the meteor classification task, enabling it to assign appropriate weights to the activated neurons. Hence, certain previously activated neurons may be deemed less influential in the final classification decision. This is the foundation of the so-called Class Activation Mapping (CAM) technique (Zhou et al., 2015).
The CAM technique is usually employed to generate a heatmap for a given class (class "meteor" in our case). This technique involves capturing the activations from the last convolutional layer of the CNN and multiplying them by the corresponding weights from the last fully connected layer responsible for the classification task. By performing this operation, the CAM technique effectively highlights the areas within the image that contribute the most to the meteor classification, that
\begin{table}
\begin{tabular}{c c c} \hline \hline Layer name & Output size & 34-layer \\ \hline Conv1 & 112\(\times\)112 & \(7\times 7\), 64, _stride 2_ \\ \hline Conv2\_x & 56\(\times\)56 & \(3\times 3\) _max pool, stride 2_; \(\left[3\times 3,64;\;3\times 3,64\right]\times 3\) \\ \hline Conv3\_x & 28\(\times\)28 & \(\left[3\times 3,128;\;3\times 3,128\right]\times 4\) \\ \hline Conv4\_x & 14\(\times\)14 & \(\left[3\times 3,256;\;3\times 3,256\right]\times 6\) \\ \hline Conv5\_x & 7\(\times\)7 & \(\left[3\times 3,512;\;3\times 3,512\right]\times 3\) \\ \hline & 1\(\times\)1 & _average pool, fully connected, softmax_ \\ \hline \hline \end{tabular}
\end{table}
is, the Region of Interest (ROI). Figure 2 illustrates the overall procedures of this method.
However, in order to further enhance the performance of our model, we opted to incorporate an advanced variant of CAM known as Grad-CAM (Selvaraju et al., 2016). Grad-CAM builds upon the CAM methodology by integrating gradient information instead of the weights of the classification layer. This provides a more fine-grained localization of important regions within an image. Grad-CAM computes the gradients of the target class of a specific layer with respect to the activations of the same layer. By multiplying the activations from the last convolutional layer with the average of the gradients across the feature map of that layer, Grad-CAM obtains the importance weights for the activation maps. The weighted combination of the activation maps produces the final heatmap, which visually highlights the critical regions within the input image for the classification of the target class.
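A compact, generic PyTorch sketch of this Grad-CAM computation is shown below; the layer choice, input shape and class index follow torchvision conventions and are assumptions for illustration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet34

model = resnet34(weights=None).eval()          # stand-in for the trained meteor classifier
acts = {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))  # last conv stage

x = torch.randn(1, 3, 224, 224)                # placeholder input image
score = model(x)[0, 1]                         # logit of the assumed target ("meteor") class
grads = torch.autograd.grad(score, acts["a"])[0]            # d(score)/d(activations)

weights = grads.mean(dim=(2, 3), keepdim=True)              # channel-wise average of gradients
cam = F.relu((weights * acts["a"]).sum(dim=1)).squeeze()    # weighted sum -> coarse heatmap
cam = cam / (cam.max() + 1e-8)                              # normalised Grad-CAM map (7x7)
```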
To capture finer details for properly tracking the meteor motion, we focus on the activations of the initial convolutional layer, restricting our attention to activations falling within the defined ROI delineated by the Grad-CAM. Within the CNN, the activations of the earliest layers offer a more detailed resolution map (56x56). However, these shallow layers are more prone to noise and less reliable in accurately identifying the ROI related to meteor detection. To tackle these difficulties, we apply a noise reduction strategy by selectively retaining only the cells that align with the cells from the higher precision but lower resolution Grad-CAM (7x7) with a non-zero value. By doing so, we filter out noisy activations and focus on the cells that have a meaningful impact on the meteor classification. Subsequently, we extract the cells with the maximum activation values from the refined high-resolution activation map. By calculating the average position among these selected cells, we are able to project a single point onto the original image. This refinement process significantly enhances the accuracy of meteor detection by precisely pinpointing the location of meteors within the frames generated by our model. Figure 3 illustrates the described meteor detection and tracking process.
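An illustrative reconstruction of this ROI filtering and point extraction is sketched below, with array shapes matching the 7×7 Grad-CAM and 56×56 first-layer activation maps mentioned above; averaging the top few activated cells is a simplification of the procedure described in the text, and the arrays here are random placeholders.

```python
import numpy as np

def meteor_pixel(cam_7x7, act_56x56, image_size=400, top_k=5):
    """Project the most activated first-layer cells inside the Grad-CAM ROI
    back to pixel coordinates of the original image."""
    # Upsample the coarse Grad-CAM mask to the fine 56x56 grid (8x nearest neighbour).
    roi_mask = np.kron((cam_7x7 > 0).astype(float), np.ones((8, 8))) > 0
    act = np.where(roi_mask, act_56x56, 0.0)          # keep only cells inside the ROI
    # Average the positions of the top-k most activated surviving cells.
    idx = np.argsort(act, axis=None)[-top_k:]
    rows, cols = np.unravel_index(idx, act.shape)
    scale = image_size / act.shape[0]
    return (rows.mean() + 0.5) * scale, (cols.mean() + 0.5) * scale  # (y, x) in pixels

y, x = meteor_pixel(np.random.rand(7, 7), np.random.rand(56, 56))
```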
Note that we do not multiply the activations of the initial layer with the weights of the classification layer because the spatial and semantic gap between the initial convolutional layer and the classification layer limits the effectiveness of such an approach. Although Grad-CAM allows the analysis of any layer in the network, we observed that focusing just on the activations from the initial convolutional layer within the ROI calculated using Grad-CAM on the last layer yields good results.
## 4 Training and results
Figure 4 shows the evolution of the loss function, a metric that measures the deviation between predicted and actual values, as a function of the number of batches processed during training. The primary goal during training is to minimize this function. The training set consists of the images used to train the CNN model, while the validation set consists of a randomly selected subset of 150 images reserved for evaluating the model's performance and generalization ability. By evaluating the loss function on both sets, we monitor the progress of the model and detect any signs of overfitting or underfitting.
A batch refers to a group of images that are processed together during each iteration of the training algorithm. The total number of batches processed can be calculated using the formula \(Batches=N*(I/BS)\), where \(N\) is the number of epochs, which is the number of times the entire training data set is passed through the network during training. \(I\) denotes the total number of images in the training dataset, including both meteor and non-meteor images. Finally, \(BS\) refers to the batch size, which represents the number of images fed to the network in each training iteration. For this study, we used a batch size of 32.
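Plugging in the values used here reproduces the figures quoted below; the training-set size of roughly 1,385 images is a hedged estimate implied by the dataset splits described in the Detection subsection above, since it is not stated explicitly.

```python
N, BS, I = 35, 32, 1385            # epochs, batch size, approximate training images (assumed)
batches = round(N * I / BS)        # ~1,515 batches in total
images_seen = batches * BS         # ~48,480 augmented images presented to the network
print(batches, images_seen)
```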
By substituting these values, we determined that a total of 1,515 batches were processed during the training process. This corresponds to the maximum value observed in Figure 4, indicating the completion of all batches. Understanding the relationship between the loss function, the batches processed, and the progress of the training and validation sets provides valuable insight into the training dynamics and performance evaluation of the neural network model. As every batch comprises 32 images and is subjected to various random transformations, the overall training process, including data augmentation, incorporates a varied array of 48,480 distinct images. Table 3 compiles the evolution of different metrics during the training process.
The next step involves defining the crucial hyperparameter for the training process: the learning rate. By evaluating the loss function values across various learning rates, we can identify the region of sustained and substantial loss reduction, disregarding transient peaks and irregular drops as they do not represent reliable trends. Once this region is identified, we pinpoint the midpoint of the steepest descent line, which corresponds to the most significant loss reduction. This specific learning rate is then selected for the subsequent training procedure. For our specific study, we determined a learning rate of 0.003 to be the optimal choice.
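Since the pipeline relies on the _fast.ai_ library (see Acknowledgments), a minimal training script including the learning-rate finder could resemble the sketch below; the folder layout, augmentation parameters and metric choices are assumptions rather than the authors' exact configuration.

```python
from fastai.vision.all import *

# Assumed layout: one sub-folder per class ("meteor", "no_meteor") with preprocessed frames.
dls = ImageDataLoaders.from_folder(
    "dataset/", valid_pct=0.2, bs=32,
    item_tfms=Resize(400),
    batch_tfms=aug_transforms(do_flip=True, max_rotate=10.0, max_lighting=0.2),
)

learn = vision_learner(dls, resnet34, metrics=[accuracy, Precision(), Recall(), F1Score()])
learn.lr_find()                     # pick the learning rate from the steepest loss descent
learn.fine_tune(35, base_lr=3e-3)   # transfer learning with the 0.003 rate quoted in the text
```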
The training phase of our pipeline ends with an F1 score of 0.94, indicating the high precision and recall achieved during the training process. We then evaluated the performance of the trained model on the test dataset to assess its generalization capabilities. The evaluation results show that our model achieved an accuracy of 0.96 on the test dataset. This accuracy metric indicates the model's ability to correctly classify meteor
Figure 2: The general process of class activation mapping method. Adapted from Jiang et al. (2021).
and non-meteor images with a high degree of accuracy. Furthermore, when specifically considering the meteor class, our model achieved a precision of 0.98.
To further analyze the performance of the model, a confusion matrix was constructed as it provides insight into the classification performance by showing the distribution of the predicted labels against the true labels. The confusion matrix for our two labels "Meteor" and "No-Meteor" is shown in Figure 5.
In the confusion matrix, the rows correspond to the true labels, while the columns represent the predicted labels. The matrix values indicate the proportion of images belonging to each category. The confusion matrix shows that 47% of the images were correctly classified as meteors, while 3.3% of the images were incorrectly classified as non-meteor images. Furthermore, 1% of the images were incorrectly classified as meteor images, while 49% were correctly identified as non-meteor images. The high accuracy achieved by the pipeline demonstrates its robustness in accurately detecting and classifying meteor images. The low misclassification rates for both meteor and non-meteor classes indicate the effectiveness of the trained CNNs in distinguishing between these classes, then minimizing the required human time for these time-consuming tasks.
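As a consistency check, the headline metrics follow directly from the normalized confusion-matrix entries quoted above:

```python
tp, fn, fp, tn = 0.47, 0.033, 0.01, 0.49   # fractions read off the confusion matrix
precision = tp / (tp + fp)                 # ~0.98 for the meteor class
recall = tp / (tp + fn)                    # ~0.93
accuracy = tp + tn                         # ~0.96 on the test set
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 2), round(recall, 2), round(accuracy, 2), round(f1, 2))
```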
It is worth noting that the proposed model shows the ability to generalize and accurately detect meteor trails even in color frames that have not undergone the previous frame subtraction process. This is particularly noteworthy because these frames may contain stationary elements such as clouds, the Moon, buildings, or other obstructions that can cause interference and affect the accuracy of meteor detection. Despite these challenges, the model can effectively distinguish and identify meteor trails, providing robust and reliable results. This further validates the model's ability to operate under real-world conditions, making it a valuable tool for meteor scientists and enthusiasts alike.
However, this very capability in detecting meteor trails renders it susceptible to satellite misidentifications, as they often exhibit a similar trace when reflecting the sunlight. An instance of such incorrect classification is illustrated in the top panel of Figure 6 with a transit of a _SpaceX Starlink_ satellite train, necessitating retraining the network to encompass this new class. Conversely, the bottom panel shows an undetected meteor, possibly due to its proximity to the full moon and dimming by a cloud-covered field of view.
Figure 4: Training and validation loss during model training.
\begin{table}
\begin{tabular}{c c c c c} \hline Epoch & Error & Accuracy & Precision & F1 \\ \hline
\hline \end{tabular}
\end{table}
Table 3: Metrics evolution during the training process, including for each epoch the error rate, accuracy, precision, recall, and F1 score.
Figure 5: Confusion matrix of the trained model with normalized values in parentheses.
We used data augmentation techniques during the training process, which is a common strategy that helps prevent overfitting and improves the model's ability to generalize well to unseen data. However, it is worth noting that Xiao et al. (2017) suggested data augmentation could potentially degrade model performance in certain cases. Therefore, it is possible that our use of data augmentation in the training process may have resulted in slightly lower performance compared to some of the models reported in Table 1. Despite this slight performance difference, our pipeline still demonstrates a high level of accuracy and efficiency in meteor detection and classification. The inclusion of data augmentation techniques is crucial to promote better generalization and robustness in the model, even though it may have had a slight impact on the overall results compared to other models in the comparison.
We compared our results with those obtained by Galindo and Lorena (2018) and Marsola and Lorena (2019), who used datasets of similar size to ours. In this comparison, our methodology equals or outperforms existing approaches, demonstrating superior performance. Furthermore, our pipeline incorporates the added complexity of meteor tracking, which presents a significant challenge due to the smaller luminous trace in each frame. Meteor tracking enables the computation of the velocity curve, a key factor in both discriminating between artificial and natural objects and in determining the heliocentric orbit and the potential meteorite strewn field. Automating this process facilitates the extraction of orbital elements for hundreds or thousands of meteors detected nightly, providing valuable insights into both the sporadic meteoroid background flux and the characteristics of meteoroid streams.
In subsequent phases, we intend to enhance the pipeline by refining the balance of false positives, encompassing a diverse spectrum of potential false positive sources including satellites, planes, birds, bugs, and other light sources in the training process.
## 5 Conclusions
The implementation of automated detection software has led to a massive increase in the amount of data collected and reduced every year by meteor networks. However, the need for human oversight to filter out false positives and organize the records has created a bottleneck, and the traditional computer vision techniques implemented have limited performance due to the random and specific characteristics of each meteor event. We employed CNNs to address these challenges.
In our study, we used a dataset of 982 meteor images along with 1,050 images without meteors detected by SPMN stations in 2021 to train a CNN model. A transfer learning technique was applied, and Grad-CAM was used for accurate tracking. Our main results are as follows:
1) Our approach utilized ResNet-34, a deep learning architecture consisting of 34 pre-trained layers. By using pre-trained layers, we capitalized on the knowledge and representations gained from a large dataset during the initial training phase, resulting in improved model performance. In addition, data augmentation techniques were employed to facilitate the model's ability to accurately generalize and mitigate overfitting. The results achieved demonstrate a precision of 98% for meteor classification.
2) Grad-CAM was used to track the coordinates of the meteor within each image. This technique involves analyzing deeper layers of the neural network, which have higher accuracy but lower resolution. ROI information was extracted using the gradients of the last convolutional layer and then combined with activation information from the initial layer, which has higher resolution but lower accuracy. This fusion of information allows the identification of the most critical pixel, corresponding to the position of the meteor, and hence the precise localization of meteor positions within frames (a schematic sketch of this localization step is given after this list).
3) The high performance achieved by our pipeline underscores its robustness in precisely detecting and classifying meteor images. The success rate, even with a relatively small dataset, highlights the potential of our method to significantly reduce the workload of meteor scientists and station operators involved in meteor data analysis. This potential is further enhanced by one of the most notable advancements in our methodology: the novel use of Grad-CAM for meteor tracking in combination with the initial activation map.
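To make the localization step concrete, the following is a minimal PyTorch sketch of the idea, assuming a ResNet-34 backbone with a two-class (meteor / no meteor) head. The function and variable names are hypothetical, and the simple multiplicative fusion of the Grad-CAM map with the early, high-resolution activation map is only an illustrative stand-in for the exact fusion used in the pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet34

# Hypothetical two-class (meteor / no meteor) classifier built on a ResNet-34 backbone.
model = resnet34(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

activations = {}
def keep(name):
    def hook(module, inputs, output):
        activations[name] = output
    return hook

# High-resolution early feature map and low-resolution, semantically rich last stage.
model.relu.register_forward_hook(keep("early"))
model.layer4.register_forward_hook(keep("late"))

def locate_meteor(image, meteor_class=1):
    """Return the (row, col) of the most salient pixel for the meteor class."""
    score = model(image.unsqueeze(0))[0, meteor_class]
    # Grad-CAM: gradients of the class score w.r.t. the last convolutional stage,
    # global-average-pooled into per-channel weights.
    grads = torch.autograd.grad(score, activations["late"])[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["late"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    # Fuse with the (upsampled) early activation map to sharpen the localization.
    early = activations["early"].abs().mean(dim=1, keepdim=True)
    early = F.interpolate(early, size=image.shape[-2:], mode="bilinear", align_corners=False)
    fused = (cam * early)[0, 0]
    flat_idx = int(torch.argmax(fused))
    return divmod(flat_idx, fused.shape[-1])
```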
Figure 6: Top panel: False positive of a _SpaceX Starlink_ satellite track, as it exhibits characteristics similar to a meteor trail. Recording obtained from the Alpict SPMN station under the operation of Marc Corretég-Gilart. Bottom panel: False negative of the SPMN070523G superbolide recorded near the full moon with a cloudy sky. Recording obtained from the Bartolo-Castello SPMN station under the operation of Vicente Ibanez.
In summary, our study highlights the significant potential of applying ML techniques to meteor monitoring. It illustrates the effectiveness of CNNs and transfer learning in reducing false positives and correctly identifying meteors in images. By automating the meteor monitoring process, our pipeline increases the efficiency of meteor detection and tracking using the GradCAM technique. This, in turn, facilitates the study of meteoroid fluxes, aids in population characterization, and improves our ability to distinguish between meteorite-dropping events, thereby increasing fresh extraterrestrial material recovery rates.
## Acknowledgments
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 865657) for the project "Quantum Chemistry on Interstellar Grains" (QUANTUMGRAIN). JMT-R, EP-A, and PG-T acknowledge financial support from the project PID2021-128062NB-I00 funded by MCIN/AEI/10.13039/501100011033. AR acknowledges financial support from the FEDER/Ministerio de Ciencia e Innovacion - Agencia Estatal de Investigacion (PID2021-126427NB-I00, PI: AR). This work was also partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M. We thank all SPMN station operators and photographers whose continuous dedication has allowed the incessant recording of meteors over the Iberian Peninsula, the Balearic Islands, and the Canary Islands. We express our gratitude to David Puiggros-Figueras for his assistance in the CAM and Grad-CAM methodology. The following _Python_ packages were extensively used in the study: _Numpy_, _OpenCV_, _Matplotlib_, _PyTorch_, and _fast.ai_.
|
2304.14176 | Exploring the flavor structure of quarks and leptons with reinforcement
learning | We propose a method to explore the flavor structure of quarks and leptons
with reinforcement learning. As a concrete model, we utilize a basic
value-based algorithm for models with $U(1)$ flavor symmetry. By training
neural networks on the $U(1)$ charges of quarks and leptons, the agent finds 21
models to be consistent with experimentally measured masses and mixing angles
of quarks and leptons. In particular, an intrinsic value of normal ordering
tends to be larger than that of inverted ordering, and the normal ordering is
well fitted with the current experimental data in contrast to the inverted
ordering. A specific value of effective mass for the neutrinoless double beta
decay and a sizable leptonic CP violation induced by an angular component of
flavon field are predicted by autonomous behavior of the agent. Our findings
indicate that reinforcement learning can be a new method for
understanding the flavor structure. | Satsuki Nishimura, Coh Miyao, Hajime Otsuka | 2023-04-27T13:25:34Z | http://arxiv.org/abs/2304.14176v3 | # Exploring the flavor structure of quarks and leptons with reinforcement learning
###### Abstract
We propose a method to explore the flavor structure of quarks and leptons with reinforcement learning. As a concrete model, we utilize a basic value-based algorithm for models with \(U(1)\) flavor symmetry. By training neural networks on the \(U(1)\) charges of quarks and leptons, the agent finds 21 models to be consistent with experimentally measured masses and mixing angles of quarks and leptons. In particular, an intrinsic value of normal ordering tends to be larger than that of inverted ordering, and the normal ordering is well fitted with the current experimental data in contrast to the inverted ordering. A specific value of effective mass for the neutrinoless double beta decay and a sizable leptonic CP violation induced by an angular component of flavon field are predicted by autonomous behavior of the agent.
## 1 Introduction
The origin of the flavor structure of quarks and leptons is one of the unsolved problems in particle physics. To understand the peculiar pattern of fermion masses and mixings, flavor symmetries have been utilized to explain this flavor puzzle.1 An attractive feature of the flavor symmetry is that it may connect the bottom-up approach of flavor model building with the top-down approach based on an ultra-violet completion of the Standard Model such as the string theory. In this paper, we adopt a bottom-up approach to explore the flavor structure of quarks and leptons.
Footnote 1: See, e.g., Refs. [1; 2; 3; 4; 5; 6; 7; 8; 9; 10] for a review.
In most traditional approaches to address the flavor structure of quarks and leptons, one assumes a certain representation of quarks and leptons under the flavor symmetry among all possible configurations. Indeed, it will be difficult to exhaust all possible realistic flavor patterns from a broad theoretical landscape. For instance, in a global \(U(1)\) flavor symmetric model using the Froggatt-Nielsen (FN) mechanism [11], we have a degree of freedom of \(U(1)\) charge assignment of each quark and lepton. When we consider the flavor dependent \(U(1)\)
charges of the quarks \(q_{i}\) within the range \(-9\leq q_{i}\leq 9\), it results in \({\cal O}(10^{14})\) patterns of \(U(1)\) charges even for the quark sector. When we combine with the lepton sector, we are faced with the problem of doing a brute-force search over a higher-dimensional parameter space. This is a simple flavor model using the continuous flavor symmetry, but in general, it is difficult to find a realistic flavor pattern from a huge amount of possibilities in flavor models with discrete symmetries. Thus, it motivates us to apply recent machine learning techniques for an exhaustive search of flavor models.
In this paper, we focus on reinforcement learning (RL), one of the machine learning techniques, to explore such a huge landscape of flavor models. In the framework of RL, an agent autonomously discovers desirable behavior to solve given problems for which a systematic search is impossible. So far, such a technique has been utilized to find the parameter space of FN models with an emphasis on the quark sector [12], where only the experimental values of quark masses are efficiently reproduced. However, it is quite important to see whether one can reproduce the flavor structure of all the fermion masses and mixings. Throughout this paper, we assume the Type-I see-saw mechanism to realize active neutrino masses and large mixing angles in the lepton sector. We will utilize a basic value-based algorithm, where the neural network is trained by data given by an environment. To find the flavor structure of quarks and leptons efficiently, we set up the environment such that the inputs consist of the \(U(1)\) charges of quarks and leptons under the \(U(1)\) flavor symmetry, and the coefficients appearing in the Yukawa couplings are randomly fixed to \({\cal O}(1)\) real values. The outputs of the neural network are probabilities for the action determined by a policy. Here, the action of the agent is given by increasing or decreasing one of the \(U(1)\) charges by one, and the agent receives a reward (punishment) for this action when the fermion masses and mixings determined by the \(U(1)\) charges approach (deviate from) the experimental values. Specifically, the reward function is defined through an intrinsic value built from the fermion masses and the elements of the CKM and PMNS matrices, optimized over the vacuum expectation value (VEV) of the complex flavon field.
In addition to reproducing the experimental values, RL will provide new insights on the neutrino mass ordering and CP phase in the lepton sector. Note that a source of CP violation is assumed to be originating from the phase of complex flavon field. By training neural networks without specifying the neutrino mass orderings, RL can help to find whether the neutrinos are in the normal ordering or in the inverted ordering. From the results of trained network, we find that the normal ordering is statistically favored by the agent. Furthermore, the sizable Majorana CP phases and effective mass for the neutrinoless double beta decay are predicted around specific values.
This paper is organized as follows. After briefly reviewing RL with an emphasis on Deep Q-network in Sec. 2, we establish the FN model with RL in Sec. 3. We begin with the model building with RL by focusing on the quark sector in Sec. 4, and the training of the lepton sector is performed in Sec. 5. In particular, we analyze two scenarios for the neutrino sector. In Sec. 5.1, we implement the FN model with fixed neutrino mass ordering to the neural network, but the neutrino mass ordering is not specified in the analysis of
Sec. 5.2. Sec. 6 is devoted to the conclusion and discussion. In Appendix A, we list our finding \(U(1)\) charge assignment of quarks and leptons.
## 2 Reinforcement learning with deep Q-network
In this section, we briefly review RL with the Deep Q-Network (DQN) used in the analysis of this paper. For more details, see, e.g., Ref. [13]. RL is formulated in terms of an agent and an environment. At each time step, the agent observes the environment and takes some action. Depending on the change of the environment caused by the action, the agent receives rewards or penalties. By repeating this process and searching for actions that maximize the total reward, the agent is designed to exhibit autonomous behavior in the environment.
To determine the action, we utilize a neural network model. In a multi-layer perceptron, the \(n\)-th layer transforms an \(N_{n-1}\)-dimensional vector \(\vec{x}_{n-1}=(x_{n-1,1},x_{n-1,2},\cdots,x_{n-1,N_{n-1}})\) into an \(N_{n}\)-dimensional vector \(\vec{x}_{n}=(x_{n,1},x_{n,2},\cdots,x_{n,N_{n}})\):
\[x_{n,i}=h_{n}(w_{ij}^{n}x_{n-1,j}+b_{i}^{n}), \tag{1}\]
with \(h,w,b\) being the activation function, the weights, and the biases, respectively. In the analysis of this paper, we employ fully-connected layers. The DQN, one of the standard RL methods, is characterized by a Q network, a target network, and experience replay. In this paper, we consider neural networks whose output layer is a softmax layer. Note that the weights and biases of the Q network and the target network are generically different from each other.
RL using the DQN proceeds through the following 5 steps:
1. An agent observes the environment state \(s\), which is given as an input to the target neural network, as shown in Fig. 1. The target network (TN) gives the probability \(p\) as an output. Since we adopt the softmax layer defined by \[f:\mathbb{R}^{n}\rightarrow[0,1]^{n},\] (2) with \(f(\mathbf{x})_{i}=\frac{e^{x_{i}}}{\sum_{j=1}^{n}e^{x_{j}}}\), the output can be regarded as probabilities.
Figure 1: As a first step, the state \(s\) observed by the agent is given as input in the target network.
2. In the second step, the agent determines the _action_ \(\mathfrak{a}\), taking into account the probability \(p\) given by the first step. At the initial stage, the neural network cannot judge whether an action is appropriate. To let the agent acquire autonomous behavior, we adopt the \(\epsilon\)-greedy method, where the greedy action \(\mathfrak{b}\) is selected with probability \(1-\epsilon\) and a random action \(\mathfrak{c}\) is selected with probability \(\epsilon\) (see Fig. 2), that is, \[\mathfrak{a}=\left\{\begin{array}{ll}\mathfrak{b}&\left(\text{with}\,1-\epsilon\right)\\ \mathfrak{c}&\left(\text{with}\,\epsilon\right)\end{array}\right..\] (3) The number of actions per episode is specified by \(N_{\text{step}}\), and the agent repeats this step for \(N_{\text{ep}}\) episodes, as shown in Table 1. Note that the greedy action is determined by taking into account the probability \(p\) obtained in the first step. The value of \(\epsilon\) is chosen such that the agent gradually takes the greedy action; its explicit form is given by \[\epsilon=\epsilon_{0}r^{k-1},\] (4) with \(k=1,2,...,N_{\text{ep}}\). In the following analysis, we adopt \(\epsilon_{0}=1\) and \(r=0.99999\). Furthermore, we set a lower bound \(\epsilon_{\text{min}}=0.01\): if the value of \(\epsilon\) computed for a given \(k\) falls below \(\epsilon_{\text{min}}\), we set \(\epsilon=\epsilon_{\text{min}}\). This means that the agent gains a variety of experiences while \(\epsilon\) is large, and gradually takes more plausible actions as \(\epsilon\) decreases (a schematic sketch combining this step with the update steps 4 and 5 is given at the end of this section).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Step 1 & Step 2 & \(\cdots\) & Step \(N_{\text{step}}\) \\ \hline Episode 1 & \(s_{1}^{1}\) & \(s_{2}^{1}\) & \(\cdots\) & \(s_{N_{\text{step}}}^{1}\) \\ \hline Episode 2 & \(s_{1}^{2}\) & \(s_{2}^{2}\) & \(\cdots\) & \(s_{N_{\text{step}}}^{2}\) \\ \hline ⋮ & ⋮ & ⋮ & \(\cdots\) & ⋮ \\ \hline Episode \(N_{\text{ep}}\) & \(s_{1}^{N_{\text{ep}}}\) & \(s_{2}^{N_{\text{ep}}}\) & \(\cdots\) & \(s_{N_{\text{step}}}^{N_{\text{ep}}}\) \\ \hline \end{tabular}
\end{table}
Table 1: The environment states \(s\) are changed by the actions. The agent performs at most \(N_{\text{step}}\) step for one episode.
Figure 2: In the second step, the agent selects the action \(\mathfrak{a}\) through the \(\epsilon\)-greedy policy.
3. The state \(s\) is updated to \(s^{\prime}\) through the action \(\mathfrak{a}\). Depending on the states \(s^{\prime}\), the agent receives a reward \(\mathcal{R}\). In a third step, the transition \(e=\langle s,\mathfrak{a},s^{\prime},\mathcal{R}\rangle\) corresponding to trajectories of experience is stored in the replay buffer as seen in Fig. 3.
4. The fourth step consists of "experience replay" and the "stochastic gradient method". Experience replay extracts a mini-batch of transitions randomly sampled from the replay buffer, and the Q network is optimized using at most (batch size) \(\times\) (epoch number) transitions. The advantage of experience replay is twofold. First, the transitions in a batch are uncorrelated due to the random selection of past experiences. Second, each transition can be reused in the training because all the experience is stored in the replay buffer.
The Q network is updated by the stochastic gradient method where the mini-batch of transitions is used in the training data. When we denote outputs of the target network and Q network by \(y\) and \(\tilde{y}\), respectively, the weights and the biases are updated by
Figure 4: In the fourth step, we randomly pick up the transitions with batch size from the replay buffer, and the weights and the biases of the Q network are updated by the stochastic gradient method in terms of these transitions.
Figure 3: In the third step, the transition \(e=\langle s,\mathfrak{a},s^{\prime},\mathcal{R}\rangle\) is stored in the replay buffer.
minimizing a loss function \(L(y,\tilde{y})\). In this paper, we adopt the Huber function: \[L_{\rm Huber}(y,\tilde{y})=\left\{\begin{array}{ll}\frac{1}{2}(y_{i}-\tilde{y}_{i})^{2}&\quad\text{if}\,|y_{i}-\tilde{y}_{i}|\leq\delta\\ \delta\cdot|y_{i}-\tilde{y}_{i}|-\frac{1}{2}\delta^{2}&\quad\text{if}\,|y_{i}-\tilde{y}_{i}|>\delta\end{array}\right.,\] (5) with \(\delta=1\), which combines a mean squared error and a mean absolute error. Note that the training of the Q network is carried out at the end of each episode, which includes at most \(N_{\rm step}\) steps, as shown in Table 1.
5. In the framework of DQN, there are two neural networks: the Q network and the target network, with parameters \(\Theta=\{w,b\}\) and \(\Theta^{\prime}=\{w^{\prime},b^{\prime}\}\), respectively. Lastly, the parameters \(\Theta\) of the Q network are partially reflected in the parameters \(\Theta^{\prime}\) of the target network (see Fig. 5). Specifically, in the case of a soft update, this reflection proceeds as follows: \[\Theta^{\prime}\leftarrow(1-\alpha)\Theta^{\prime}+\alpha\Theta,\] (6) where \(\alpha\) is called the "learning rate". When \(\alpha\) is large, the stability of the learning is lost, while a small \(\alpha\) leads to slow learning. In this paper, we adopt \(\alpha=2.5\times 10^{-4}\).
## 3 Froggatt-Nielsen model with reinforcement learning
### The environment
In FN models, the hierarchical structure of fermion masses and the flavor structure are addressed by the global \(U(1)\) symmetry. For simplicity, we introduce only one complex scalar field (so-called flavon field), charged under \(U(1)\). The relevant Yukawa terms of quarks and leptons are given by
\[\mathcal{L} =y_{ij}^{u}\left(\frac{\phi}{M}\right)^{n_{ij}^{u}}\bar{Q}_{i}H^{c}u_{j}+y_{ij}^{d}\left(\frac{\phi}{M}\right)^{n_{ij}^{d}}\bar{Q}_{i}Hd_{j}+y_{ij}^{l}\left(\frac{\phi}{M}\right)^{n_{ij}^{l}}\bar{L}_{i}Hl_{j}\] \[+y_{ij}^{\nu}\left(\frac{\phi}{M}\right)^{n_{ij}^{\nu}}\bar{L}_{i}H^{c}N_{j}+\frac{y_{ij}^{N}}{2}\left(\frac{\phi}{M}\right)^{n_{ij}^{N}}M\bar{N}_{i}^{c}N_{j}+\text{h.c.}, \tag{3.1}\]
Figure 5: In the last step, the weights and the biases of the target network are updated, following the soft update with the learning rate \(\alpha\).
where \(\{Q_{i},u_{i},d_{i},L_{i},l_{i},N_{i},H\}\) denote the left-handed quarks, the right-handed up-type quarks, the right-handed down-type quarks, the left-handed leptons, the right-handed charged leptons, the right-handed neutrinos, and the SM Higgs doublet with \(H^{c}=i\sigma_{2}H^{*}\), respectively. Here, we assume three right-handed neutrinos and tiny neutrino masses are generated by Type-I seesaw mechanism where the parameter \(M\) is chosen as \(M=10^{15}\,\text{GeV}\) throughout the analysis of this paper, and the Yukawa couplings \(\{y^{u}_{ij},y^{d}_{ij},y^{l}_{ij},y^{\nu}_{ij},y^{N}_{ij}\}\) are \(\mathcal{O}(1)\) real coefficients. Since the SM fields and the flavon field are also charged under \(U(1)\), let us denote their \(U(1)\) charges by
\[\{q(Q_{i}),\,q(u_{i}),\,q(d_{i}),\,q(L_{i}),\,q(l_{i}),\,q(N_{i}),\,q(H),\,q( \phi)\}. \tag{3.2}\]
To be invariant under the \(U(1)\) symmetry, the integers \(n_{ij}\) satisfy the following relations:
\[\begin{split} n^{u}_{ij}&=-\frac{q(\bar{Q}_{i}H^{c }u_{j})}{q(\phi)}=-\frac{-q(Q_{i})-q(H)+q(u_{j})}{q(\phi)},\\ n^{d}_{ij}&=-\frac{q(\bar{Q}_{i}Hd_{j})}{q(\phi)} =-\frac{-q(Q_{i})+q(H)+q(d_{j})}{q(\phi)},\\ n^{l}_{ij}&=-\frac{q(\bar{L}_{i}Hl_{j})}{q(\phi)} =-\frac{-q(L_{i})+q(H)+q(l_{j})}{q(\phi)},\\ n^{\nu}_{ij}&=-\frac{q(\bar{L}_{i}H^{c}N_{j})}{q( \phi)}=-\frac{-q(L_{i})-q(H)+q(N_{j})}{q(\phi)},\\ n^{N}_{ij}&=-\frac{q(\bar{N}^{c}_{i}N_{j})}{q( \phi)}=-\frac{q(N_{i})+q(N_{j})}{q(\phi)},\end{split} \tag{3.3}\]
where \(n_{ij}\) are considered positive integers throughout this paper.2 Furthermore, we require the presence of Yukawa term \(\bar{Q}_{3}H^{c}u_{3}\), irrelevant to \(q(\phi)\):
Footnote 2: See, e.g., Ref.[14], for the possibility of negative integers by introducing vector-like fermions.
\[q(\bar{Q}_{3}H^{c}u_{3})=0\leftrightarrow q(H)=q(u_{3})-q(Q_{3}); \tag{3.4}\]
otherwise one cannot realize the value of top quark mass. Once \(\phi\) and \(H\) develop VEVs, \(\langle\phi\rangle=v_{\phi}\) and \(\langle H\rangle=v_{\text{EW}}=174\,\text{GeV}\), the Dirac mass matrices of quarks and leptons as well as the Majorana mass matrix are given by
\[\begin{split} m^{u}_{ij}&=y^{u}_{ij}\epsilon^{n^{u }_{ij}}v_{\text{EW}},\qquad m^{d}_{ij}=y^{d}_{ij}\epsilon^{n^{d}_{ij}}v_{\text{ EW}},\\ m^{l}_{ij}&=y^{l}_{ij}\epsilon^{n^{l}_{ij}}v_{\text{ EW}},\qquad m^{\nu}_{Dij}=y^{\nu}_{ij}\epsilon^{n^{\nu}_{ij}}v_{\text{EW}}, \qquad m^{N}_{ij}=My^{N}_{ij}\epsilon^{n^{N}_{ij}}.\end{split} \tag{3.5}\]
The light neutrino mass matrix is obtained by integrating out heavy right-handed neutrinos:
\[m^{\nu}_{ij}=-\left(m^{\nu}_{D}\cdot(m^{N})^{-1}\cdot(m^{\nu}_{D})^{T}\right)_{ij}. \tag{3.6}\]
The quark and lepton mass matrices are diagonalized as
\[\begin{split} m^{u}&=U^{u}\text{diag}(m^{u})(V^{u} )^{\dagger},\qquad m^{d}=U^{d}\text{diag}(m^{d})(V^{d})^{\dagger},\\ m^{l}&=U^{l}\text{diag}(m^{l})(V^{l})^{\dagger}, \qquad m^{\nu}=U^{\nu}\text{diag}(m^{\nu})(U^{\nu})^{T},\end{split} \tag{3.7}\]
and the flavor mixings are given by the difference between mass eigenstates and flavor eigenstates:
\[V_{\rm PMNS} =(U^{l})^{\dagger}U^{\nu}\] \[=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta_{\rm CP}}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{\rm CP}}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta_{\rm CP}}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{\rm CP}}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta_{\rm CP}}&c_{23}c_{13}\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&e^{i\frac{\alpha_{21}}{2}}&0\\ 0&0&e^{i\frac{\alpha_{31}}{2}}\end{pmatrix}, \tag{3.8}\]
with \(c_{ij}=\cos\theta_{ij}\) and \(s_{ij}=\sin\theta_{ij}\), which also holds for the CKM matrix \(V_{\rm CKM}=(U^{u})^{\dagger}U^{d}\) in the quark sector except the Majorana phases \(\{\alpha_{21},\alpha_{31}\}\). Since the quarks and the leptons are charged under \(U(1)\), the flavon VEV \(\langle\phi\rangle=v_{\phi}\) will lead to the flavor structure due to the smallness of \(|\epsilon|\):
\[\epsilon:=\frac{v_{\phi}}{M}. \tag{3.9}\]
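As a rough illustration of Eqs. (3.3)–(3.7), the following NumPy sketch builds the quark mass matrices from a given charge assignment and extracts the masses and the CKM matrix. The charges and VEV are taken from the benchmark of Table 5, while the \(\mathcal{O}(1)\) couplings are random placeholders rather than the optimized values.

```python
import numpy as np

# Illustrative quark-sector charges (cf. the benchmark of Table 5) and flavon VEV.
q_Q, q_u, q_d = np.array([9, 8, 6]), np.array([1, 3, 4]), np.array([6, 5, 5])
q_phi = 1
q_H = q_u[2] - q_Q[2]                      # Eq. (3.4): unsuppressed top Yukawa
v_EW = 174.0                               # GeV
eps = 0.181 * np.exp(-0.863j)              # flavon VEV in units of M

# Eq. (3.3): FN exponents (H^c couples to up-type quarks, H to down-type quarks).
n_u = (q_Q[:, None] + q_H - q_u[None, :]) // q_phi
n_d = (q_Q[:, None] - q_H - q_d[None, :]) // q_phi

rng = np.random.default_rng(0)
y_u = rng.normal(0.0, 0.25, (3, 3))        # placeholder O(1) couplings (cf. Fig. 6)
y_d = rng.normal(0.0, 0.25, (3, 3))

# Eq. (3.5): Dirac mass matrices; Eq. (3.7): SVD gives masses and unitary rotations.
m_u = y_u * eps ** n_u * v_EW
m_d = y_d * eps ** n_d * v_EW
U_u, s_u, _ = np.linalg.svd(m_u)
U_d, s_d, _ = np.linalg.svd(m_d)
V_CKM = U_u.conj().T @ U_d                 # CKM matrix, cf. Eq. (3.8)

print("up-type masses   [GeV]:", np.sort(s_u))
print("down-type masses [GeV]:", np.sort(s_d))
print("|V_CKM| =\n", np.abs(V_CKM))
```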
Recalling that the \(U(1)\) charge of the Higgs doublet is determined by Eq. (3.4), the flavor structure of quarks and leptons is specified by the following charge vector:
\[\mathcal{Q}_{a}:=\{q(Q_{i}),\,q(u_{i}),\,q(d_{i}),\,q(L_{i}),\,q(l_{i}),\,q(N_{i}),\,q(\phi)\}, \tag{3.10}\]
consisting of 19 elements. In the following analysis using RL, we restrict ourselves to the following range of \(U(1)\) charge:
\[-9\leq\mathcal{Q}_{a}\leq 9, \tag{3.11}\]
corresponding to total \(19^{19}\sim 10^{24}\) possibilities for the charge assignment.3 It will be a challenging issue to find a realistic flavor pattern by the brute force approach. Note that a generic \(U(1)\) charge of flavon will lead to the non-integer \(n_{ij}\); thereby we focus on \(q(\phi)=1\) or \(-1\) with 50% probability in the following analysis.
Footnote 3: In this counting, a permutation symmetry among the charge assignment is not taken into account, and we will not incorporate this effect for RL analysis.
### Neural Network
Based on the charge assignment \(\mathcal{Q}\), the agent selects the action \(\mathfrak{a}\) among all possible combinations. To determine the action, we utilize the neural network as shown in Table 2. The activation function \(h\) (in Eq.(1)) is chosen as a SELU function for hidden layers 1,2,3 and the softmax function (2) for the output layer. We employ the ADAM optimizer in TensorFlow [15]4, where the weights and biases are chosen to minimize the loss function given by the Huber function (5).
Footnote 4: We use the "gym" library proposed by OpenAI.
In the FN model, the flavor structure of quarks and leptons is determined by the charge vector \(\mathcal{Q}_{a}\), including total \(10^{24}\) possibilities for the charge assignment as pointed out before. When we focus on only the quark sector, the parameter spaces of \(U(1)\) charges reduce to \(19^{10}\sim 10^{12}\) possibilities. To achieve a highly efficient learning in a short time, it is better
to perform a separate training for the \(U(1)\) charge assignment of quarks and leptons. Note that only the flavon \(U(1)\) charge connects the quark sector with the lepton sector since the \(U(1)\) charge of the Higgs is determined by the charge of the third generation quarks, Eq. (3.4). As mentioned before, we focus on \(q(\phi)=1\) or \(-1\) with 50% probability in the following analysis. Thus, we first analyze the parameter space of quark \(U(1)\) charges by RL as will be discussed in detail in Sec. 4, and move to the lepton sector with fixed \(U(1)\) charge of Higgs fields as will be discussed in detail in Sec. 5.
The hyperparameters are set as follows: the episode number \(N_{\rm ep}=10^{5}\) for the quark sector and \(N_{\rm ep}=6\times 10^{4}\) for the lepton sector, the step number \(N_{\rm step}=32\), a batch size of 32, an epoch number of 32, and the learning rate \(\alpha=2.5\times 10^{-4}\). The hyperparameters of the \(\epsilon\)-greedy method are described in the previous section.
### Agent
To implement the FN model in the context of RL with DQN, we specify the following action \(\mathfrak{a}\) of the agent at each step:
\[\mathfrak{a}\,:\,\mathcal{Q}_{a}\rightarrow\mathcal{Q}_{a}\pm 1\,\,(a\in A), \tag{3.12}\]
where \(A\) corresponds to \(\{Q_{i},u_{i},d_{i},\phi\}\) in the analysis of Sec. 4 and \(\{L_{i},l_{i},N_{i},\phi\}\) in the analysis of Sec. 5. These two candidates of the action make the dimension of the output layer \(2A\) in Table 2. At the initial stage, the \(\mathcal{O}(1)\) coefficients in the Yukawa terms (3.1) are picked up from the Gaussian distribution with an average 0 and standard deviation 0.25 (see Fig. 6); after the training of the neural network introduced in the previous section, they are optimized to proper values by the Monte-Carlo simulation.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|} \hline layer & Input & Hidden 1 & Hidden 2 & Hidden 3 & Output \\ \hline Dimension & \(\mathbb{Z}^{A}\) & \(\mathbb{R}^{64}\) & \(\mathbb{R}^{64}\) & \(\mathbb{R}^{64}\) & \(\mathbb{R}^{2A}\) \\ \hline \end{tabular}
\end{table}
Table 2: In the neural network, the input is the charge assignment \(\mathcal{Q}_{a}\) with dimension \(A\), and the activation functions are the SELU function for hidden layers 1,2,3. Since we use the softmax function (2) for the output layer, the output with dimension \(2A\) is interpreted as probabilities. The dimension of the output layer is twice that of the input layer due to the action of the agent (3.12).
Figure 6: Distribution of \(\mathcal{O}(1)\) coefficients in the Yukawa terms (3.1).
Thus, once the charges are fixed, one can compare the masses and mixings of quarks or leptons given by the action \(\mathfrak{a}\) with the experimental values. Specifically, we define the intrinsic value:
\[\mathcal{V}(\mathcal{Q})=\left\{\begin{array}{ll}-\text{min}_{v_{\phi}}\big{[}\mathcal{M}_{\text{quark}}+\mathcal{C}\big{]}&\text{(used\,in\,Sec.\,4)}\\ -\big{[}\mathcal{M}_{\text{lepton}}+\mathcal{M}_{\text{neutrino}}+\mathcal{P}\big{]}&\text{(used\,in\,Sec.\,5)}\end{array}\right., \tag{3.13}\]
whose components will be defined below. Note that the flavon VEV is chosen to maximize the intrinsic value of the quark sector in Sec. 4; hence, there is no remaining flavon dependence in the intrinsic value of the lepton sector.
1. Quark and lepton masses: \(\mathcal{M}_{\text{quark}}\) (\(\mathcal{M}_{\text{lepton}}\)) consists of the ratio of the predicted quark (lepton) masses by the agent to the experimental values: \[\mathcal{M}_{\text{quark}}=\sum_{\alpha=u,d}E_{\alpha},\qquad\mathcal{M}_{\text{lepton}}=\sum_{\alpha=l}E_{\alpha},\] (3.14) with \[E_{\alpha}=\bigg{|}\text{log}_{10}\left(\frac{|m_{\alpha}|}{|m_{\alpha,\text{exp}}|}\right)\bigg{|}.\] (3.15) The experimental values are listed in Tables 3 and 4 for quarks and leptons, respectively.
2. Neutrino masses: Since the ordering of neutrino masses has not been confirmed yet, we search the neutrino structure in two cases: RL with fixed neutrino mass ordering in Sec. 5.1 and RL without specifying the neutrino mass ordering in Sec. 5.2. In each case, the intrinsic value relevant to the neutrino masses is defined as: \[\mathcal{M}_{\text{neutrino}}=\left\{\begin{array}{ll}\sum_{\alpha=\nu_{21},\nu_{31}}E_{\alpha}^{\nu}&\text{(normal\,ordering\,in\,Sec.\,5.1)}\\ \sum_{\alpha=\nu_{21},\nu_{32}}E_{\alpha}^{\nu}&\text{(inverted\,ordering\,in\,Sec.\,5.1)}\\ 0&\text{(unspecified\,mass\,ordering\,in\,Sec.\,5.2)}\end{array}\right.,\] (3.16) with \[E_{\alpha}^{\nu}=\bigg{|}\text{log}_{10}\left(\frac{|\Delta m_{\alpha}^{2}|}{|\Delta m_{\alpha,\text{exp}}^{2}|}\right)\bigg{|},\] (3.17) where the experimental values are listed in Table 4.
3. Mixing angles: In addition, the intrinsic value includes the information of quark mixings and lepton mixings in \(\mathcal{C}\) and \(\mathcal{P}\): \[\mathcal{C}=\sum_{i,j}E_{\mathcal{C}}^{ij},\qquad\mathcal{P}=\sum_{i,j}E_{\mathcal{P}}^{ij},\] (3.18)
with \[E_{\mathcal{C}}^{ij}=\bigg{|}\text{log}_{10}\left(\frac{|V_{\text{CKM}}^{ij}|}{|V_{\text{CKM,\,exp}}^{ij}|}\right)\bigg{|},\quad E_{\mathcal{P}}^{ij}=\bigg{|}\text{log}_{10}\left(\frac{|V_{\text{PMNS}}^{ij}|}{|V_{\text{PMNS,\,exp}}^{ij}|}\right)\bigg{|},\] (3.19) where \(E_{\mathcal{C}}^{ij}\) and \(E_{\mathcal{P}}^{ij}\) represent the ratio of the predicted quark and lepton mixings by the agent to the experimental values, respectively. From Tables 3 and 4, the CKM and PMNS matrices are of the form: \[|V_{\text{CKM,\,exp}}| =\left(\begin{array}{ccc}0.97435\pm 0.00016&0.22500\pm 0.00067&0.00369\pm 0.00011\\ 0.22486\pm 0.00067&0.97349\pm 0.00016&0.04182^{+0.00085}_{-0.00074}\\ 0.00857^{+0.00020}_{-0.00018}&0.04110^{+0.00083}_{-0.00072}&0.999118^{+0.000031}_{-0.000036}\end{array}\right),\] \[|V_{\text{PMNS,\,exp}}|_{3\sigma} =\left(\begin{array}{ccc}0.803\to 0.845&0.514\to 0.578&0.143\to 0.155\\ 0.244\to 0.498&0.502\to 0.693&0.632\to 0.768\\ 0.272\to 0.517&0.473\to 0.672&0.623\to 0.761\end{array}\right).\] (3.20)
\begin{table}
\begin{tabular}{|c||c|c||c|c|} \hline \multirow{2}{*}{Observables} & \multicolumn{2}{c||}{Normal Ordering (NO)} & \multicolumn{2}{c|}{Inverted Ordering (IO)} \\ \cline{2-5} & \(1\sigma\) range & \(3\sigma\) range & \(1\sigma\) range & \(3\sigma\) range \\ \hline \(\sin^{2}\theta_{12}\) & \(0.303^{+0.012}_{-0.011}\) & \(0.270\to 0.341\) & \(0.303^{+0.012}_{-0.011}\) & \(0.270\to 0.341\) \\ \hline \(\sin^{2}\theta_{13}\) & \(0.02225^{+0.00056}_{-0.00059}\) & \(0.02052\to 0.02398\) & \(0.02223^{+0.00058}_{-0.00058}\) & \(0.02048\to 0.02416\) \\ \hline \(\sin^{2}\theta_{23}\) & \(0.451^{+0.019}_{-0.014}\) & \(0.408\to 0.603\) & \(0.569^{+0.016}_{-0.016}\) & \(0.412\to 0.613\) \\ \hline \(\delta_{\text{CP}}/\pi\) & \(1.29^{+0.20}_{-0.14}\) & \(0.80\to 1.94\) & \(1.53^{+0.12}_{-0.16}\) & \(1.08\to 1.91\) \\ \hline \(\Delta m^{2}_{21}\) & \multirow{2}{*}{\(7.41^{+0.21}_{-0.20}\)} & \multirow{2}{*}{\(6.82\to 8.03\)} & \multirow{2}{*}{\(7.42^{+0.21}_{-0.20}\)} & \multirow{2}{*}{\(6.82\to 8.04\)} \\ \cline{1-1} \(10^{-5}\text{e}\text{V}^{2}\) & & & & \\ \hline \(\Delta m^{2}_{3l}\) & \multirow{2}{*}{\(2.507^{+0.026}_{-0.027}\)} & \multirow{2}{*}{\(2.427\to 2.590\)} & \multirow{2}{*}{\(-2.486^{+0.025}_{-0.028}\)} & \multirow{2}{*}{\(-2.570\to-2.406\)} \\ \cline{1-1} \(10^{-3}\text{e}\text{V}^{2}\) & & & & \\ \hline \(m_{e}/\text{MeV}\) & & & & \\ \hline \(m_{\mu}/\text{MeV}\) & & & & & \\ \hline \(m_{\tau}/\text{MeV}\) & & & & & \\ \hline \end{tabular}
\end{table}
Table 4: Experimental values for the lepton sector obtained from global analysis of the data, where \(\Delta m^{2}_{3l}\equiv\Delta m^{2}_{31}=m^{2}_{3}-m^{2}_{1}>0\) for NO and \(\Delta m^{2}_{3l}\equiv\Delta m^{2}_{32}=m^{2}_{3}-m^{2}_{2}<0\) for IO. Here, we use the data from Ref. [16] for charged lepton masses and NuFIT v5.2 results with Super-Kamiokande atmospheric data for the lepton mixing angles and CP phase [17].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(m_{u}/\text{MeV}\) & \(m_{d}/\text{MeV}\) & \(m_{s}/\text{MeV}\) & \(m_{c}/\text{GeV}\) & \(m_{b}/\text{GeV}\) & \(m_{t}/\text{GeV}\) \\ \hline \(2.16^{+0.49}_{-0.26}\) & \(4.67^{+0.48}_{-0.17}\) & \(93.4^{+8.6}_{-3.4}\) & \(1.27\pm 0.02\) & \(4.18^{+0.03}_{-0.02}\) & \(172.69\pm 0.30\) \\ \hline \hline \(s_{12}\) & \(s_{13}\) & \(s_{23}\) & \(\delta_{\text{CP}}\) & & \\ \hline \(0.22500\pm 0.00067\) & \(0.00369\pm 0.00011\) & \(0.04182^{+0.00085}_{-0.00074}\) & \(1.144\pm 0.027\) & & \\ \hline \end{tabular}
\end{table}
Table 3: Masses, mixing angles, and CP phase in the quark sector [16], where we show the top-quark mass from direct measurements.
The flavon VEV is defined to maximize the intrinsic value, and we search for the VEV within
\[0.01\leq|v_{\phi}|\leq 0.3,\qquad-\pi\leq\arg(v_{\phi})\leq\pi, \tag{3.21}\]
where the angular component of the flavon VEV determines the CP phase. A large intrinsic value indicates that the obtained charge assignment reproduces the experimental values well. Such charge assignments are called _terminal states_. Specifically, a terminal state is defined to satisfy the following requirement:
\[|\mathcal{V}(\mathcal{Q})|<V_{0},\qquad E_{\alpha},E_{\alpha}^{ \nu}<V_{1}\quad(\mathrm{for}\,\forall\alpha),\qquad E_{\mathcal{C},\mathcal{P} }^{ij}<V_{2}\quad(\mathrm{for}\,\forall i,j). \tag{3.22}\]
In this paper, we adopt \(V_{0}=10.0\), \(V_{1}=1.0\) and \(V_{2}=0.2\). Here, \(V_{1}=1.0\) (\(V_{2}=0.2\)) means that the ratio of the predicted fermion masses (mixings) to the observed masses (mixings) is considered within \(0.1\leq r_{\mathrm{mass}}\leq 10\) (\(0.63\leq r_{\mathrm{mixings}}\leq 1.58\)).
Let us denote the charge assignment \(\mathcal{Q}\) observed by the agent and \(\mathcal{Q}^{\prime}\) after the action \(\mathfrak{a}\). For the action of the agent \((\mathcal{Q},\mathfrak{a})\), we will give the reward \(\mathcal{R}\) in the following prescription:
1. Give the basic point \(\mathcal{R}_{\mathrm{base}}\), depending on the value of intrinsic value: \[\mathcal{R}_{\mathrm{base}}=\left\{\begin{array}{ll}\mathcal{ V}(\mathcal{Q}^{\prime})-\mathcal{V}(\mathcal{Q})&\quad\text{if}\;\mathcal{V}( \mathcal{Q}^{\prime})-\mathcal{V}(\mathcal{Q})>0\\ \mathcal{R}_{\mathrm{offset}}&\quad\text{if}\;\mathcal{V}(\mathcal{Q}^{\prime})- \mathcal{V}(\mathcal{Q})\leq 0\end{array}\right.,\] (3.23) where \(R_{\mathrm{offset}}\) corresponds to a penalty, chosen as \(R_{\mathrm{offset}}=-10\).
2. When the \(\mathcal{Q}^{\prime}\) lies outside \(-9\leq\mathcal{Q}^{\prime}\leq 9\) or the flavon charge satisfies \(q(\phi)=0\), we give the penalty \(\mathcal{R}_{\mathrm{offset}}\) and the environment comes back to the original charge assignment \(\mathcal{Q}\).
3. When the \(\mathcal{Q}^{\prime}\) is turned out to be a terminal state, we give the bonus point \(\mathcal{R}_{\mathrm{term}}\), chosen as \(R_{\mathrm{term}}=100\).
4. Summing up the above points, we define the reward \(\mathcal{R}(\mathcal{Q},\mathfrak{a})\).
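A minimal sketch of this reward prescription for the quark sector is given below. It assumes a hypothetical function `predict(charges, v_phi)` returning the model's quark masses and CKM matrix (e.g. via the construction of Sec. 3.1), and replaces the maximization over the flavon VEV of Eq. (3.21) by a crude random scan.

```python
import numpy as np

# Experimental inputs from Table 3: quark masses (GeV) and |V_CKM| central values.
M_EXP = np.array([2.16e-3, 1.27, 172.69, 4.67e-3, 0.0934, 4.18])   # u, c, t, d, s, b
V_EXP = np.array([[0.97435, 0.22500, 0.00369],
                  [0.22486, 0.97349, 0.04182],
                  [0.00857, 0.04110, 0.999118]])
V0, V1, V2 = 10.0, 1.0, 0.2            # terminal-state thresholds of Eq. (3.22)
R_OFFSET, R_TERM = -10.0, 100.0        # penalty and terminal bonus of Sec. 3.3

def intrinsic_value(charges, predict, n_vev=200, rng=np.random.default_rng(1)):
    """Quark-sector intrinsic value of Eq. (3.13), scanned over the flavon VEV, Eq. (3.21)."""
    best, best_E = -np.inf, None
    for _ in range(n_vev):
        v_phi = rng.uniform(0.01, 0.3) * np.exp(1j * rng.uniform(-np.pi, np.pi))
        masses, vckm = predict(charges, v_phi)
        E_m = np.abs(np.log10(masses / M_EXP))            # Eq. (3.15)
        E_c = np.abs(np.log10(np.abs(vckm) / V_EXP))      # Eq. (3.19)
        value = -(E_m.sum() + E_c.sum())
        if value > best:
            best, best_E = value, (E_m, E_c)
    return best, best_E

def reward(charges_old, charges_new, predict):
    """Reward R(Q, a) of Eq. (3.23), including the terminal bonus of Eq. (3.22)."""
    v_old, _ = intrinsic_value(charges_old, predict)
    v_new, (E_m, E_c) = intrinsic_value(charges_new, predict)
    r = (v_new - v_old) if v_new > v_old else R_OFFSET
    terminal = abs(v_new) < V0 and (E_m < V1).all() and (E_c < V2).all()
    return r + (R_TERM if terminal else 0.0), terminal
```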
## 4 Learning the quark sector
In this section, we analyze the charge assignment of the quark sector, following the RL with DQN introduced in the previous section. Even for the quark sector within the range of \(U(1)\) charges \(-9\leq q\leq 9\), there exist \(19^{10}\sim 6.1\times 10^{12}\) possible states in the environment. By training the neural network for about 15 hours on a single CPU, we find that terminal states appear after \(\mathcal{O}(20,000)\) episodes, as shown in Fig. 7. The loss function tends to be minimized as in Fig. 7, where the small positive loss corresponds to the existence of various paths to terminal states, as commented in Ref. [12]. We also check that the reward increases when the loss function decreases. The network leads to terminal states in more than 6% of all cases for the total episode number \(N_{\rm ep}=10^{5}\). Then, after removing charge assignments that lead to negative integers \(n_{ij}\) in Eq. (3.3), we are left with 21 terminal states. When we focus only on the quark masses in the training of the neural network, terminal states are obtained in 90% of cases, as reported in Ref. [12]. This implies that reproducing both masses and mixings is a considerably more difficult task for the agent in the search for a realistic flavor pattern.
By performing the Monte-Carlo search with the Gaussian distribution shown in Fig. 6, the \(\mathcal{O}(1)\) coefficients \(y_{ij}\) are optimized to more realistic ones, according to which the intrinsic value is also optimized. We show the benchmark point of the charge assignment with the highest intrinsic value in Table 5, where the masses and mixing angles are well fitted to the observed values up to \(\mathcal{O}(0.1)\%\). This will be improved by a further brute-force search over the parameter space of \(\mathcal{O}(1)\) coefficients. Note that there is no CP phase in the quark sector: even when the angular component of the flavon is non-zero, the CP phase can be rotated to 0 by phase redefinitions of the quark fields. A nonvanishing CP phase in the quark sector can be realized by introducing multiple flavon fields [18], but this is left for future work.
Figure 7: Learning results for the quark sector. The results are the output of neural network leading to the best-fit model shown in Table 5. From left to right, three panels show (a) the loss function vs episode number (b) the fraction of terminal episodes vs episode number (c) the number of terminal states vs episode number, respectively.
## 5 Learning the neutrino structure
In this section, we move to the numerical analysis of the lepton sector, following the RL with DQN introduced in Secs. 2 and 3. Based on the analysis in Sec. 4, we fix the Higgs \(U(1)\) charges and the VEV \(v_{\phi}\) to realize the 21 realistic FN models in the quark sector. However, there still exist \(19^{9}\sim 3.2\times 10^{11}\) possible states within the range of \(U(1)\) charges \(-9\leq q\leq 9\) in the environment. We first analyze the lepton sector with a fixed neutrino mass ordering (normal or inverted) in Sec. 5.1. In the analysis of Sec. 5.2, the neutrino mass ordering is not fixed; thus, one can ask whether the plausible FN models prefer the normal or the inverted ordering of neutrino masses.
### Fixed ordering of neutrino masses
By training the neural network for about 8 hours on a single CPU, we find that terminal states appear after \(\mathcal{O}(5,000)\) episodes for the normal ordering, as shown in Fig. 8. The loss function tends to be minimized as in Fig. 8 until \(\mathcal{O}(50,000)\) episodes.5 It is notable that the reward increases when the loss function decreases. After these critical numbers of episodes, the loss function increases, indicating that the lepton sector reaches terminal states more rapidly than the quark sector. Indeed, the network leads to terminal states in >0.06% of all cases for the total episode number \(N_{\rm ep}=6\times 10^{4}\). After removing the negative
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Charges} & \multicolumn{4}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc|c|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H&\phi\\ \hline 9&8&6&1&3&4&6&5&5&-2&1\end{array}\right)\)} \\ \hline \multirow{2}{*}{\(\mathcal{O}\left(1\right)\) coeff.} & \multirow{2}{*}{\(y^{u}\simeq\left(\begin{array}{ccc|ccc}0.430&0.806&-1.220\\ -0.962&-0.598&-0.747\\ -1.328&-1.172&1.018\end{array}\right)\)} & \multirow{2}{*}{\(,\ y^{d}\simeq\left(\begin{array}{ccc|ccc}-0.996&-0.747&-1.068\\ 0.958&1.441&1.033\\ 0.765&-0.500&-1.029\end{array}\right)\)} \\ \hline VEV & \multicolumn{4}{c|}{\(v_{\phi}\simeq 0.181\cdot e^{-0.863i}\)} \\ \hline Intrinsic value & \multicolumn{4}{c|}{\(\mathcal{V}_{\rm opt}\simeq-0.701\)} \\ \hline Masses & \multirow{2}{*}{\(\left(\begin{array}{ccc|ccc}m_{u}&m_{c}&m_{t}\\ m_{d}&m_{s}&m_{b}\end{array}\right)\simeq\left(\begin{array}{ccc|ccc}0.002&1.468&180.945\\ 0.003&0.102&4.501\end{array}\right)\)} & \multirow{2}{*}{\(\rm GeV\)} \\ \hline Ratios & \multirow{2}{*}{\(\left(\begin{array}{ccc|ccc}E_{u}&E_{c}&E_{t}\\ E_{d}&E_{s}&E_{b}\end{array}\right)\simeq\left(\begin{array}{ccc|ccc}0.008&0.063&0.020\\ 0.149&0.037&0.032\end{array}\right)\)} \\ \hline CKM matrix & \multirow{2}{*}{\(|V_{\rm CKM}|\simeq\left(\begin{array}{ccc|ccc}0.973&0.229&0.004\\ 0.229&0.972&0.057\\ 0.009&0.057&0.998\end{array}\right)\)} \\ \hline Ratios & \multirow{2}{*}{\(E_{\mathcal{C}}\simeq\left(\begin{array}{ccc|ccc}0.000&0.005&0.047\\ 0.004&0.001&0.149\\ 0.033&0.152&0.000\end{array}\right)\)} \\ \hline \hline \end{tabular}
\end{table}
Table 5: Benchmark point for the quark sector.
integers of \(n_{ij}\) in Eq. (3.3) and picking the flavon \(U(1)\) charge consistent with the quark sector, we arrive at 63 and 121 terminal states with normal ordering and inverted ordering, respectively. By performing the Monte-Carlo search over the \(\mathcal{O}(1)\) coefficients \(y_{ij}\) with the Gaussian distribution shown in Fig. 6, the lepton masses and mixings are further optimized to more realistic ones, according to which the intrinsic value is also optimized. Specifically, we performed the Monte-Carlo search 10 times to search for realistic values within \(3\sigma\). In the first 10,000 trials, the \(\mathcal{O}(1)\) coefficients \(y_{ij}\) are optimized by sampling from the Gaussian distribution shown in Fig. 6. Then, starting from the \(\mathcal{O}(1)\) coefficients with the highest intrinsic value among them, we performed a second set of 10,000 trials with a Gaussian distribution whose mean is given by the coefficients obtained in the first Monte-Carlo search and whose standard deviation is 0.25.
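The two-stage Monte-Carlo refinement described above can be sketched as follows, where `value_of(y)` is a hypothetical function evaluating the intrinsic value for a fixed terminal-state charge assignment and a given set of Yukawa coefficients, and the array shape stands in for the coupling matrices \(y^{l}\), \(y^{\nu}\), and \(y^{N}\).

```python
import numpy as np

def refine_couplings(value_of, shape=(3, 3, 3), n_trials=10_000, sigma=0.25,
                     rng=np.random.default_rng(2)):
    # Stage 1: sample O(1) couplings around zero, following the distribution of Fig. 6.
    best_y, best_v = None, -np.inf
    for _ in range(n_trials):
        y = rng.normal(0.0, sigma, size=shape)
        v = value_of(y)
        if v > best_v:
            best_y, best_v = y, v
    # Stage 2: resample around the best coefficients found in the first stage.
    for _ in range(n_trials):
        y = rng.normal(best_y, sigma)
        v = value_of(y)
        if v > best_v:
            best_y, best_v = y, v
    return best_y, best_v
```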
After carrying out the same procedure 10 times in total, we find that the results of 6 models with normal ordering are in agreement with experimental values within \(3\sigma\). We show the benchmark point with the highest intrinsic value in Table 6 for the normal ordering. Here, we list the effective Majorana neutrino mass \(m_{\beta\beta}\) for the neutrinoless double beta decay:
\[m_{\beta\beta}=\left|m_{1}\cos^{2}\theta_{12}\cos^{2}\theta_{13}+m_{2}\sin^{2 }\theta_{12}\cos^{2}\theta_{13}e^{i\alpha_{21}}+m_{3}\sin^{2}\theta_{13}e^{i (\alpha_{31}-2\delta_{\rm CP})}\right|, \tag{5.1}\]
which would be measured by the KamLAND-Zen experiment [19]. In this analysis, we assume the parameter \(M=10^{15}\) GeV to realize the tiny neutrino masses with \(\mathcal{O}(1)\) coefficients of Yukawa couplings, but we leave the detailed study with different values of \(M\) for future work. Note that the angular component of flavon leads to the nonvanishing Majorana CP phases in contrast to the quark sector.6 Thus, one can analyze the correlation between mixing angles and CP phase as shown in Figs. 9 and 10 for the normal ordering, in which all the terminal states within \(3\sigma\) are shown. The CP phase \(\alpha_{21}\) is predicted at 0.
Figure 8: Learning results for the lepton sector with fixed NO of neutrino masses. The results are the output of neural network leading to the best-fit model (the square in Figs. 9 and 10). From left to right, three panels show (a) the loss function vs episode number (b) the fraction of terminal episodes vs episode number (c) the number of terminal states vs episode number, respectively.
Note that the information of the CP phase has not been implemented in the learning of neural network.
Remarkably, one cannot find inverted-ordering neutrino masses consistent with the experimental values within \(3\sigma\), although we perform the Monte-Carlo search 10 times over the \(\mathcal{O}(1)\) coefficients \(y_{ij}\) of the 121 terminal states. This indicates that the normal ordering is favored by the autonomous behavior of the agent. Indeed, the intrinsic value of the normal ordering after the Monte-Carlo search tends to be larger than that of the inverted ordering, as shown in Fig. 11.
\begin{table}
\end{table}
Table 6: Benchmark point for the lepton sector with NO, where the neutrino mass ordering is fixed in the learning of the network.
Figure 9: Neutrino masses vs mixing angle \(\theta_{23}\), where the dotted line represents the global best fit value in NuFIT v5.2 results with Super-Kamiokande atmospheric data [17], and the inside region of each line represents dashed line \(\leq 1\sigma\), dotdashed line \(\leq 3\sigma\) CL, respectively. The sum of neutrino masses is constrained by 0.13 eV (95% CL) corresponding to the black solid line in the case of \(\Lambda\)CDM model [20]. We denote a best-fit point within \(3\sigma\) by a square, and the intrinsic value (3.13) is written in the legend. Note that the neutrino mass ordering is fixed as NO in the training of the neural network.
Figure 10: Majorana phases \(\alpha_{21},\alpha_{31}\) and effective Majorana neutrino mass \(m_{\beta\beta}\) vs mixing angle \(\theta_{23}\), where the dotted line represents the global best fit value in NuFIT v5.2 results with Super-Kamiokande atmospheric data [17], and the inside region of each line represents dashed line \(\leq 1\sigma\), dotdashed line \(\leq 3\sigma\) CL, respectively. The effective Majorana neutrino mass is upper bounded by 0.13 eV (95% CL) corresponding to the black solid line [20]. We denote a best-fit point within \(3\sigma\) by a square, and the intrinsic value (3.13) is written in the legend. Note that the neutrino mass ordering is fixed as NO in the training of the neural network.
### Unfixed ordering of neutrino masses
In this subsection, we train the neural network without specifying the neutrino mass ordering. For each of the 21 realistic FN models in the quark sector, we performed the training twice to obtain a sufficient number of realistic models. Similar to the previous analyses, the neural network is trained for about 12 hours on a single CPU. It turned out that terminal states are found after \(\mathcal{O}(2,000)\) episodes, as shown in Fig. 12, where the loss function tends to decrease until \(\mathcal{O}(8,000)\) episodes. It is notable that the reward increases when the loss function decreases, and the lepton sector reaches terminal states more rapidly than the quark sector. The network leads to terminal states in more than about 60% of all cases for the total episode number \(N_{\rm ep}=6\times 10^{4}\). In contrast to the previous analysis, the trained network efficiently leads to terminal states.7 After removing the negative integers of \(n_{ij}\) in Eq. (3.3), we arrive at 13,733 (13,432) and 22,430 (20,357) terminal states with normal ordering and inverted ordering in the first (second) learning, respectively. By performing the Monte-Carlo search over the \(\mathcal{O}(1)\) coefficients \(y_{ij}\) with the Gaussian distribution shown in Fig. 6, the lepton masses and mixings are optimized to more realistic ones, according to which the intrinsic value is also optimized. Specifically, we performed the Monte-Carlo search two times to search for realistic values within \(3\sigma\). In the first Monte-Carlo search, we ran 10,000 trials with the Gaussian distribution shown in Fig. 6. Then, starting from the \(\mathcal{O}(1)\) coefficients with the highest intrinsic value among them, we performed a second set of 10,000 trials with a Gaussian distribution whose mean is given by the coefficients obtained in the first Monte-Carlo search and whose standard deviation is 0.25.
Footnote 7: Note that specifying the neutrino mass ordering in RL also reproduces the experimental values with high performance.
Figure 11: Boxplots of intrinsic values for the lepton sector, where the neutrino mass ordering is fixed in the learning of neural network. In the left panel, we show intrinsic values obtained in the RL with the lepton sector, but in the right panel, we incorporate the values of the quark sector analyzed in Sec. 4.
After carrying out the Monte-Carlo analysis, we find that the results of 15 models with normal ordering are in agreement with experimental values within \(3\sigma\). Two best fit points with the highest intrinsic value are shown in Tables 7 and 8 for the normal ordering. As presented in the previous section, one can analyze the correlation between mixing angles and the other observed values as shown in Figs. 13 and 14 for the normal ordering, in which all the terminal states within \(3\sigma\) are shown. It turned out that the Majorana CP phases are typically nonzero, and the summation of neutrino masses and the effective mass are not widely distributed but tend to be localized at \(\sum_{i}m_{\nu_{i}}\sim 60\,\mathrm{meV}\) and \(2\,\mathrm{meV}\leq m_{\beta\beta}\leq 6\,\mathrm{meV}\), respectively.
Figure 12: Learning results for the lepton sector by RL without specifying the neutrino mass ordering. The results are the output of neural network leading to the best-fit model (the diamond in Figs. 13 and 14). We observe a similar behavior for other outputs. From left to right, three panels show (a) the loss function vs episode number (b) the fraction of terminal episodes vs episode number (c) the number of terminal states vs episode number, respectively.
\begin{table}
\begin{tabular}{l|c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Charges} & \multicolumn{6}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3}&H&\phi\\ \hline 3&3&2&-3&0&0&0&-3&0&0&-2&1\end{array}\right)\)} \\ \hline \multirow{2}{*}{\(\mathcal{O}\left(1\right)\) coeff.} & \multirow{2}{*}{\(y^{l}\simeq\left(\begin{array}{ccc|ccc}1.728&-1.717&1.790\\ 1.225&-0.456&-1.589\\ -2.243&-2.316&-2.664\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc|ccc}-1.737&-1.060&2.712\\ 3.083&-1.698&-0.342\\ -0.396&0.9445&-0.287\end{array}\right)\)} \\ & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \(y^{N}\simeq\left(\begin{array}{ccc|ccc}-1.031&2.275&1.453\\ 2.275&-0.457&0.333\\ 1.453&0.333&1.559\end{array}\right)\)} \\ \hline \multicolumn{2}{l|}{VEV} & \multicolumn{6}{c}{\(v_{\phi}\simeq 0.181\cdot e^{-0.863i}\)} \\ \hline \multicolumn{2}{l|}{Intrinsic value} & \multicolumn{6}{c}{\(\mathcal{V}_{\rm opt}\simeq-0.859\)} \\ \hline \multirow{2}{*}{\begin{tabular}{l} Masses (output) \\ \end{tabular} } & \(\left(\begin{array}{ccc}m_{e}&m_{\mu}&m_{\tau}\end{array}\right)\simeq\left( \begin{array}{ccc}4.960\times 10^{-1},&8.575\times 10^{1},&6.553\times 10^{2} \end{array}\right)\) MeV \\ & \(\left(\begin{array}{ccc}m_{\nu_{1}}&m_{\nu_{2}}&m_{\nu_{3}}\end{array}\right) \simeq\left(\begin{array}{ccc}0.210,&8.869,&50.18\end{array}\right)\) meV \\ \hline \multirow{2}{*}{\begin{tabular}{l} Ratios (masses) \\ \end{tabular} } & \(\left(\begin{array}{ccc}E_{e}&E_{\mu}&E_{\tau}\\ E_{\nu_{21}}&E_{\nu_{31}}\end{array}\right)\simeq\left(\begin{array}{ccc}0.013 &0.091&0.433\\ 0.026&0.002\end{array}\right)\) \\ \hline \multirow{2}{*}{PMNS matrix (output)} & \multirow{2}{*}{\(|V_{\rm PMNS}|\simeq\left(\begin{array}{ccc}0.823&0.548&0.149\\ 0.332&0.677&0.656\\ 0.460&0.491&0.740\end{array}\right)\)} \\ \hline \multirow{2}{*}{
\begin{tabular}{l} Ratios (mixings) \\ \end{tabular} } & \multirow{2}{*}{\(E_{\mathcal{P}}\simeq\left(\begin{array}{ccc}0.000&0.001&0.001\\ 0.048&0.054&0.028\\ 0.066&0.067&0.029\end{array}\right)\)} \\ \hline \multicolumn{2}{l|}{Majorana phases} & \multicolumn{6}{c}{\(\alpha_{21}\simeq 0.0,\ \alpha_{31}\simeq 0.549\pi\)} \\ \hline \multicolumn{2}{l|}{Effective mass} & \multicolumn{6}{c}{\(m_{\beta\beta}\simeq 2.850\) meV} \\ \hline \hline \end{tabular}
\end{table}
Table 7: Benchmark point for the lepton sector with NO (corresponding to the diamond in Figs. 13 and 14), where the neutrino mass ordering is not specified in the learning of the network.
Remarkably, one cannot obtain the experimental values of neutrino masses and mixings within \(3\sigma\) for the inverted ordering, although we perform the Monte-Carlo searches over the \(\mathcal{O}(1)\) coefficients \(y_{ij}\) of all the terminal states. Thus, the normal ordering of neutrino masses is also favored by the trained neural network, although the neural network itself was trained without any knowledge of neutrino mass ordering. Indeed, the intrinsic value of normal ordering after the Monte-Carlo search tends to be larger than that of inverted ordering as shown in Fig. 15. This conspicuous feature can also be seen by looking at the intrinsic value including both the quark and lepton sectors.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline Charges & \multicolumn{3}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc|c|c}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3}&H&\phi\\ \hline 2&3&1&-7&-8&-1&-2&-5&-1&-1&1\end{array}\right)\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{ccc|ccc}-0.424&-0.567&0.897\\ -0.482&-0.787&0.827\\ 0.141&-0.704&0.565\end{array}\right)\) & \(y^{\nu}\simeq\left(\begin{array}{ccc|ccc}-1.243&1.096&0.396\\ -0.898&-1.501&-3.224\\ 2.361&2.246&-1.668\end{array}\right)\) \\ \multicolumn{1}{c}{} & \multicolumn{3}{c|}{\(y^{N}\simeq\left(\begin{array}{ccc|ccc}2.311&0.877&-1.491\\ 0.877&-1.746&0.186\\ -1.491&0.186&-0.283\end{array}\right)\)} \\ \hline VEV & \multicolumn{3}{c}{\(v_{\phi}\simeq 0.268\cdot e^{-0.166i}\)} \\ \hline Intrinsic value & \multicolumn{3}{c}{\(\mathcal{V}_{\text{opt}}\simeq-0.720\)} \\ \hline Masses (output) & \(\left(\begin{array}{ccc}m_{e}&m_{\mu}&m_{\tau}\end{array}\right)\simeq\left(\begin{array}{ccc}4.067\times 10^{-1},&1.483\times 10^{2},&2.066\times 10^{3}\end{array}\right)\) MeV \\ \hline Ratios (masses) & \(\left(\begin{array}{ccc}E_{e}&E_{\mu}&E_{\tau}\\ E_{\nu_{21}}&E_{\nu_{31}}\end{array}\right)\simeq\left(\begin{array}{ccc}0.099&0.147&0.066\\ 0.011&0.000\end{array}\right)\) \\ \hline PMNS matrix (output) & \(\left|V_{\text{PMNS}}\right|\simeq\left(\begin{array}{ccc}0.817&0.556&0.151\\ 0.499&0.552&0.668\\ 0.288&0.621&0.729\end{array}\right)\) \\ \hline Ratios (mixings) & \(E_{\mathcal{P}}\simeq\left(\begin{array}{ccc}0.004&0.008&0.005\\ 0.129&0.035&0.020\\ 0.137&0.035&0.023\end{array}\right)\) \\ \hline Majorana phases & \multicolumn{3}{c}{\(\alpha_{21}\simeq 0.106\pi,\ \alpha_{31}\simeq-0.211\pi\)} \\ \hline Effective mass & \multicolumn{3}{c}{\(m_{\beta\beta}\simeq 5.040\) meV} \\ \hline \hline \end{tabular}
\end{table}
Table 8: Benchmark point for the lepton sector with NO (corresponding to the square in Figs. 13 and 14), where the neutrino mass ordering is not specified in the learning of the network.
Figure 13: Neutrino masses vs mixing angle \(\theta_{23}\), where the dotted line represents the global best fit value in NuFIT v5.2 results with Super-Kamiokande atmospheric data [17], and the inside region of each line represents dashed line \(\leq 1\sigma\), dotdashed line \(\leq 3\sigma\) CL, respectively. The sum of neutrino masses is constrained by 0.13 eV (95% CL) corresponding to the black solid line in the case of \(\Lambda\)CDM model [20]. We denote two best-fit points within \(3\sigma\) by a diamond and a square, and the intrinsic value (3.13) is written in the legend. Since the neutrino mass ordering is unfixed in the training of the neural network, we show only NO results from the terminal states.
Figure 14: Majorana phases \(\alpha_{21},\alpha_{31}\) and effective Majorana neutrino mass \(m_{\beta\beta}\) vs mixing angle \(\theta_{23}\), where the dotted line represents the global best fit value in NuFIT v5.2 results with Super-Kamiokande atmospheric data [17], and the inside region of each line represents dashed line \(\leq 1\sigma\), dotdashed line \(\leq 3\sigma\) CL, respectively. The effective Majorana neutrino mass is upper bounded by 0.13 eV (95% CL) corresponding to the black solid line [20]. We denote two best-fit points within \(3\sigma\) by a diamond and a square, and the intrinsic value (3.13) is written in the legend.
## 6 Conclusion
Flavor symmetries are one of the attractive tools to understand the flavor structure of quarks and leptons. To address the flavor puzzle in the Standard Model, we have applied the reinforcement learning technique to flavor models with a \(U(1)\) horizontal symmetry. RL sheds new light on the phenomenological approach of scanning over the parameter space of flavor models, in contrast to the brute-force approach.
In this paper, we have extended the analysis of Ref. [12] to explore the flavor structure of quarks and leptons by employing RL with DQN. Based on the neural network architectures in the framework of the \(U(1)\) flavor model with RL established in Secs. 2 and 3, the agent is designed to exhibit autonomous behavior in the environment (the parameter space of \(U(1)\) charges). Since the parameter space of \(U(1)\) charges is huge, we have performed separate searches for the \(U(1)\) charge assignments of quarks and leptons. The trained neural network leads to phenomenologically promising terminal states in more than 6% of cases for the quark sector and more than 60% for the lepton sector in the case of unfixed ordering of neutrino masses. In the analysis of Sec. 5.2, we have not specified the neutrino mass ordering in the evaluation of the intrinsic value, meaning that the agent does not have any knowledge of the neutrino mass ordering. However, the autonomous behavior of the agent suggests that the intrinsic value of normal ordering tends to be larger than that of inverted ordering, as shown in Fig. 15, and the normal ordering is well fitted with the current experimental data, in contrast to the inverted ordering. Remarkably, the effective mass for the neutrinoless double beta decay is predicted around specific values, and the Majorana CP phases are in general nonzero.
Figure 15: Boxplots of intrinsic values for the lepton sector from the result of the second learning, where the neutrino mass ordering is unspecified in the learning of the neural network. In the left panel, we show the intrinsic values obtained in the RL for the lepton sector, while in the right panel we incorporate the values of the quark sector analyzed in Sec. 4. We obtain similar results in the first learning.
Before closing our paper, it is worthwhile to mention possible applications of our analysis:
* We have focused on the flavor structure of Yukawa couplings, but our approach is easily applicable to revealing the flavor structure of higher-dimensional operators (see Ref. [21] for the Standard Model effective field theory (SMEFT) with \(U(1)\) flavor symmetry and Ref. [22] for discrete symmetry). Since the trained neural network predicts the plausible charge assignment of quarks and leptons, one can also determine the flavor structure of higher-dimensional operators. It would be interesting to clarify whether the RL technique we proposed can explore the flavor structure of the SMEFT.
* On top of that, the CP-odd fluctuation of the complex flavon field (flaxion) can be regarded as the QCD axion, as discussed in Refs. [23; 24; 25; 26], where cosmological problems (such as the origin of dark matter, the baryon asymmetry of the Universe, and inflation) are simultaneously solved by the dynamics of the flavon field. Since the flavon field has flavor changing neutral current (FCNC) interactions with quarks and leptons controlled by the \(U(1)\) flavor symmetry, the charge assignment of quarks and leptons plays an important role in determining the FCNC processes. It is fascinating to apply the charge assignments we have found to such axion physics; this is left for future work.
* We have focused on the \(U(1)\) horizontal symmetry, but our approach is easily applicable to other flavor symmetries such as discrete flavor symmetries. We hope to present a comprehensive study of the global structure of flavor models in an upcoming paper.
###### Acknowledgements.
This work was supported in part by Kyushu University's Innovator Fellowship Program (S. N., C. M.), JSPS KAKENHI Grant Numbers JP20K14477 (H. O.) and JP23H04512 (H. O.).
## Appendix A FN charges
We list the charge assignments of quarks found by the RL in Appendix A.1. For the lepton sector, we present the results of the RL by picking up only those models for which the theoretical values for neutrinos with the normal ordering lie within \(3\sigma\), considering \((\Delta m_{21}^{2},\Delta m_{31}^{2},\sin^{2}\theta_{12},\sin^{2}\theta_{13},\sin^{2}\theta_{23})\). These are summarized in Appendices A.2 and A.3, where the neutrino mass ordering is specified and unspecified in the learning of the neural network, respectively.
### Quark sector
\begin{tabular}{l|c|c c|c c|c c|c c|c c} \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H& \phi\\ \hline 0&-2&3&1&7&1&8&9&7&-2&-1\end{array}\right)\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccc|ccc|ccc}0.856&1.537&0.895\\ -1.071&0.833&-1.377\\ 1.181&-1.507&0.805\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccc|ccc|ccc}-0.915&1.347&-0.746\\ -0.559&1.108&0.884\\ 0.898&1.056&1.111\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c|}{\(v_{\phi}\simeq 0.294\cdot e^{-0.930i}\,\ \mathcal{V}_{\rm opt}\simeq-1.834\)} \\ \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc}Q_{1}&Q_{2}&Q_{3}\\ \hline-7&-9&-4&-6&-5&-7\end{array}\right)\)} & \multicolumn{1}{c|}{\(y^{d}\simeq\left(\begin{array}{ccc|ccc|ccc}-0.995&0.870&-0.985\\ 1.199&-1.127&-1.412\\ -1.325&-0.882&-1.020\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{1}{c|}{\(v_{\phi}\simeq 0.283\cdot e^{-0.860i}\,\ \mathcal{V}_{\rm opt}\simeq-1.416\)} \\ \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc}Q_{1}&Q_{2}&Q_{3}\\ \hline-6&-5&-3\end{array}\right)\)} & \multicolumn{1}{c|}{\(5\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(2\)} & \multicolumn{1}{c|}{\(-5\)} & \multicolumn{1}{c|}{\(-7\)} & \multicolumn{1}{c|}{\(-5\)} & \multicolumn{1}{c|}{\(-1\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccc|ccc|ccc}-1.451&1.161&-0.722\\ -1.475&0.953&-1.262\\ 0.798&0.865&1.182\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccc|ccc|ccc}1.229&-0.976&0.756 \\ -1.280&-0.985&-1.159\\ 1.235&0.355&0.825\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c|}{\(v_{\phi}\simeq 0.171\cdot e^{0.649i}\,\ \mathcal{V}_{\rm opt}\simeq-2.066\)} \\ \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc}Q_{1}&Q_{2}&Q_{3}\\ \hline-6&-8&-3\end{array}\right)\)} & \multicolumn{1}{c|}{\(3\)} & \multicolumn{1}{c|}{\(7\)} & \multicolumn{1}{c|}{\(1\)} & \multicolumn{1}{c|}{\(-5\)} & \multicolumn{1}{c|}{\(-3\)} & \multicolumn{1}{c|}{\(-5\)} & \multicolumn{1}{c|}{\(-4\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccc|ccc|ccc}1.498&1.767&-1.223\\ 1.090&-1.531&-1.080\\ -1.035&-1.383&-0.864\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccc|ccc|ccc}0.778&1.466&0.921 \\ -0.945&-0.858&-1.249\\ -1.077&0.555&-1.006\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c|}{\(v_{\phi}\simeq 0.300\cdot e^{0.912i}\,\ \mathcal{V}_{\rm opt}\simeq-1.881\)} \\ \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{ccc|ccc|ccc}Q_{1}&Q_{2}&Q_{3}\\ \hline-3&-2&0\end{array}\right)\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(4\)} & \multicolumn{1}{c|}{\(4\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccc|ccc|ccc}0.856&1.537&0.895\\ -1.071&0.833&-1.377\\ 1.181&-1.507&0.805\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccc|ccc|ccc}-0.915&1.347&-0.746 \\ -0.559&1.108&0.884\\ 0.898&1.056&1.111\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c|}{\(v_{\phi}\simeq 0.294\cdot e^{-0.930i}\,\ \mathcal{V}_{\rm opt}\simeq-1.834\)} \\ \hline \hline \end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Charges & \multicolumn{5}{c|}{\({\cal Q}=\left(\begin{array}{ccccc|c|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H&\phi\\ \hline-5&-4&-2&3&2&0&-1&-2&-1&2&-1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccccc|c}-0.812&-1.382&1.015\\ 0.949&0.924&-1.598\\ -1.319&-1.511&-0.935\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccccc|c}-1.129&-0.680&-0.789\\ 0.505&-0.857&-0.994\\ -1.033&-1.134&1.287\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{5}{c|}{\(v_{\phi}\simeq 0.172\cdot e^{1.985i}\,\ {\cal V}_{\rm opt}\simeq-1.240\)} \\ \hline \hline Charges & \multicolumn{5}{c|}{\({\cal Q}=\left(\begin{array}{ccccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H&\phi\\ \hline 3&5&0&1&-3&1&-4&-5&-3&1&1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccccc|c}1.030&-0.155&1.152\\ -0.934&1.382&-0.996\\ -1.349&1.185&-1.036\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccccc|c}0.323&-1.270&1.333 \\ -0.945&1.084&0.987\\ 0.925&-1.042&0.852\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{5}{c|}{\(v_{\phi}\simeq 0.289\cdot e^{-1.223i}\,\ {\cal V}_{\rm opt}\simeq-1.405\)} \\ \hline \hline Charges & \multicolumn{5}{c|}{\({\cal Q}=\left(\begin{array}{ccccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H&\phi\\ \hline 1&0&-2&1&-2&2&-8&-8&-8&4&1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccccc|c}0.999&0.909&-1.021\\ -0.385&1.263&0.820\\ -1.117&0.825&-0.940\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccccc|c}-1.019&-0.966&-1.312 \\ 1.064&0.979&0.800\\ 1.052&0.824&1.242\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{5}{c|}{\(v_{\phi}\simeq 0.173\cdot e^{1.916i}\,\ {\cal V}_{\rm opt}\simeq-1.158\)} \\ \hline \hline Charges & \multicolumn{5}{c|}{\({\cal Q}=\left(\begin{array}{ccccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3} &H&\phi\\ \hline -4&-6&-1&5&9&3&-2&-1&-3&4&-1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{ccccc|c}1.222&-1.247&1.028\\ -1.169&-1.066&-1.216\\ 1.172&0.916&-1.204\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{ccccc|c}-1.021&0.900&0.793\\ 0.804&1.260&0.375\\ -1.029&0.973&0.401\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{5}{c|}{\(v_{\phi}\simeq 0.294\cdot e^{-2.999i}\,\ {\cal V}_{\rm opt}\simeq-1.676\)} \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Charges & \multicolumn{5}{c|}{\({\cal Q}=\left(\begin{array}{c}Q_{1}\ \ Q_{2}\ \ Q_{3}\\ \hline-3\ -2\ \ 0\end{array}\
\begin{tabular}{|c|c|c c|c c|c c|c c|} \hline Charges & \multicolumn{1}{c}{\({\cal Q}=\left(\begin{array}{cccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H& \phi\\ \hline-6&-5&-3&-5&-6&-8&5&3&5&-5&-1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{cccc|c}0.977&-0.755&0.909\\ -1.460&-1.720&-1.092\\ 0.731&-0.836&1.109\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{cccc|c}1.290&-0.798&-0.694 \\ -0.461&1.375&-1.045\\ -0.560&-0.628&0.843\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.171\cdot e^{-2.525i}\,\ {\cal V}_{\rm opt}\simeq-2.236\)} \\ \hline \hline Charges & \multicolumn{1}{c}{\({\cal Q}=\left(\begin{array}{cccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3}&H&\phi\\ \hline-5&-7&-2&6&4&3&-3&-3&-4&5&-1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{cccc|c}-1.137&-1.003&-1.111\\ 1.483&1.324&1.763\\ -1.162&1.346&-1.035\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{cccc|c}-0.758&-0.830&-1.025\\ 1.349&-1.234&-0.979\\ 1.101&-1.417&1.060\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.290\cdot e^{-2.680i}\,\ {\cal V}_{\rm opt}\simeq-1.077\)} \\ \hline \hline Charges & \multicolumn{1}{c}{\({\cal Q}=\left(\begin{array}{cccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3} &H&\phi\\ \hline-4&-6&-1&2&4&-1&3&5&3&0&-1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{cccc|c}-0.672&-0.788&1.315\\ 1.186&-0.642&0.970\\ 1.060&0.924&0.604\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{cccc|c}-0.796&-1.026&-1.109 \\ 1.127&1.424&-1.072\\ -1.364&1.139&-1.078\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.300\cdot e^{2.723i}\,\ {\cal V}_{\rm opt}\simeq-2.409\)} \\ \hline \hline Charges & \multicolumn{1}{c}{\({\cal Q}=\left(\begin{array}{cccc|c}Q_{1}&Q_{2}&Q_{3}&u_{1}&u_{2}&u_{3}&d_{1}&d_{2}&d_{3} &H&\phi\\ \hline-4&-6&-1&2&5&-1&2&5&2&0&-1\end{array}\right)\)} \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{u}\simeq\left(\begin{array}{cccc|c}-1.334&1.343&-0.781\\ -1.592&-1.149&-1.334\\ -1.565&-0.969&1.063\end{array}\right)\,\ y^{d}\simeq\left(\begin{array}{cccc|c}1.598&-1.740&1.046 \\ -1.041&-1.034&0.984\\ 0.727&0.933&1.393\end{array}\right)\) \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.300\cdot e^{2.498i}\,\ {\cal V}_{\rm opt}\simeq-2.267\)} \\ \hline \end{tabular}
### Lepton sector (RL with NO designated)
\begin{tabular}{c|c c|c c|c c|c c|c c} \hline \hline Charges & \multicolumn{3}{c|}{\(\mathcal{Q}=\left(\begin{array}{cccc|cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3}&H& \phi\\ \hline-1&-1&0&2&4&1&5&6&4&0&-1\end{array}\right)\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{cccc|cccc}-0.705&-0.713&0.799\\ -1.439&-1.472&1.516\\ -0.157&0.186&-2.133\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{cccc|cccc}0.659&-0.836&-1.219\\ -1.264&-2.891&0.651\\ -0.465&-1.062&1.138\end{array}\right)\) \\ \multicolumn{1}{c}{} & \multicolumn{3}{c|}{\(y^{N}\simeq\left(\begin{array}{cccc|cccc}2.553&1.290&1.402\\ 1.290&1.224&-0.902\\ 1.402&-0.902&0.105\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{3}{c|}{\(v_{\phi}\simeq 0.300\cdot e^{2.723i}\,\ \mathcal{V}_{\rm opt}\simeq-0.915\)} \\ \hline \hline Charges & \multicolumn{3}{c|}{\(\mathcal{Q}=\left(\begin{array}{cccc|cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3} &H&\phi\\ \hline 2&1&2&-2&-7&-5&-7&-3&-3&-1&1\end{array}\right)\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{cccc|cccc}3.386&-0.205&-2.696\\ -0.523&-1.396&3.760\\ 1.375&-0.561&-2.044\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{cccc|cccc}-0.774&-2.694&1.101 \\ -0.949&-0.905&-0.432\\ -2.286&-0.314&1.325\end{array}\right)\) \\ \multicolumn{1}{c}{} & \multicolumn{3}{c|}{\(y^{N}\simeq\left(\begin{array}{cccc|cccc}1.246&0.199&1.121\\ 0.199&1.280&-0.879\\ 1.121&-0.879&-0.214\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{3}{c|}{\(v_{\phi}\simeq 0.268\cdot e^{-0.166i}\,\ \mathcal{V}_{\rm opt}\simeq-0.611\)} \\ \hline \hline Charges & \multicolumn{3}{c|}{\(\mathcal{Q}=\left(\begin{array}{cccc|cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3} &H&\phi\\ \hline 2&1&2&-8&-1&-9&-7&-3&-1&1\end{array}\right)\)} \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{cccc|cccc}-0.889&2.056&-0.299\\ -1.584&-2.697&1.542\\ -0.797&0.918&1.501\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{cccc|cccc}1.135&-1.331&0.128 \\ 1.207&-1.203&-0.051\\ -0.671&-2.639&0.074\end{array}\right)\) \\ \multicolumn{1}{c}{} & \multicolumn{3}{c|}{\(y^{N}\simeq\left(\begin{array}{cccc|cccc}1.125&-0.388&0.950\\ -0.388&1.066&-0.349\\ 0.950&-0.349&-0.656\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{3}{c|}{\(v_{\phi}\simeq 0.268\cdot e^{-0.166i}\,\ \mathcal{V}_{\rm opt}\simeq-0.853\)} \\ \hline \end{tabular}
### Lepton sector (RL without specifying the neutrino mass ordering)
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline Charges & \(\mathcal{Q}=\left(\begin{array}{ccc}L_{1}&L_{2}&L_{3}\\ \hline-6&-5&-7\end{array}\right]\) & \(\begin{array}{ccc}N_{1}&N_{2}&N_{3}\\ \end{array}\) & \(\begin{array}{ccc}l_{1}&l_{2}&l_{3}\\ \end{array}\) & \(\begin{array}{ccc}H&\phi\\ \end{array}\) \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{ccc}-0.688&-1.090&-1.149\\ 0.459&-1.353&-0.229\\ 1.044&-0.597&-3.286\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc}-1.270&-1.387&3.625\\ -0.230&1.512&0.826\\ -1.327&0.590&-1.473\end{array}\right)\) \\ & & \(y^{N}\simeq\left(\begin{array}{ccc}-1.328&-1.209&-0.765\\ -1.209&0.714&0.571\\ -0.765&0.571&2.154\end{array}\right)\) & \\ \hline VEV, Value & \(v_{\phi}\simeq 0.171\cdot e^{0.649i}\,\ \mathcal{V}_{\rm opt}\simeq-0.559\) & \\ \hline \hline Charges & \(\mathcal{Q}=\left(\begin{array}{ccc}L_{1}&L_{2}&L_{3}\\ \hline-5&-5&-4\end{array}\right]\) & \(\begin{array}{ccc}N_{1}&N_{2}&N_{3}\\ \end{array}\) & \(\begin{array}{ccc}l_{1}&l_{2}&l_{3}\\ \end{array}\) & \(\begin{array}{ccc}H&\phi\\ \end{array}\) \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{ccc}-1.146&0.729&-0.022\\ 0.954&1.968&-1.317\\ -1.070&0.476&-1.263\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc}-1.236&0.755&0.613 \\ 1.308&0.545&-1.086\\ 0.567&-1.310&-1.059\end{array}\right)\) \\ & & \(y^{N}\simeq\left(\begin{array}{ccc}0.741&1.092&0.703\\ 1.092&-0.498&-0.999\\ 0.703&-0.999&-0.438\end{array}\right)\) & \\ \hline VEV, Value & \(v_{\phi}\simeq 0.300\cdot e^{-1.998i}\,\ \mathcal{V}_{\rm opt}\simeq-1.349\) & \\ \hline \hline Charges & \(\mathcal{Q}=\left(\begin{array}{ccc}L_{1}&L_{2}&L_{3}\\ \hline-5&-6&-4\end{array}\right]\) & \(\begin{array}{ccc}N_{1}&N_{2}&N_{3}\\ \end{array}\) & \(\begin{array}{ccc}l_{1}&l_{2}&l_{3}\\ \end{array}\) & \(\begin{array}{ccc}H&\phi\\ \end{array}\) \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{ccc}-2.733&-0.172&2.087\\ -0.443&-0.578&0.215\\ 1.717&-0.553&0.961\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc}0.027&-0.994 &1.790\\ 1.267&1.215&1.901\\ -1.875&-1.609&1.474\end{array}\right)\) \\ & & \(y^{N}\simeq\left(\begin{array}{ccc}1.318&-1.208&1.567\\ -1.208&0.724&0.904\\ 1.567&0.904&-1.328\end{array}\right)\) & \\ \hline VEV, Value & \(v_{\phi}\simeq 0.294\cdot e^{-2.999i}\,\ \mathcal{V}_{\rm opt}\simeq-0.829\) & \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Charges & \multicolumn{3}{c|}{\({\cal Q}=\left(\begin{array}{ccccc|c}L_{1}&L_{2}&L_{3}\\ \hline 3&3&2&-3&0&0\\ \end{array}\right)\) & \(\begin{array}{ccccc|c}-3&0&0&-2&1\\ \hline 2&2&2&2\\ \end{array}\) & \(\begin{array}{ccccc|c}-1.737&-1.060&2.712\\ \end{array}\) \\ \hline \({\cal O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{ccccc}1.728&-1.717&1.790\\ 1.225&-0.456&-1.589\\ -2.243&-2.316&-2.664\\ \end{array}\right)\) & \(\begin{array}{ccccc}\;y^{\nu}\simeq\left(\begin{array}{ccccc}-1.737&-1.060&2.712 \\ 3.083&-1.698&-0.342\\ -0.396&0.945&-0.287\\ \end{array}\right)\) \\ \hline \(\begin{array}{ccccc}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\begin{tabular}{l|c c|c c c|c c|c c|c c} \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}\\ \hline-5&-5&-5&-5\end{array}\right]\) & \(\begin{array}{cccc}4&9&1&-7&-3&-1\end{array}\) & \(\begin{array}{cccc}5&-1\end{array}\) \\ \hline \(\mathcal{O}\left(1\right)\) coeff. & \(y^{l}\simeq\left(\begin{array}{cccc}1.023&-1.307&-1.767\\ 0.477&1.943&1.116\\ 0.684&-0.425&-0.585\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{cccc}-0.821&-0.611&-1.179\\ -0.908&0.623&0.373\\ 0.548&-0.481&-0.604\end{array}\right)\) \\ \cline{2-10} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{cccc}-0.494&-1.719&1.384\\ -1.719&-0.665&-1.049\\ 1.384&-1.049&-0.629\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.288\cdot e^{-1.881i}\,\ \mathcal{V}_{\rm opt}\simeq-1.138\)} \\ \hline \hline Charges & \multicolumn{1}{c|}{\(\mathcal{Q}=\left(\begin{array}{cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}\\ \hline-2&-3&-2\end{array}\right]\) & \(\begin{array}{cccc}N_{1}&1&1&1&1&1\\ 0.477&1.943&1.116\\ 0.585&-0.585&-0.585\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{cccc}-0.8 21&-0.611&-1.179\\ -0.908&0.623&0.373\\ 0.548&-0.481&-0.604\end{array}\right)\)} \\ \cline{2-10} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{cccc}-0.494&-1.719&1.384\\ -1.719&-0.665&-1.049\\ 1.384&-1.049&-0.629\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.288\cdot e^{-1.881i}\,\ \mathcal{V}_{\rm opt}\simeq-1.138\)} \\ \hline \hline Charges & \multicolumn{1}{c}{\(\mathcal{Q}=\left(\begin{array}{cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}\\ \hline-3&-2&-2\end{array}\right]\) & \(\begin{array}{cccc}N_{1}&1&1&1&1\\ 0.477&1.943&1.116\\ 0.585&-0.585\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{cccc}-0.646&-0.613&1.543\\ 2.919&0.630&0.799\\ 1.475&-0.413&1.272\end{array}\right)\)} \\ \cline{2-10} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{cccc}2.432&2.685&0.485\\ 2.685&-1.013&1.135\\ 0.485&1.135&-2.252\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.185\cdot e^{-0.236i}\,\ \mathcal{V}_{\rm opt}\simeq-0.734\)} \\ \hline \hline Charges & \multicolumn{1}{c}{\(\mathcal{Q}=\left(\begin{array}{cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}\\ \hline-3&-2&-2\end{array}\right]\)} & \(\begin{array}{cccc}N_{1}&1&1&1&1\\ 0.477&1.943&1.116\\ 0.584&-0.481&-0.604\end{array}\right)\) \\ \cline{2-10} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{cccc}-0.494&-1.719&1.384\\ -1.719&-0.665&-1.049\\ 1.384&-1.049&-0.629\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.288\cdot e^{-1.881i}\,\ \mathcal{V}_{\rm opt}\simeq-1.138\)} \\ \hline \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.288\cdot e^{-1.881i}\,\ \mathcal{V}_{\rm opt}\simeq-1.138\)} \\ \hline \hline Charges & \multicolumn{1}{c}{\(\mathcal{Q}=\left(\begin{array}{cccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}\\ \hline-3&-2&-2\end{array}\right]\)} & \(\begin{array}{cccc}N_{1}&1&1&1&1&1\\ 0.477&1.943&1.116\\ 0.584&-0.481&-0.604\end{array}\right)\) \\ \cline{2-10} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{cccc}-0.494&-1.719&1.384\\ -1.719&-0.665&-1.049\\ 1.384&-1.049&-0.629\end{array}\right)\)} \\ \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.288\cdot e^{-1.881i}\,\ \mathcal{V}_{\rm opt}\simeq-1.138\)} \\ \hline \hline VEV, Value & \multicolumn{1}{c}{\(v_{\phi}\simeq 0.
\begin{tabular}{|c|c c|c c|c c|c c|} \hline \multirow{2}{*}{Charges} & \multicolumn{3}{c|}{\({\cal Q}=\left(\begin{array}{ccc|ccc|ccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3}&H&\phi\\ \hline 2&3&1&-7&-8&-1&-2&-5&-1&-1&1\end{array}\right)\)} \\ \hline \multirow{2}{*}{\({\cal O}\left(1\right)\) coeff.} & \multirow{2}{*}{\(y^{l}\simeq\left(\begin{array}{ccc|ccc}-0.424&-0.567&0.897\\ -0.482&-0.787&0.827\\ 0.141&-0.704&0.565\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc|ccc}-1.243&1.096&0.396\\ -0.898&-1.501&-3.224\\ 2.361&2.246&-1.668\end{array}\right)\)} \\ \cline{2-7} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{ccc|ccc}2.311&0.877&-1.491\\ 0.877&-1.746&0.186\\ -1.491&0.186&-0.283\end{array}\right)\)} \\ \hline \multicolumn{7}{|l|}{VEV, Value} & \multicolumn{3}{c|}{\(v_{\phi}\simeq 0.268\cdot e^{-0.166i}\,\ {\cal V}_{\rm opt}\simeq-0.720\)} \\ \hline \hline \multirow{2}{*}{Charges} & \multicolumn{3}{c|}{\({\cal Q}=\left(\begin{array}{ccc|ccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3} &H&\phi\\ \hline-3&-2&-2&4&5&2&1&-1&0&2&-1\end{array}\right)\)} \\ \hline \multirow{2}{*}{\({\cal O}\left(1\right)\) coeff.} & \multirow{2}{*}{\(y^{l}\simeq\left(\begin{array}{ccc|ccc}-1.042&1.484&1.100\\ -0.867&-0.756&1.175\\ 1.026&-0.978&-1.210\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc|ccc}-0.563&-1.584&-0.739\\ -1.078&-0.343&-0.655\\ 0.798&0.888&-0.997\end{array}\right)\)} \\ \cline{2-7} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{ccc|ccc}1.764&-1.266&-1.134\\ -1.266&0.375&-1.738\\ -1.134&-1.738&-0.823\end{array}\right)\)} \\ \hline \multicolumn{7}{|l|}{VEV, Value} & \multicolumn{3}{c|}{\(v_{\phi}\simeq 0.185\cdot e^{-0.236i}\,\ {\cal V}_{\rm opt}\simeq-0.972\)} \\ \hline \hline \multirow{2}{*}{Charges} & \multicolumn{3}{c|}{\({\cal Q}=\left(\begin{array}{ccc|ccc}L_{1}&L_{2}&L_{3}&N_{1}&N_{2}&N_{3}&l_{1}&l_{2}&l_{3} &H&\phi\\ \hline-2&-2&-2&-2&8&1&7&4&2&-1\end{array}\right)\)} \\ \hline \multirow{2}{*}{\({\cal O}\left(1\right)\) coeff.} & \multirow{2}{*}{\(y^{l}\simeq\left(\begin{array}{ccc|ccc}-1.475&0.798&2.082\\ -0.475&0.975&-1.885\\ -1.395&-1.548&-0.236\end{array}\right)\,\ y^{\nu}\simeq\left(\begin{array}{ccc|ccc}1.104&1.783&1.208\\ 0.906&0.454&-0.456\\ 1.244&-0.511&0.218\end{array}\right)\)} \\ \cline{2-7} & & \multicolumn{1}{c}{\(y^{N}\simeq\left(\begin{array}{ccc|ccc}1.836&1.151&-1.546\\ 1.151&2.194&0.400\\ -1.546&0.400&-0.853\end{array}\right)\)} \\ \hline \multicolumn{7}{|l|}{VEV, Value} & \multicolumn{3}{c|}{\(v_{\phi}\simeq 0.185\cdot e^{-0.236i}\,\ {\cal V}_{\rm opt}\simeq-1.596\)} \\ \hline \end{tabular} |
2306.15411 | Counting number fields whose Galois group is a wreath product of
symmetric groups | Let $K$ be a number field and $k\geq 2$ be an integer. Let $(n_1,n_2, \dots,
n_k)$ be a vector with entries $n_i\in \mathbb{Z}_{\geq 2}$. Given a number
field extension $L/K$, we denote by $\widetilde{L}$ the Galois closure of $L$
over $K$. We prove asymptotic lower bounds for the number of number field
extensions $L/K$ with $[L:K]=\prod_{i=1}^k n_i$, such that
$Gal(\widetilde{L}/K)$ is isomorphic to the iterated wreath product of
symmetric groups $S_{n_1}\wr S_{n_2}\wr \dots \wr S_{n_k}$. Here, the number
fields $L$ are ordered according to discriminant
$|\Delta_L|:=|Norm_{K/\mathbb{Q}} (\Delta_{L/K})|$. The results in this paper
are motivated by Malle's conjecture. When $n_1=n_2=\dots =n_k$, these wreath
products arise naturally in the study of arboreal Galois representations
associated to rational functions over $K$. We prove our results by developing
Galois theoretic techniques that have their origins in the study of dynamical
systems. | Hrishabh Mishra, Anwesh Ray | 2023-06-27T12:11:50Z | http://arxiv.org/abs/2306.15411v2 | # Counting number fields whose Galois group is a wreath product of symmetric groups
###### Abstract.
Let \(K\) be a number field and \(k\geq 2\) be an integer. Let \((n_{1},n_{2},\ldots,n_{k})\) be a vector with entries \(n_{i}\in\mathbb{Z}_{\geq 2}\). Given a number field extension \(L/K\), we denote by \(\widetilde{L}\) the Galois closure of \(L\) over \(K\). We prove asymptotic lower bounds for the number of number field extensions \(L/K\) with \([L:K]=\prod_{i=1}^{k}n_{i}\), such that \(\operatorname{Gal}(\widetilde{L}/K)\) is isomorphic to the iterated wreath product of symmetric groups \(S_{n_{1}}\wr S_{n_{2}}\wr\cdots\wr S_{n_{k}}\). Here, the number fields \(L\) are ordered according to discriminant \(|\Delta_{L}|:=|\operatorname{Norm}_{K/\mathbb{Q}}(\mathcal{D}_{L/K})|\). The results in this paper are motivated by Malle's conjecture. When \(n_{1}=n_{2}=\cdots=n_{k}\), these wreath products arise naturally in the study of _arboreal Galois representations_ associated to rational functions over \(K\). We prove our results by developing Galois theoretic techniques that have their origins in the study of dynamical systems.
Key words and phrases: arithmetic statistics, Malle's conjecture, counting number fields, arboreal Galois representations, arithmetic dynamics. 2020 Mathematics Subject Classification: 11R45, 11R29 (Primary), 37P15 (Secondary).
## 1. Introduction
### Background and motivation
In this paper we prove asymptotic lower bounds for the number of number field extensions with prescribed Galois group. The Galois groups considered here are wreath products of symmetric groups; they are natural to consider since they arise as Galois groups of splitting fields of iterates of polynomials defined over number fields. Such Galois extensions arise naturally in the study of dynamical systems, and it is convenient to leverage techniques from arithmetic dynamics.
Let \(K\) be an algebraic number field with \(d=[K:\mathbb{Q}]\). Given a number field extension \(L/K\), set \(\widetilde{L}\) to denote the Galois closure of \(L\) over \(K\). Let \(\mathcal{D}_{L/K}\) denote the relative discriminant, and let \(N_{K/\mathbb{Q}}:K\to\mathbb{Q}\) denote the norm map. We set \(\Delta_{L}:=N_{K/\mathbb{Q}}(\mathcal{D}_{L/K})\). Let \(G\) be a finite transitive subgroup of \(S_{n}\) and \(L/K\) be an extension with \([L:K]=n\). Then, \(\operatorname{Gal}(\widetilde{L}/K)\) acts on the set of embeddings of \(L\) into \(\bar{K}\). For \(X\in\mathbb{R}_{>0}\), consider the function
\[N_{n,K}(X;G):=\#\{L/K\mid L\subset\bar{K},[L:K]=n,\operatorname{Gal}(\widetilde {L}/K)\simeq G,|\Delta_{L}|\leq X\}.\]
Here, the isomorphism \(\operatorname{Gal}(\widetilde{L}/K)\simeq G\) is that of permutation subgroups of \(S_{n}\). Malle [13] made a precise conjecture regarding the asymptotic growth of \(N_{n,K}(X;G)\) as \(X\) goes to infinity. In greater detail, the conjecture predicts that
\[N_{n,K}(X;G)\sim c(K,G)X^{a(G)}(\log X)^{b(K,G)-1},\]
where \(a(G)\) is a constant that only depends on \(G\) and its permutation representation, and \(b(K,G),c(K,G)\) are constants that depend both on \(K\) as well as on \(G\). This precise asymptotic prediction is known as the strong form of Malle's conjecture and has been
proven for various groups \(G\subseteq S_{n}\). For instance, the conjecture has been proven for abelian groups by Maki [14] and Wright [15], for \(S_{n}\) with \(n\leq 5\)[1, 1, 2, 3], for the dihedral group \(D_{4}\)[10], as well as for finite nilpotent groups satisfying additional conditions [11]. For a more detailed and exhaustive list of results, we refer to the discussion in section 3.1. The precise asymptotic predicted by the strong form of Malle's conjecture has been shown not to hold in general: recently, Kluners provided an explicit counterexample to the conjecture, cf. [14]. There is, however, a weak form of the conjecture, which is still widely expected to hold for all permutation groups \(G\). Let us briefly state this conjecture. For a permutation \(g\in S_{n}\), we define its _index_ as follows
\[\operatorname{ind}(g):=n-\text{ the number of orbits of $g$ on $[n]$.}\]
Given a conjugacy class \(C\) of \(G\), let \(\operatorname{ind}(C)\) denote \(\operatorname{ind}(g)\), where \(g\in C\). For any group \(G\neq 1\), set \(G^{\#}:=G\backslash\{1\}\), and set
\[a(G):=\left(\min\{\operatorname{ind}(g)\mid g\in G^{\#}\}\right)^{-1}.\]
**Conjecture 1.1** (Malle's conjecture - weak form).: _Let \(G\) be a transitive permutation group and \(K\) be a number field. Then, for all \(\epsilon>0\), there exist constants \(c_{1}(K,G),c_{2}(K,G;\epsilon)>0\) such that_
\[c_{1}(K,G)X^{a(G)}\leq N_{n,K}(X;G)<c_{2}(K,G;\epsilon)X^{a(G)+\epsilon},\]
_for all large enough values of \(X\)._
Motivated by such developments, asymptotic lower bounds for \(N_{n,K}(X;G)\) for various families of groups \(G\subset S_{n}\) have been proven by various authors, and are listed below.
1. When the inverse Galois problem is solved for \(G\) over \(K\), \(n=|G|\), and \(\iota:G\hookrightarrow S_{n}\) is the regular representation, a general asymptotic lower bound for \(N_{n,K}(X;G)\) is proven by Kluners and Malle, cf. [13, Theorem 4.1].
2. For the identity \(\iota:S_{n}\to S_{n}\), Malle's conjecture predicts that \(N_{n,\mathbb{Q}}(X;S_{n})\sim c_{n}X\), for some constant \(c_{n}>0\) which depends only on \(n\). Malle [14] showed that \(N_{n,\mathbb{Q}}(X;S_{n})\gg X^{1/n}\).
3. Ellenberg and Venkatesh [11] showed that \(N_{n,K}(X;S_{n})\gg X^{\frac{1}{2}+\frac{1}{n^{2}}}\).
4. Bhargava, Shankar and Wang [15] showed that \(N_{n,\mathbb{Q}}(X;S_{n})\gg X^{\frac{1}{2}+\frac{1}{n}}\).
5. For the natural inclusion \(\iota:A_{n}\hookrightarrow S_{n}\), Pierce, Turnage-Butterbaugh and Wood [16] showed that \(N_{n,\mathbb{Q}}(X;A_{n})\gg X^{\frac{n!-2}{n!(4n-2)}}\). For \(n\geq 6\) and \(n\neq 7\), this result is further improved upon by Landesman, Lemke-Oliver and Thorne [13], who have proven that \[N_{n,K}(X;A_{n})\gg\begin{cases}X^{\frac{(n-4)(n^{2}-4)}{8(n^{3}-n^{2})}}& \text{ if $n$ is even,}\\ X^{\frac{(n-7)(n+2)}{8n^{2}}}&\text{ if $n$ is odd.}\end{cases}\]
### Main result
We state our main result. Let \(G_{1}\subseteq S_{m}\) and \(G_{2}\subseteq S_{n}\) be two transitive permutation subgroups. The _wreath product_\(G=G_{1}\wr G_{2}\) is the semidirect product \(G_{1}^{n}\rtimes G_{2}\), where \(G_{2}\) permutes the \(n\) copies of \(G_{1}\). The group \(G\) is seen to be a permutation subgroup of \(S_{mn}\). For \(\vec{n}=(n_{1},\dots,n_{k})\in\mathbb{Z}_{\geq 2}^{k}\), set
\[S(\vec{n}):=S_{n_{1}}\wr S_{n_{2}}\wr\dots\wr S_{n_{k}}.\]
We remark that taking wreath products is an associative operation. Note that \(S(\vec{n})\) is a subgroup of \(S_{N}\), where \(N:=\prod_{i=1}^{k}n_{i}\). Let \([S_{n}]^{k}\) denote the \(k\)-fold wreath power \(S(\underbrace{n,n,n,\ldots,n}_{k\text{-times}})\). These wreath powers are known to arise naturally in the study of arboreal Galois representations. For further details, we refer to section 3.3. Below is the main result of the article, and is proven by refining the method of Ellenberg and Venkatesh [10, section 3].
**Theorem A**.: _Let \(k\geq 2\) and \(\vec{n}=(n_{1},\ldots,n_{k})\in\mathbb{Z}_{\geq 2}^{k}\), denote by \(S(\vec{n})\) the symmetric wreath product \(S_{n_{1}}\wr S_{n_{2}}\wr\cdots\wr S_{n_{k}}\). Let \(K\) be a number field and \(d=[K:\mathbb{Q}]\). We set_
\[B=B(\vec{n}):=\sum_{j=1}^{k-1}\left(\frac{n_{j}-1}{2}\right) \left(\prod_{v=1}^{j}n_{v}\right)+\left(\frac{n_{k}+1}{2}\right)\left(\prod_{ v=1}^{k}n_{v}\right)\text{, and,}\] \[N=N(\vec{n}):=\prod_{v=1}^{k}n_{v}.\]
_Then, we have that_
\[N_{N,K}\left(X;S(\vec{n})\right)\gg\begin{cases}X^{\frac{B-N/2}{N^{2}-N}}& \text{if }B\geq\frac{N^{2}}{4}+N;\\ X^{\frac{(B-N)(N+2)}{N^{3}-N^{2}}}&\text{if }B\leq\frac{N^{2}}{4}+N.\end{cases}\]
In order to get a better feeling for the bounds above, we specialize to the wreath powers \([S_{n}]^{k}\).
**Theorem B**.: _Let \(k,n\in\mathbb{Z}_{\geq 2}\) and let \([S_{n}]^{k}=S(n,n,\ldots,n)\) be the \(k\)-fold wreath power of \(S_{n}\). Then, we have that_
\[N_{n^{k},K}(X;[S_{n}]^{k})\gg X^{\delta_{n,k}},\]
_where_
\[\delta_{n,k}:=\frac{n^{2k}+n^{k}-2}{2\left(n^{3k-1}-n^{2k-1}\right)}.\]
We note that \(\delta_{n,k}\geq\frac{1}{2n^{k-1}}\).
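To see how Theorem B follows from Theorem A, specialize \(\vec{n}=(n,n,\ldots,n)\). In this case \(N=n^{k}\) and

\[B=\left(\frac{n-1}{2}\right)\sum_{j=1}^{k-1}n^{j}+\left(\frac{n+1}{2}\right)n^{k}=\frac{n^{k}-n}{2}+\frac{(n+1)n^{k}}{2}=\frac{n^{k+1}+2n^{k}-n}{2}.\]

The inequality \(B\leq\frac{N^{2}}{4}+N\) is equivalent to \(2(n^{k+1}-n)\leq n^{2k}\), which holds since \(n^{2k}\geq n^{k+2}\geq 2n^{k+1}\) for all \(n,k\geq 2\). Hence the second case of Theorem A applies, and using \(B-N=\frac{n^{k+1}-n}{2}\) it yields the exponent

\[\frac{(B-N)(N+2)}{N^{3}-N^{2}}=\frac{\frac{1}{2}\left(n^{k+1}-n\right)\left(n^{k}+2\right)}{n^{3k}-n^{2k}}=\frac{n^{2k}+n^{k}-2}{2\left(n^{3k-1}-n^{2k-1}\right)}=\delta_{n,k}.\]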
### Organization
Including the introduction, the article consists of a total of \(6\) sections. In section 2, we set up notation that is used throughout the article. In section 3, we discuss Malle's conjecture and various generalities on arboreal Galois representations associated to a polynomial function. In greater detail, we first state the precise form of Malle's conjecture and discuss some of the known results in this area. Then, the precise form of Malle's conjecture for iterated wreath products of symmetric groups is discussed. We discuss the tree structure associated to the roots of iterates of a polynomial function, and the associated Galois representation on this tree. We discuss seminal results of Odoni, which shall be used in establishing the results in this article. In section 4, we discuss various number field counting techniques. We begin the section by discussing a certain refinement of the Hilbert irreducibility theorem due to Cohen [11] and Landesman, Lemke-Oliver and Thorne [12]. We then outline the strategy of Ellenberg and Venkatesh in proving an asymptotic lower bound for \(N_{n,K}(X;S_{n})\). We then discuss some generalizations of this method, which can be applied to various subgroups \(G\subset S_{n}\).
In particular, we state a criterion due to Pierce, Turnage-Butterbaugh and Wood (cf. Theorem 4.4). This criterion does apply to \(K=\mathbb{Q}\), and it can be applied to give a bound that is weaker than that of Theorem A; we refer to Remark 6.2 for further details. Our method relies on a different strategy, and we are able to obtain stronger results. Our key contributions are contained in section 5, in which we make the final preparations for the proof of Theorems A and B. In that section, we apply results of Odoni to construct a polynomial over a function field over \(K\) whose Galois group is \(S(\vec{n})\). We then extend and generalize the strategy of Ellenberg and Venkatesh, outlined in section 4.2, to prove our main result. The proofs of Theorems A and B are given in section 6.
### Outlook
We expect that the methods introduced in this article can be sufficiently generalized to prove similar results for various wreath products of alternating and symmetric groups, i.e., to groups of the form \(G_{1}\wr G_{2}\wr\cdots\wr G_{k}\), where \(G_{k}\) is either a symmetric group, or an alternating group. Similar investigations would potentially lead to many new directions in Galois theory, arithmetic statistics and arithmetic dynamics. It is thus not only the result itself, but the method of proof that is of significance, as it leads to new questions as well as the prospect for further refinement.
### Acknowledgment
The second named author's research is supported by the CRM-Simons postdoctoral fellowship.
## 2. Notation
This short section is devoted to setting up basic notation.
* Let \(a<b\) be integers. We shall denote the set of integers \(m\) that lie in the range \(a\leq m\leq b\) by \([a,b]\).
* Let \(K\) be a number field, with \(d:=[K:\mathbb{Q}]\). We denote by \(\mathcal{O}_{K}\) its ring of integers.
* Let \(\mathbb{A}^{r}\) denote the \(r\)-dimensional affine space, and \[\mathbb{A}^{r}(\mathcal{O}_{K})=\{\alpha=(\alpha_{1},\dots,\alpha_{r})\ |\ \alpha_{i}\in\mathcal{O}_{K},\forall i\}.\]
* Fix an algebraic closure \(\bar{K}\) of \(K\) and set \(\mathrm{G}_{K}:=\mathrm{Gal}(\bar{K}/K)\).
* Given a number field extension \(L/K\), let \(\mathcal{D}_{L/K}\) denote the relative discriminant. Let \(N_{K/\mathbb{Q}}:K\to\mathbb{Q}\) denote the norm map, and set \(\Delta_{L}:=N_{K/\mathbb{Q}}(\mathcal{D}_{L/K})\).
* For \(n\in\mathbb{Z}_{\geq 2}\), let \([n]\) be the set of numbers \(\{1,2,\dots,n\}\); the symmetric group of permutations of \([n]\) shall be denoted by \(S_{n}\).
* Let \(X>0\) be a real variable. Given positive functions \(f(X)\) and \(g(X)\), we write \(f(X)\sim g(X)\) to mean that \[\lim_{X\to\infty}\frac{f(X)}{g(X)}=1.\]
* We write \(f(X)\gg g(X)\), if there is a constant \(C>0\), such that \(f(X)\geq Cg(X)\) for all large enough values of \(X\). In this article, the function \(f(X)\) shall be defined for a given pair \((K,n)\), where \(K\) is a number field and \(n\in\mathbb{Z}_{\geq 2}\). The implied constant \(C\) shall depend on both \(K\) and \(n\).
* We shall write \(f(X)\ll g(X)\) (or \(f(X)=O\left(g(X)\right)\)) to mean that there is a constant \(C>0\) such that \(f(X)\leq Cg(X)\).
* Let \(L\) be a field and \(f(X)\) be a non-zero polynomial with coefficients in \(L\). Let \(L_{f}\) denote the splitting field of \(f(X)\) over \(L\). The Galois group \(\operatorname{Gal}(L_{f}/L)\) is denoted \(\operatorname{Gal}(f(X)/L)\) and is referred to as the Galois group over \(L\) generated by \(f(X)\).
* For a number field \(L/K\), let \(\widetilde{L}\) be the Galois closure of \(L\) over \(K\).
* Let \(\chi:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\hat{\mathbb{Z}}^{\times}\) denote the cyclotomic character.
* For \(\alpha\in K\), set \(\|\alpha\|\) to denote the maximum archimedean valuation of \(\alpha\).
* Let \(f(x)=x^{n}+a_{1}x^{n-1}+\cdots+a_{n}\) be a monic polynomial with coefficients \(a_{i}\in\mathcal{O}_{K}\). The _height_ of \(f\) is defined as follows \[\|f\|:=\max\{\|a_{i}\|^{\frac{1}{i}}\ |\ i\in[1,n]\}.\]
* Throughout, \(C\) or \(C_{i}\) shall refer to a positive constant that depends on \(K\) and \(\vec{n}=(n_{1},\ldots,n_{k})\).
## 3. Arboreal representations and wreath products
We begin by discussing Malle's conjecture and the notion of a wreath product, as well as generalities on arboreal Galois representations.
### Malle's conjecture
Let \(K\) be a number field with ring of integers \(\mathcal{O}_{K}\), and set \(d:=[K:\mathbb{Q}]\). Let \(G\) be a finite transitive subgroup of \(S_{n}\). We shall refer to such a group \(G\) as a _permutation group_. Let \(L/K\) be a number field extension with \([L:K]=n\) and let \(\widetilde{L}\) be the Galois closure of \(L\) over \(K\). Then, \(\operatorname{Gal}(\widetilde{L}/K)\) acts on the set of embeddings of \(L\) into \(\bar{K}\). Two permutation subgroups \(G_{1}\) and \(G_{2}\) of \(S_{n}\) are isomorphic as permutation groups if there is an isomorphism \(G_{1}\xrightarrow{\sim}G_{2}\) induced by a reordering of the set \([n]\). Given a finite group \(G\), an injective homomorphism \(\iota:G\hookrightarrow S_{n}\) is referred to as a permutation representation of \(G\).
Let \(\iota:G\hookrightarrow S_{n}\) be a permutation representation. For \(X\in\mathbb{R}_{>0}\), set
\[N_{n,K}(X):= \#\{L/K\ |\ L\subset\bar{K},[L:K]=n,|\Delta_{L}|\leq X\},\] \[N_{n,K}(X;G,\iota):= \#\{L/K\ |\ L\subset\bar{K},[L:K]=n,\operatorname{Gal}(\widetilde{L}/K) \simeq G,|\Delta_{L}|\leq X\},\]
where the isomorphism of \(\operatorname{Gal}(\widetilde{L}/K)\) with \(G\) is that of permutation groups. When it is clear from the context, we suppress the dependence of \(N_{n,K}(X;G,\iota)\) on the permutation representation \(\iota\) and simply write \(N_{n,K}(X;G):=N_{n,K}(X;G,\iota)\). It is clear that
\[N_{n,K}(X)=\sum_{\iota}N_{n,K}(X;G,\iota),\]
where \(\iota\) ranges over all isomorphism classes of permutation representations \(\iota:G\hookrightarrow S_{n}\).
It is expected that \(N_{n,K}(X)\sim c_{n,K}X\), where \(c_{n,K}\) is a positive constant which depends only on \(n\) and \(K\), cf. [1, p.723]. Asymptotic upper bounds for \(N_{n,K}(X)\) have been established in various works.
1. Schmidt [11] showed that \(N_{n,K}(X)\ll_{n,K}X^{(n+2)/4}\).
2. Ellenberg and Venkatesh [1] improved the above, and showed that \[N_{n,K}(X)\ll_{n,K}X^{\exp(C\sqrt{\log n})},\] where \(C>0\) is an absolute constant.
3. Couveignes [19] showed that \(N_{n,\mathbb{Q}}(X)\ll_{n}X^{c(\log n)^{3}}\), where \(c>0\) is an (unspecified) absolute constant.
4. Lemke-Oliver and Thorne [14] improved upon the above result and showed that \(N_{n,\mathbb{Q}}(X)\ll_{n}X^{c(\log n)^{2}}\). One can take \(c:=1.564\).
When \(\iota:G\hookrightarrow S_{n}\) is a fixed embedding, the conjectured asymptotic for \(N_{n,K}(X;G)\) is due to Malle. For a permutation \(g\in S_{n}\), we define its _index_ as follows
\[\operatorname{ind}(g):=n-\text{ the number of orbits of $g$ on $[n]$.}\]
Given a conjugacy class \(C\) of \(G\), let \(\operatorname{ind}(C)\) denote \(\operatorname{ind}(g)\), where \(g\in C\). For any group \(G\neq 1\), set \(G^{\#}:=G\backslash\{1\}\), and set
\[a(G):=\left(\min\{\operatorname{ind}(g)\mid g\in G^{\#}\}\right)^{-1}.\]
We note that \(a(G)\) depends not only on \(G\), but also on the permutation representation \(G\hookrightarrow S_{n}\). If \(G\) contains a transposition, then \(a(G)=1\); in particular, \(a(S_{n})=1\). It is easy to show that \(a(A_{n})=\frac{1}{2}\), where \(A_{n}\subset S_{n}\) is the alternating subgroup.
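For small permutation groups these quantities can be verified directly by brute force; the following short Python sketch encodes a permutation of \([n]\) as a tuple \(p\) with \(p[i]\) the image of \(i\), and confirms that \(a(S_{4})=1\) and \(a(A_{4})=\frac{1}{2}\).

```python
from itertools import permutations

def ind(p):
    """ind(g) = n - (number of orbits of g on [n]); p[i] is the image of i."""
    n, seen, orbits = len(p), set(), 0
    for start in range(n):
        if start not in seen:
            orbits += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = p[j]
    return n - orbits

def a_inverse(group):
    """Return a(G)^{-1}, i.e. the minimum of ind(g) over non-identity g in G."""
    return min(ind(g) for g in group if ind(g) > 0)

n = 4
S_n = list(permutations(range(n)))
A_n = [g for g in S_n if ind(g) % 2 == 0]      # sign(g) = (-1)^{ind(g)}
print(a_inverse(S_n), a_inverse(A_n))          # prints 1 2, i.e. a(S_4) = 1 and a(A_4) = 1/2
```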
The Galois group \(\operatorname{G}_{K}:=\operatorname{Gal}(\bar{K}/K)\) acts on the set of conjugacy classes of \(G\) via \(\sigma\cdot C:=C^{\chi(\sigma)}\), where \(C\) is a conjugacy class and \(\sigma\in\operatorname{G}_{K}\). With respect to notation above, set
\[b(G,K):=\#\left(\{C\in\operatorname{Cl}(G)\mid\operatorname{ind}(C)=a(G)^{-1} \}/\operatorname{G}_{K}\right).\]
The conjecture stated below is often referred to as the weak form of Malle's conjecture [13, p.316].
**Conjecture 3.1** (Malle's conjecture - weak form).: _Let \(G\) be a transitive permutation group and \(K\) be a number field. Then, for all \(\epsilon>0\), there exist constants \(c_{1}(K,G),c_{2}(K,G;\epsilon)>0\) such that_
\[c_{1}(K,G)X^{a(G)}\leq N_{n,K}(X;G)<c_{2}(K,G;\epsilon)X^{a(G)+\epsilon},\]
_for all large enough values of \(X\)._
Heuristics supporting the above conjecture are discussed in [13, section 7]. In [13], Malle states the following stronger form of the conjecture, predicting the precise asymptotic growth of \(N_{n}(X)\).
**Conjecture 3.2** (Malle's conjecture - strong form).: _Let \(n\geq 2\) be an integer and \(G\) be a transitive permutation subgroup of \(S_{n}\). Then, with respect to notation above,_
\[N_{n,K}(X;G)\sim c(K,G)X^{a(G)}(\log X)^{b(K,G)-1},\]
_where \(c(K,G)>0\) is a constant which depends on \(K\) and the permutation group \(G\)._
The strong form of the conjecture is known in various cases, some of which are listed below.
1. When \(G\) is abelian and \(\iota:G\hookrightarrow S_{|G|}\) be the regular representation, Malle's conjecture is resolved by Maki [13] and Wright [14].
2. When \(G=S_{n}\) for \(n\leq 5\) and \(\iota:S_{n}\to S_{n}\) is the identity, the conjecture is resolved by Davenport, Bhargava, cf. [1, 1, 15, 16].
3. For \(D_{4}\subset S_{4}\), the conjecture is resolved by Cohen, Diaz y Diaz and Oliver [13].
4. Groups of the form \(S_{n}\times A\), where \(n=3,4,5\) and \(|A|\) is coprime to \(2,6,30\) respectively, cf. [14].
5. Let \(G\) be a nontrivial finite nilpotent group and \(p\) be the smallest prime that divides \(\#G\). Assume that all elements in \(G\) that are of order \(p\) are central. Then, under these hypotheses, Koymans and Pagano prove the strong form of the Malle conjecture for the regular representation of \(G\), cf. [12].
The above list is not exhaustive. The strong form is, however, now known to be false: Kluners [13] showed that for the group \(G=C_{3}\wr C_{2}\), the predicted factor of \(\log X\) is too small. The weak version of Malle's conjecture is nevertheless still expected to be true.
### Wreath products
Let \(G_{1}\subseteq S_{m}\) and \(G_{2}\subseteq S_{n}\) be two transitive permutation subgroups. The _wreath product_\(G=G_{1}\wr G_{2}\) is the semidirect product \(G_{1}^{n}\rtimes G_{2}\), where \(G_{2}\) permutes the \(n\) copies of \(G_{1}\). We write \(g=g_{1}\rtimes g_{2}\), where \(g_{1}=(\sigma_{1},\ldots,\sigma_{n})\in G_{1}^{n}\). The group \(G\) acts on \([m]\times[n]=\{(i,j)\mid i\in[m],j\in[n]\}\) as follows
\[g\cdot(i,j):=(\sigma_{j}(i),g_{2}(j)).\]
We thus realize \(G\) as a subgroup of \(S_{mn}\). For \(k\in\mathbb{Z}_{\geq 1}\) we set \([S_{n}]^{k}\) to denote the \(k\)-fold wreath product of \(S_{n}\). In greater detail, for \(k=1\), we set \([S_{n}]^{1}:=S_{n}\) and \([S_{n}]^{k}:=S_{n}\wr[S_{n}]^{k-1}\) for \(k\geq 2\). Implicit to the construction, we have a natural permutation representation \([S_{n}]^{k}\hookrightarrow S_{n^{k}}\). For \(k>j\geq 1\), there is a natural quotient map \([S_{n}]^{k}\to[S_{n}]^{j}\). We let \([S_{n}]^{\infty}\) denote the inverse limit \(\varprojlim_{k}[S_{n}]^{k}\).
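For small \(m\) and \(n\), the embedding \(G_{1}\wr G_{2}\hookrightarrow S_{mn}\) described above can be checked by direct computation. The Python sketch below (an illustration only) realizes every element of \(S_{2}\wr S_{3}\) as a permutation of \([2]\times[3]\) via \(g\cdot(i,j)=(\sigma_{j}(i),g_{2}(j))\), and verifies that the resulting \(2^{3}\cdot 3!=48\) permutations are pairwise distinct and closed under composition, hence form a subgroup of \(S_{6}\) of order \(48\).

```python
from itertools import permutations, product

def wreath_perms(m, n):
    """All elements of S_m wr S_n, each realized as a permutation of [m] x [n]
    via g . (i, j) = (sigma_j(i), tau(j))."""
    S_m = list(permutations(range(m)))
    S_n = list(permutations(range(n)))
    perms = []
    for sigmas in product(S_m, repeat=n):      # one copy of S_m for each j in [n]
        for tau in S_n:
            perms.append(frozenset(((i, j), (sigmas[j][i], tau[j]))
                                   for i in range(m) for j in range(n)))
    return perms

def compose(a, b):
    """The permutation a o b of [m] x [n]; permutations are stored as frozensets of (point, image) pairs."""
    da = dict(a)
    return frozenset((p, da[q]) for p, q in b)

G = wreath_perms(2, 3)
Gset = set(G)
print(len(Gset))                                          # 48 = 2^3 * 3!
print(all(compose(a, b) in Gset for a in G for b in G))   # True: closed under composition
```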
**Proposition 3.3**.: _With respect to notation above, we have that \(a([S_{n}]^{k})=a(S_{n})=1\)._
Proof.: According to [13, Lemma 5.1], \(a(G_{1}\wr G_{2})=a(G_{1})\). In particular,
\[a([S_{n}]^{k})=a(S_{n}\wr[S_{n}]^{k-1})=a(S_{n}).\]
It is easy to see that if \(g\) is a transposition in \(S_{n}\), then, \(\operatorname{ind}(g)=1\). This implies that \(a(S_{n})=1\).
Therefore, the weak version of Malle's conjecture predicts that
\[c_{1}X\leq N_{n,K}(X;[S_{n}]^{k})<c_{2}(\epsilon)X^{1+\epsilon},\]
for any value of \(\epsilon>0\).
### Images of arboreal Galois representations
We introduce the theory of arboreal representations; for further details, we refer to [10]. We also discuss a result of Odoni [10], which shall prove to be crucial in establishing our main results in later sections. In this section, we let \(K\) be a field of characteristic \(0\), with algebraic closure \(\bar{K}\), and \(f(x)\in K[x]\) be a polynomial of degree \(n\geq 2\). For \(k\in\mathbb{Z}_{\geq 1}\), let \(f^{\circ k}:=f\circ f\circ\cdots\circ f\) be the \(k\)-th iterate of \(f\). Thus, \(f^{\circ 1}=f\), \(f^{\circ 2}=f\circ f\), \(f^{\circ 3}=f\circ f\circ f\), and so on. We let \(f^{\circ 0}(x)=x\) denote the identity polynomial. We view these iterates as functions on \(\bar{K}\); thus the family \(\{f^{\circ k}\mid k\in\mathbb{Z}_{\geq 1}\}\) gives rise to a dynamical system on \(\bar{K}\). For \(\alpha\in\bar{K}\), we refer to the set of iterates \(\mathcal{O}_{f}(\alpha):=\{f^{\circ k}(\alpha)\mid k\in\mathbb{Z}_{\geq 0}\}\) as the _orbit of \(\alpha\)_. The point \(\alpha\) is said to be _pre-periodic_ (resp. _wandering_) if \(\mathcal{O}_{f}(\alpha)\) is finite (resp. infinite). Let \(t\in K\) be an arbitrary element. We introduce an assumption which is shown to be satisfied for various examples of interest.
**Assumption 3.4**.: _Let \(t\in K\) and \(f\in K[x]\) be a polynomial. Assume that for all \(k\in\mathbb{Z}_{\geq 1}\), the polynomial \(f^{\circ k}(x)-t\in K[x]\) is irreducible._
For the rest of this section, we shall assume that the Assumption 3.4 is satisfied for the pair \((f,t)\). Note that in this case, \(t\) must be a wandering point. Thus, for all \(k\in\mathbb{Z}_{\geq 0}\), the preimage set
\[f^{-k}(t):=(f^{\circ k})^{-1}(t)=\{z\in\bar{K}\mid f^{\circ k}(z)=t\}\]
has cardinality equal to \(n^{k}\). For \(k=0,1,2\), we find that
\[f^{-0}(t) =\{t\},\] \[f^{-1}(t) =\{z\in\bar{K}\mid f(z)=t\},\] \[f^{-2}(t) =\{z\in\bar{K}\mid f(f(z))=t\},\ldots,\text{etc}.\]
We identify the splitting field \(K_{f^{\circ k}-t}\) with \(K\left(f^{-k}(t)\right)\). Since \(t\) is a wandering point, \(f^{-k}(t)\) and \(f^{-m}(t)\) are disjoint unless \(k=m\). We shall set \(T_{k}(f,t)\) to be the union \(\bigcup_{j\leq k}f^{-j}(t)\), and \(T_{\infty}(f,t)\) is defined as the infinite union \(\bigcup_{j=1}^{\infty}f^{-j}(t)\). The sets \(T_{k}(f,t)\) and \(T_{\infty}(f,t)\) have a natural tree structure. The vertices consist of the elements in \(T_{k}(f,t)\) (resp. \(T_{\infty}(f,t)\)) and the vertices \(\alpha\) and \(\beta\) are connected by an edge if \(f(\alpha)=\beta\). For an explicit example, we refer to [1, p. 417].
Let \(\mathbf{T}_{k}\) be the perfect \(n\)-ary rooted tree with height \(k\). This tree has a single root, every node of height less than \(k\) has \(n\) child nodes and all leaves are of height \(k\). We view \(\mathbf{T}_{k}\) as a subgraph of \(\mathbf{T}_{k+1}\), and we let \(\mathbf{T}_{\infty}\) be the \(n\)-regular tree with infinite height. We identify \(\mathbf{T}_{\infty}\) with the infinite union \(\bigcup_{k\geq 1}\mathbf{T}_{k}\). We have identifications of \(\mathbf{T}_{k}\) with \(T_{k}(f,t)\) and \(\mathbf{T}_{\infty}\) with \(T_{\infty}(f,t)\). The point \(\{t\}\) is the root of \(T_{k}(f,t)\), and the vertices in \(f^{-k}(t)\) are at height \(k\). The automorphism group \(\operatorname{Aut}(\mathbf{T}_{k})\) (resp. \(\operatorname{Aut}(\mathbf{T}_{\infty})\)) is isomorphic to the wreath product \([S_{n}]^{k}\) (resp. \([S_{n}]^{\infty}\)).
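For a concrete pair \((f,t)\), Assumption 3.4 and the count \(\#f^{-k}(t)=n^{k}\) can be probed for small \(k\) with a computer algebra system. The following minimal SymPy sketch uses the illustrative choice \(f(x)=x^{2}+1\) and \(t=3\) (these particular values are an assumption made only for the example): it forms the iterates \(f^{\circ k}(x)-t\), prints their degrees \(n^{k}\), and reports whether each is irreducible over \(\mathbb{Q}\).

```python
from sympy import symbols, Poly, Rational

x = symbols('x')
f = x**2 + 1        # example polynomial of degree n = 2 (illustrative choice)
t = Rational(3)     # example basepoint t (illustrative choice)

def iterate(expr, k):
    """Return the k-th iterate f^{o k}(x) as a SymPy expression, built by repeated substitution."""
    out = x
    for _ in range(k):
        out = expr.subs(x, out)
    return out

for k in range(1, 5):
    p = Poly(iterate(f, k) - t, x, domain='QQ')
    # p.degree() equals n^k; irreducibility of f^{o k}(x) - t is Assumption 3.4 for this k
    print(k, p.degree(), p.is_irreducible)
```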
The Galois group \(\operatorname{G}_{K}:=\operatorname{Gal}(\bar{K}/K)\) acts on \(\mathbf{T}_{k}=T_{k}(f,t)\) and \(\mathbf{T}_{\infty}=T_{\infty}(f,t)\) via automorphisms; let
\[\rho_{f,t,k}:\operatorname{G}_{K}\to\operatorname{Aut}(\mathbf{T}_{k}) \xrightarrow{\sim}[S_{n}]^{k},\]
\[\rho_{f,t,\infty}:\operatorname{G}_{K}\to\operatorname{Aut}(\mathbf{T}_{ \infty})\xrightarrow{\sim}[S_{n}]^{\infty},\]
be the associated Galois representations. We refer to \(\rho_{f,t,\infty}\) as the _arboreal Galois representation_ associated to \((f,t)\), and set
\[\rho_{f,k}:=\rho_{f,0,k}\text{ and }\rho_{f,\infty}:=\rho_{f,0,\infty}\]
for ease of notation. Let \(G_{k}(f,t)\) (resp. \(G_{\infty}(f,t)\)) denote the image of \(\rho_{f,t,k}\) (resp. \(\rho_{f,t,\infty}\)). Note that \(G_{\infty}(f,t)\) is identified with the inverse limit \(\varprojlim_{k}G_{k}(f,t)\).
**Question 3.5**.: _Let \(K\) be a Hilbertian field of characteristic \(0\) (cf. [1] for a characterization of such fields)._
1. _For which pairs_ \((f,t)\) _does one have that_ \([\operatorname{Aut}(\mathbf{T}_{\infty}):G_{\infty}(f,t)]<\infty\)_?_
2. _For which pairs is_ \(\rho_{f,t,\infty}\) _surjective?_
Motivated by the above question, Odoni proved an important result about the surjectivity of arboreal Galois representations in a general setting. Let \(K\) be any field of characteristic \(0\) (not necessarily Hilbertian), let \(t_{0},\ldots,t_{n-1}\) be algebraically independent over \(K\) and \(L\) denote the function field \(K(t_{0},\ldots,t_{n-1})\). In accordance with notation defined above, \(G_{\infty}(f)\) is the image of the arboreal representation
\[\rho_{f,\infty}:\operatorname{G}_{L}\to\operatorname{Aut}(\mathbf{T}_{\infty}).\]
**Theorem 3.6** (Odoni).: _Let \(K\) be a field of characteristic \(0\). With respect to notation above, set \(f:=x^{n}+t_{n-1}x^{n-1}+\cdots+t_{1}x+t_{0}\) and \(L:=K(t_{0},\ldots,t_{n-1})\). Then, the following assertions hold_
1. _the Assumption_ 3.4 _is satisfied for_ \((f,0)\)_,_
2. _the representation_ \[\rho_{f,\infty}:\mathrm{G}_{L}\to\mathrm{Aut}(\mathbf{T}_{\infty})\] _associated to_ \(f\) _and its iterates is surjective. In other words,_ \[\mathrm{Gal}\left(L\left(f^{-k}(0)\right)/L\right)\simeq[S_{n}]^{k}\] _for all_ \(k\in\mathbb{Z}_{\geq 1}\)_._
Proof.: The first part follows from [10, Lemma 2.3], and the second part is [10, Theorem 1].
The above result has the following interesting consequence.
**Corollary 3.7**.: _Let \(K\) be a Hilbertian field of characteristic \(0\). Then, for any \(k\in\mathbb{Z}_{\geq 1}\) and \(n\in\mathbb{Z}_{\geq 2}\), there are infinitely many polynomials \(f(x)\in K[x]\) of degree \(n\), such that \(\mathrm{Gal}\left(K\left(f^{-k}(0)\right)/K\right)\) is isomorphic to \([S_{n}]^{k}\)._
Proof.: The result follows from the Theorem 3.6 and the Hilbert irreducibility theorem.
The above result motivates the following conjecture, cf. [10, Conjecture 7.5].
**Conjecture 3.8** (Odoni's conjecture).: _Let \(K\) be a Hilbertian field of characteristic \(0\). Then, for any \(n\in\mathbb{Z}_{\geq 2}\), there exists a polynomial \(f(x)\in K[x]\) of degree \(n\), such that \(\mathrm{Gal}\left(K\left(f^{-k}(0)\right)/K\right)\) is isomorphic to \([S_{n}]^{k}\) for all \(k\in\mathbb{Z}_{\geq 1}\). In other words, there exists \(f\) for which the associated arboreal representation \(\rho_{f,\infty}\) is surjective._
When \(K\) is a number field and \(n\) is even, or \([K:\mathbb{Q}]\) is odd, the above conjecture is proven by Benedetto and Juul [1]. The proof of the conjecture for all values of \(n\) can be found in a preprint of Specter [11]. Note that the conjecture does not hold for a general Hilbertian field, cf. [1].
## 4. Number field counting techniques
Let \(K\) be a number field and \(d:=[K:\mathbb{Q}]\). In this section, we introduce the general strategy used in establishing asymptotic lower bounds for \(N_{n,K}(X;G)\) for a transitive subgroup \(G\) of \(S_{n}\). We find it beneficial to give a brief account of the technique of Ellenberg and Venkatesh [10], who show that \(N_{n,K}(X;S_{n})\gg X^{\frac{1}{2}+\frac{1}{n^{2}}}\).
### Hilbert irreducibility
In this section, we discuss Cohen's integral version of the Hilbert irreducibility theorem, cf. [10]. We follow the notation and conventions from [1, Appendix]. Let \(K\) be a number field. Given a subset \(S\subset\mathbb{A}^{r}(\mathcal{O}_{K})=\mathcal{O}_{K}^{r}\) and \(e_{1},\ldots,e_{r}\in\mathbb{R}_{>0}\), we set
\[S(Y;e_{1},\ldots,e_{r}):=\left\{(\alpha_{1},\ldots,\alpha_{r})\in S\mid\| \alpha_{i}\|\leq Y^{e_{i}}\right\}.\]
It is easy to see that for \(S=\mathbb{A}^{r}(\mathcal{O}_{K})\), as \(Y\to\infty\),
\[\#\left(\mathbb{A}^{r}(\mathcal{O}_{K})\right)(Y;e_{1},\ldots,e_{r})\sim cY^{d \left(\sum_{i=1}^{r}e_{i}\right)},\]
where \(c>0\) is a constant which depends on \(K\). Let
\[\operatorname{Prob}_{S}(Y;e_{1},\ldots,e_{r}):=\frac{\#S(Y;e_{1},\ldots,e_{r})}{\#\left(\mathbb{A}^{r}(\mathcal{O}_{K})\right)(Y;e_{1},\ldots,e_{r})}\]
be the probability that a point \((\alpha_{1},\ldots,\alpha_{r})\in\mathbb{A}^{r}(\mathcal{O}_{K})\) with coordinates \(\|\alpha_{i}\|\leq Y^{e_{i}}\) lies in \(S\). Let \(\mathbf{L}=K(a_{1},\ldots,a_{r})\) be the function field over \(K\) in the variables \(a_{1},\ldots,a_{r}\), and let \(F(x)\in K[a_{1},\ldots,a_{r}][x]\) be a polynomial such that \(G=\operatorname{Gal}(\mathbf{L}_{F}/\mathbf{L})\). For \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in K^{r}\), let \(F_{\alpha}(x)\in K[x]\) be the specialization of \(F(x)\) under \(a_{i}\mapsto\alpha_{i}\) for \(i=1,\ldots,r\). Set \(G_{\alpha}:=\operatorname{Gal}(K_{F_{\alpha}}/K)\), and let \(S\) be the set of vectors \(\alpha\in\mathbb{A}^{r}(\mathcal{O}_{K})\) for which \(G_{\alpha}=G\).
**Theorem 4.1** (Hilbert irreducibility).: _With respect to notation above, we have that_
\[\lim_{Y\to\infty}\operatorname{Prob}_{S}(Y;e_{1},\ldots,e_{r})=1.\]
Proof.: For the proof of the result, we refer to [1, Theorem A.2].
As an aside, we obtain a quantitative refinement of Corollary 3.7, in the case when \(K\) is a number field. For \(n\in\mathbb{Z}_{\geq 2}\), let \(\operatorname{Poly}_{n}(\mathcal{O}_{K};Y)\) consist of all monic polynomials \(P(x)\) of degree \(n\) with coefficients in \(\mathcal{O}_{K}\) and \(\|P\|\leq Y\). Any such polynomial
\[P(x)=x^{n}+\alpha_{1}x^{n-1}+\alpha_{2}x^{n-2}+\cdots+\alpha_{n-1}x+\alpha_{n}\]
is associated with a tuple
\[\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\left(\mathbb{A}^{n}(\mathcal{O}_{K}) \right)(Y;1,2,3,\ldots,n).\]
Let \(T_{n,k}(Y)\) be the subset of \(P\in\operatorname{Poly}_{n}(\mathcal{O}_{K};Y)\) for which
\[\operatorname{Gal}\left(K(P^{-k}(0))/K\right)\simeq[S_{n}]^{k}.\]
**Theorem 4.2**.: _Let \(n\in\mathbb{Z}_{\geq 2}\) and \(k\in\mathbb{Z}_{\geq 1}\). Then,_
\[\lim_{Y\to\infty}\left(\frac{\#T_{n,k}(Y)}{\#\operatorname{Poly}_{n}(\mathcal{ O}_{K};Y)}\right)=1.\]
_In other words, the set of polynomials \(P(x)\) of degree \(n\) with coefficients in \(\mathcal{O}_{K}\), for which the representation_
\[\rho_{P,k}:\operatorname{G}_{K}\to\operatorname{Aut}(\mathbf{T}_{k})\]
_is surjective, has density \(1\)._
Proof.: The result follows directly from Theorem 3.6 and the Hilbert irreducibility theorem (cf. Theorem 4.1).
### The method of Ellenberg and Venkatesh
Let us first briefly review the construction of Ellenberg and Venkatesh [10], who show that \(N_{n,K}(X;S_{n})\gg X^{\frac{1}{2}+\frac{1}{n^{2}}}\). Given an extension \(L/K\), set
\[\mathcal{O}_{L}^{0}:=\{z\in\mathcal{O}_{L}\mid\operatorname{Tr}_{K}^{L}(z)=0\}.\]
Let \(L/K\) be an extension for which \([L:K]=n\), and suppose that \(z\in\mathcal{O}_{L}^{0}\) is an element such that \(K(z)=L\). Then \(z\) satisfies a polynomial of the form
\[f(x)=x^{n}+\alpha_{2}x^{n-2}+\alpha_{3}x^{n-3}+\cdots+\alpha_{n-1}x+\alpha_{n},\]
with \(\alpha_{i}\in\mathcal{O}_{K}\). Consider the polynomial
\[F(x):=x^{n}+a_{2}x^{n-2}+a_{3}x^{n-3}+\cdots+a_{n-1}x+a_{n}\]
with coefficients in the polynomial ring \(A:=K[a_{2},a_{3},\ldots,a_{n-1},a_{n}]\). Let \(\mathbf{L}\) denote the function field \(K(a_{2},\ldots,a_{n})\), and recall that \(\mathbf{L}_{F}\) is the splitting field of \(F\) over \(\mathbf{L}\). Then, the Galois group \(\operatorname{Gal}(\mathbf{L}_{F}/\mathbf{L})\) is isomorphic to \(S_{n}\) (cf. _loc. cit._ for further details). Specializing the variables \(a_{i}\) to values \(\alpha_{i}\in\mathcal{O}_{K}\) allows one to construct many extensions \(L/K\) of degree \(n\), for which \(\operatorname{Gal}(\widetilde{L}/K)\simeq S_{n}\). Let \(Y>0\) be a real number and set
\[S(Y):=\{z\in\mathcal{O}_{\widetilde{K}}\mid\operatorname{Tr}_{K} ^{K(z)}(z)=0,[K(z):K]=n,\|z\|\leq Y\},\] \[S(Y;S_{n}):=\{z\in S(Y)\mid\operatorname{Gal}\left(\widetilde{K( z)}/K\right)\simeq S_{n}\}.\]
In order to estimate \(\#S(Y)\), one counts the total number of characteristic polynomials
\[f(x)=x^{n}+\alpha_{2}x^{n-2}+\alpha_{3}x^{n-3}+\cdots+\alpha_{n}\]
for which \(\|f\|\leq Y\). In other words, one counts the number of tuples \((\alpha_{2},\ldots,\alpha_{n})\in\mathcal{O}_{K}^{(n-1)}\) such that \(\|\alpha_{i}\|\leq Y^{i}\) for all \(i\). One finds that
\[\#S(Y)\gg\#\left(\mathbb{A}^{n-1}(\mathcal{O}_{K})\right)(Y;2,3,4,\ldots,(n-1),n)\sim cY^{\left(\frac{n(n+1)}{2}-1\right)d},\]
and it follows from the Hilbert irreducibility theorem that the same asymptotic lower bound holds for \(\#S(Y;S_{n})\). Let \(L/K\) be an extension for which \([L:K]=n\) and \(\operatorname{Gal}(\widetilde{L}/K)\simeq S_{n}\). A lower bound for \(\#S(Y;S_{n})\) gives rise to a lower bound for \(N_{n,K}(X;S_{n})\), provided one is able to estimate the number of \(z\in S(Y;S_{n})\) for which \(L=K(z)\). It is shown that
\[M_{L/K}(Y):=\#\{z\in S(Y;S_{n})\mid K(z)\simeq L\}\ll\frac{Y^{(n-1)d}}{| \Delta_{L}|^{\frac{1}{n}}},\]
cf. [10, p. 738, ll.22-27] for further details. The lower bound for \(\#S(Y;S_{n})\) and the upper bound for \(M_{L/K}(Y)\) together show that
\[N_{n,K}(X)\gg X^{\frac{1}{2}+\frac{1}{n^{2}}},\]
where \(X:=Y^{n(n-1)d}\); we refer to _loc. cit._ for further details.
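Let us indicate, very roughly, how the two estimates above combine; the actual argument in _loc. cit._ is more delicate (it splits the fields according to the size of \(|\Delta_{L}|\)), and the sketch below only recovers the main term \(X^{\frac{1}{2}}\). Every \(z\in S(Y;S_{n})\) generates a field \(L=K(z)\) whose discriminant divides the discriminant of the characteristic polynomial of \(z\), so that \(|\Delta_{L}|\ll Y^{n(n-1)d}=X\). Using the trivial bound \(|\Delta_{L}|\geq 1\) in the estimate for \(M_{L/K}(Y)\), we get

\[N_{n,K}(X;S_{n})\geq\frac{\#S(Y;S_{n})}{\max_{L}M_{L/K}(Y)}\gg\frac{Y^{\left(\frac{n(n+1)}{2}-1\right)d}}{Y^{(n-1)d}}=Y^{\frac{n(n-1)}{2}d}=X^{\frac{1}{2}}.\]

The additional saving of \(X^{\frac{1}{n^{2}}}\) is obtained by exploiting the factor \(|\Delta_{L}|^{-\frac{1}{n}}\) in the bound for \(M_{L/K}(Y)\) for the fields with large discriminant.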
### Generalizations to other groups \(G\subset S_{n}\)
Let \(G\) be a subgroup of \(S_{n}\) for which the inverse Galois problem is solved over \(K\). Moreover, suppose that there is a polynomial
\[F(x)=x^{n}+\sum_{i=1}^{n}P_{i}(a_{1},\ldots,a_{r})x^{n-i},\]
with coefficients in the polynomial ring \(A:=\mathcal{O}_{K}[a_{1},\ldots,a_{r}]\), which cuts out a \(G\)-extension of \(\mathbf{L}:=K(a_{1},\ldots,a_{r})\). This is to say that \(F(X)\) is irreducible over \(\mathbf{L}\) and \(\operatorname{Gal}(\mathbf{L}_{F}/\mathbf{L})\simeq G\) as a permutation subgroup of \(S_{n}\). Given a monomial \(a_{1}^{b_{1}}a_{2}^{b_{2}}\ldots a_{r}^{b_{r}}\), define the total degree to be the sum \(\sum_{i=1}^{r}b_{i}\). The total degree of a polynomial is then defined to be the maximal degree of monomials in its support. Let \(D\) denote the maximal total degree of the polynomials \(P_{i}\) in the variables \(\{a_{i}\mid i=1,\ldots,r\}\).
**Definition 4.3**.: _Let \(F(X)\in K[a_{1},\ldots,a_{r}][X]\) be a non-zero polynomial and \(\mathbf{L}_{F}\) be its splitting field over \(\mathbf{L}:=K(a_{1},\ldots,a_{r})\). Then, \(F\) is said to be regular if_
\[\mathbf{L}_{F}\cap\bar{K}(a_{1},\ldots,a_{r})=\mathbf{L}.\]
When \(K=\mathbb{Q}\) and the polynomial \(F\) is regular, Pierce, Turnage-Butterbaugh and Wood [10] establish a general asymptotic lower bound for \(N_{n,\mathbb{Q}}(X;G)\).
**Theorem 4.4** (Pierce, Turnage-Butterbaugh, Wood).: _Let \(n\in\mathbb{Z}_{\geq 2}\) and \(G\) be a transitive subgroup of \(S_{n}\). Suppose that \(f\in\mathbb{Q}[X,a_{1},\ldots,a_{r}]\) is a regular polynomial with total degree \(D\) in the \(\{a_{i}\}\) and degree \(n\) in \(X\), with Galois group \(G\) over \(\mathbb{Q}(a_{1},\ldots,a_{r})\). Then, for \(X\geq 1\) and \(\epsilon>0\), we have that_
\[N_{n}(X;G)\gg_{f,\epsilon}X^{\frac{1-|G|^{-1}}{D(2n-2)}-\epsilon}.\]
Proof.: The above result is [10, Theorem 1.5].
The above result can be used in a large number of situations. All that is required is knowledge of the quantity \(D\). On the other hand, the method of Ellenberg and Venkatesh requires more precise estimates. Applying the above result to \(G=S_{n}\) yields \(N_{n,\mathbb{Q}}(X;S_{n})\gg_{n,\epsilon}X^{\frac{1-(n!)^{-1}}{(2n-2)}-\epsilon}\), which is weaker than the result of Ellenberg and Venkatesh, which asserts that \(N_{n,K}(X)\gg X^{\frac{1}{2}+\frac{1}{n^{2}}}\). The exponent of \(X\) is \(O(\frac{1}{n})\), while that of Ellenberg and Venkatesh is \(\frac{1}{2}+O(\frac{1}{n^{2}})\). This latter estimate is closer to Malle's conjecture, which predicts that \(N_{n,K}(X;S_{n})\sim a_{n,K}X\).
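To get a rough numerical sense of the gap between the two exponents, one can evaluate them for small \(n\); the following short script is purely illustrative (the function names are ours), and simply computes \(\frac{1-(n!)^{-1}}{2n-2}\) (the exponent of Theorem 4.4 for \(G=S_{n}\), \(D=1\)) against \(\frac{1}{2}+\frac{1}{n^{2}}\).

```python
from math import factorial

def ptbw_exponent(n: int) -> float:
    # Exponent from Theorem 4.4 specialized to G = S_n and D = 1.
    return (1 - 1 / factorial(n)) / (2 * n - 2)

def ev_exponent(n: int) -> float:
    # Exponent from the Ellenberg--Venkatesh bound.
    return 0.5 + 1 / n**2

for n in range(3, 8):
    print(n, round(ptbw_exponent(n), 4), round(ev_exponent(n), 4))
```

For instance, for \(n=5\) the two exponents are approximately \(0.124\) and \(0.54\), respectively.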
When the number of variables \(r\) is suitably large, and it is possible to make precise estimates, it shall be more fruitful to generalize the method of Ellenberg and Venkatesh outlined in the previous section. We end the section by recalling an explicit criterion from [10], which shall prove to be especially useful. Let \(G\) be a transitive subgroup of \(S_{n}\) and \(K/\mathbb{Q}\) be a number field and set \(d:=[K:\mathbb{Q}]\). We set
\[\mathcal{F}_{n,K}(X;G):=\{L/K\mid[L:K]=n,\operatorname{Gal}(\widetilde{L}/K) \simeq G,|\Delta_{L}|\leq X\},\]
and note that \(N_{n,K}(X;G):=\#\mathcal{F}_{n,K}(X;G)\). For \(Y>0\), we let
\[\mathcal{P}_{n,K}(Y;G):=\{z\in\mathcal{O}_{\bar{K}}\mid\|z\|\leq Y,[K(z):K]=n, \operatorname{Gal}(\widetilde{K(z)}/K)\simeq G\}.\]
**Proposition 4.5**.: _With respect to the above notation, let \(C>n\) be a constant such that the following asymptotic estimate is satisfied_
\[\#\mathcal{P}_{n,K}(Y;G)\gg Y^{dC}.\]
_Then,_
\[N_{n,K}(X;G)\gg\begin{cases}X^{\frac{C-n/2}{n^{2}-n}}&\text{ if }C\geq\frac{n^{2} }{4}+n;\\ X^{\frac{(C-n)(n+2)}{(n^{3}-n^{2})}}&\text{ otherwise.}\end{cases}\]
Proof.: The above result is [11, Corollary 2.8].
## 5. Wreath products of symmetric groups
Let \(k\in\mathbb{Z}_{\geq 2}\), and \(\vec{n}=(n_{1},\dots,n_{k})\in\mathbb{Z}_{\geq 2}^{k}\). Throughout, we fix a number field \(K\) and let \(d:=[K:\mathbb{Q}]\). Set \(N:=n_{1}n_{2}\dots n_{k}\) and let \(S(\vec{n})\subset S_{N}\) be the wreath product \(S_{n_{1}}\wr S_{n_{2}}\wr\dots\wr S_{n_{k}}\). In particular, the \(k\)-fold wreath product \([S_{n}]^{k}=S(n,n,\dots,n)\subset S_{n^{k}}\). We generalize the method of Ellenberg and Venkatesh to obtain an asymptotic lower bound for \(N_{N,K}(X;S(\vec{n}))\).
**Definition 5.1**.: _Let \(f(x)=\sum_{i=1}^{d}a_{i}(T_{1},\dots,T_{u})x^{i}\) and \(g(x)=\sum_{j=1}^{e}b_{j}(S_{1},\dots,S_{v})x^{j}\), where \(a_{i}(T_{1},\dots,T_{u})\in\mathcal{O}_{K}[T_{1},\dots,T_{u}]\) and \(b_{j}(S_{1},\dots,S_{v})\in\mathcal{O}_{K}[S_{1},\dots,S_{v}]\). We define the polynomial \(g\wr f\in\mathcal{O}_{K}[T_{1},\dots,T_{u},S_{1},\dots,S_{v}][x]\) as follows_
\[g\wr f:=f(g(x))=\sum_{i=1}^{d}a_{i}(T_{1},\dots,T_{u})\left(\sum_{j=1}^{e}b_{ j}(S_{1},\dots,S_{v})x^{j}\right)^{i}.\]
Let \(f_{u,n}(x)=x^{n}+\sum_{v=1}^{n}T_{u,v}x^{n-v}\) and set
\[F:=f_{1,n_{1}}\wr f_{2,n_{2}}\wr\dots\wr f_{k,n_{k}}\in A[x], \tag{5.1}\]
where
\[A:=\mathcal{O}_{K}[\{T_{u,v}\ |\ u\in[1,k],v\in[1,n_{u}]\}]. \tag{5.2}\]
For \(i\in[1,k]\), we set \(g_{i}:=f_{i,n_{i}}\), and for \(j\in[1,k]\), set
\[F_{j}:=g_{1}\wr g_{2}\wr\dots\wr g_{j-1}\wr g_{j}\in A[x],\]
and thus, \(F=F_{k}\). We shall set \(N_{j}:=\prod_{i=1}^{j}n_{i}\) and
\[D_{j}:=\begin{cases}1&\text{ if }j=1;\\ \prod_{i=2}^{j}n_{i}&\text{ if }j>1.\end{cases}\]
We shall set \(D:=D_{k}\). In particular, when \(n=n_{1}=n_{2}=n_{3}=\dots=n_{k}\), we find that \(N_{j}=n^{j}\) and \(D_{j}=n^{j-1}\).
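To illustrate the construction on the smallest example, take \(k=2\) and \(n_{1}=n_{2}=2\). Then \(g_{1}=x^{2}+T_{1,1}x+T_{1,2}\), \(g_{2}=x^{2}+T_{2,1}x+T_{2,2}\), and

\[F=g_{1}\wr g_{2}=g_{2}\left(g_{1}(x)\right)=\left(x^{2}+T_{1,1}x+T_{1,2}\right)^{2}+T_{2,1}\left(x^{2}+T_{1,1}x+T_{1,2}\right)+T_{2,2},\]

which has degree \(N_{2}=4\) in \(x\) and total degree \(D_{2}=2\) in the variables \(T_{u,v}\), in accordance with the degree bounds of Lemma 5.2 below.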
**Lemma 5.2**.: _With respect to notation above, the following assertions hold._
1. _The degree of_ \(F_{j}(x)\) _(as a polynomial in_ \(x\) _with coefficients in_ \(A\)_) is equal to_ \(N_{j}\)_._
2. _The maximum total degree of_ \(F_{j}\) _in the variables_ \(\{T_{u,v}\}\) _is_ \(\leq D_{j}\)_._
_In particular, the result implies that \(F(x)\) is a polynomial of degree equal to \(N\), and the maximum total degree of \(F\) in the variables \(\{T_{u,v}\}\) is \(\leq D\)._
Proof.: We prove the result by induction on \(j\). When \(j=1\), we find that
\[F_{1}=g_{1}=x^{n_{1}}+\sum_{v=1}^{n_{1}}T_{1,v}x^{n_{1}-v}\]
and the assertions are clear. Assume that \(j\geq 2\), and write \(F_{j}=F_{j-1}\wr g_{j}\). We find that
\[F_{j}(x)=g_{j}\left(F_{j-1}(x)\right)=F_{j-1}(x)^{n_{j}}+\sum_{v=1}^{n_{j}}T_{j,v}F_{j-1}(x)^{n_{j}-v}.\]
Therefore, we find that
\[\deg_{x}(F_{j})=n_{j}\deg_{x}(F_{j-1})\text{ and }\deg_{\{T_{u,v}\}}(F_{j})\leq n _{j}\deg_{\{T_{u,v}\}}(F_{j-1}).\]
Thus, the result follows by induction on \(j\).
Odoni's study of arboreal representations led to a criterion for a polynomial to give rise to an \(S(\vec{n})\)-extension. First, we recall a general criterion.
**Theorem 5.3** (Odoni).: _Let \(\mathbf{F}\) be a field of characteristic \(0\), and let \(f(x)\in\mathbf{F}[x]\) be monic and square free, with \(G=\operatorname{Gal}(f(x)/\mathbf{F})\). For \(l\geq 2\) let \(\mathfrak{G}(x)\) be the generic monic polynomial_
\[\mathfrak{G}(x)=x^{l}+a_{1}x^{l-1}+a_{2}x^{l-2}+\cdots+a_{l-1}x+a_{l}.\]
_Then, \(f(\mathfrak{G}(x))\) is squarefree in \(\mathbf{F}(a_{1},\ldots,a_{l})[x]\) and_
\[\operatorname{Gal}(f(\mathfrak{G}(x))/\mathbf{F}(a_{1},\ldots,a_{l}))=G\wr S_{l}.\]
Proof.: The result above is [10, Corollary 8.4].
**Corollary 5.4**.: _Let \(\vec{n}=(n_{1},\ldots,n_{k})\in\mathbb{Z}_{\geq 2}^{k}\), and recall from (5.2) that_
\[A:=\mathcal{O}_{K}[\{T_{u,v}\ |\ u\in[1,k],v\in[1,n_{u}]\}].\]
_Let \(\mathbf{L}\) be the fraction field of \(A\) and_
\[F:=g_{1}\wr g_{2}\wr\cdots\wr g_{k}=f_{1,n_{1}}\wr f_{2,n_{2}}\wr\cdots\wr f _{k,n_{k}}\in A[x],\]
_as in (5.1). Then, the following assertions hold_
1. \(F(x)\) _is an irreducible polynomial of degree_ \(N=n_{1}n_{2}\ldots n_{k}\) _over_ \(\mathbf{L}\)_._
2. \(\operatorname{Gal}(F(x)/\mathbf{L})\) _is isomorphic to_ \(S(\vec{n})=S_{n_{1}}\wr S_{n_{2}}\wr\cdots\wr S_{n_{k}}\)_._
Proof.: The result follows by induction on \(k\), and is an easy consequence of Theorem 5.3.
We identify the \(\mathcal{O}_{K}\)-valued points of \(\mathfrak{X}=\operatorname{Spec}A\) with \(\mathfrak{X}(\mathcal{O}_{K})=\mathcal{O}_{K}^{m}\), where \(m:=\sum_{i=1}^{k}n_{i}\). For \(\alpha=(\alpha_{u,v})\in\mathfrak{X}(\mathcal{O}_{K})\), let \(\Phi_{\alpha}:A[x]\to\mathcal{O}_{K}[x]\) be the map induced by specializing \(T_{u,v}\mapsto\alpha_{u,v}\). We shall set
\[F_{j,\alpha}(x):=\Phi_{\alpha}(F_{j}(x)),g_{j,\alpha}(x):=\Phi_{\alpha}(g_{j}( x))\text{ and }F_{\alpha}(x):=\Phi_{\alpha}(F(x)).\]
Note that \(G_{\alpha}:=\operatorname{Gal}(F_{\alpha}(x)/K)\) is naturally identified with a subgroup of \(G:=\operatorname{Gal}(F(x)/\mathbf{L})\simeq S(\vec{n})\). By the integral version of the Hilbert irreducibility theorem (Theorem 4.1), for most points \(\alpha\in\mathfrak{X}(\mathcal{O}_{K})\), we have that \(G_{\alpha}=G\).
**Definition 5.5**.: _With respect to notation above, we set_
\[\mathfrak{X}(\mathcal{O}_{K};Y):=\left\{\alpha\in\mathfrak{X}(\mathcal{O}_{K})\ |\ \|\alpha_{u,v}\|\leq Y^{v\left(\prod_{i=1}^{u-1}n_{i}\right)}=Y^{vN_{u-1}}\text{ for all coordinates }\alpha_{u,v}\right\},\]
_where it is understood that \(\prod_{i=1}^{0}n_{i}:=1\)._
**Proposition 5.6**.: _Let \(Y>0\) and \(\alpha\in\mathfrak{X}(\mathcal{O}_{K};Y)\). Then, for some suitably large constant \(C_{1,j}>0\), which depends only on \(j\), \(\vec{n}\) and \(K\), we have that_
\[\|F_{j,\alpha}\|\leq C_{1,j}Y.\]
_Setting \(C_{1}:=\max\{C_{1,j}\mid j\in[1,k]\}\), we find that for all \(j\in[1,k]\),_
\[\|F_{j,\alpha}\|\leq C_{1}Y.\]
_In particular, \(\|F_{\alpha}\|\leq C_{1}Y\)._
Proof.: We prove the result by induction on \(j\). The case when \(j=1\) is clear and thus assume that \(j\geq 2\). For ease of notation, set \(f_{i}:=g_{i,\alpha}\) for \(i\in[1,j]\), and write \(F_{j,\alpha}=f_{1}\wr f_{2}\wr\cdots\wr f_{j}\). We express \(F_{j,\alpha}\) as \(H_{1}\wr H_{2}\), where \(H_{1}=F_{j-1,\alpha}=f_{1}\wr f_{2}\wr\cdots\wr f_{j-1}\) and \(H_{2}=f_{j}\). By inductive hypothesis, \(\|H_{1}\|\leq C_{1,j-1}Y\). We write
\[H_{1} =x^{m}+b_{1}x^{m-1}+b_{2}x^{m-2}+\cdots+b_{m-1}x+b_{m}\] \[H_{2} =x^{n}+a_{1}x^{n-1}+a_{2}x^{n-2}+\cdots+a_{n-1}x+a_{n},\]
where \(m=N_{j-1}\) and \(n=n_{j}\). Since \(\|H_{1}\|\leq C_{1,j-1}Y\), we note that \(\|b_{i}\|\leq(C_{1,j-1}Y)^{i}\) for all \(i\). On the other hand, \(a_{i}=\alpha_{j,i}\) and therefore,
\[\|a_{i}\|\leq Y^{iN_{j-1}}=Y^{im}.\]
We find that
\[F_{j,\alpha}=\sum_{i=0}^{n}a_{i}\left(b_{0}x^{m}+b_{1}x^{m-1}+b_{2}x^{m-2}+ \cdots+b_{m-1}x+b_{m}\right)^{n-i},\]
where \(a_{0}:=1\) and \(b_{0}:=1\). Let us consider the expression
\[a_{i}\left(b_{0}x^{m}+b_{1}x^{m-1}+b_{2}x^{m-2}+\cdots+b_{m-1}x+b_{m}\right)^{ n-i},\]
i.e., the \(i\)-th term in the sum above. This polynomial is of degree \(m(n-i)\) in \(x\) and is a sum of monomials of the form
\[a_{i}b_{m-t_{1}}x^{t_{1}}b_{m-t_{2}}x^{t_{2}}\ldots b_{m-t_{n-i}}x^{t_{n-i}}=a_{i}(b_{m-t_{1}}\ldots b_{m-t_{n-i}})x^{\sum_{s}t_{s}}.\]
We find that
\[\|a_{i}(b_{m-t_{1}}\ldots b_{m-t_{n-i}})\|\leq Y^{im}(C_{1,j-1}Y)^{m(n-i)-\sum_{s}t_{s}}\ll Y^{mn-\sum_{s}t_{s}}=Y^{N_{j}-\sum_{s}t_{s}}.\]
Therefore, there is a large enough constant \(C_{1,j}>0\), such that \(\|F_{j,\alpha}\|\leq C_{1,j}Y\). This completes the inductive step.
It is convenient to explicitly state an immediate consequence of the above result.
**Corollary 5.7**.: _Let \(Y>0\), \(\alpha\in\mathfrak{X}(\mathcal{O}_{K};Y)\) and let \(C_{1}\) be the constant defined according to Proposition 5.6. Then, for all \(j\in[1,k]\), we have that_
\[\|F_{j,\alpha}(0)\|\leq C_{2}Y^{N_{j}},\]
_where, \(C_{2}:=C_{1}^{N}\)._
Proof.: The above is a direct consequence of Proposition 5.6. Indeed, \(\|F_{j,\alpha}\|\leq C_{1}Y\) means that the coefficient of \(x^{N_{j}-l}\) in \(F_{j,\alpha}(x)\) has norm at most \((C_{1}Y)^{l}\); taking \(l=N_{j}\) gives \(\|F_{j,\alpha}(0)\|\leq(C_{1}Y)^{N_{j}}\leq C_{1}^{N}Y^{N_{j}}=C_{2}Y^{N_{j}}\).
**Definition 5.8**.: _Let \(\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\) be the set of \(\alpha\in\mathfrak{X}(\mathcal{O}_{K};Y)\), such that \(F_{\alpha}(x)\) is irreducible and_
\[G_{\alpha}:=\operatorname{Gal}(F_{\alpha}(x)/K)\simeq S(\vec{n}).\]
We note that by construction, \(G_{\alpha}\) is identified with a subgroup of \(S(\vec{n})\). From the Hilbert irreducibility theorem, we obtain an asymptotic estimate for \(\#\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\).
**Proposition 5.9**.: _Let \(\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\) be as in Definition 5.8. Then,_
\[\#\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\gg Y^{dA},\]
_where_
\[A=A(\vec{n}):=\sum_{j=1}^{k}\left(\frac{n_{j}+1}{2}\right)\left(\prod_{v=1}^{ j}n_{v}\right).\]
Proof.: We find that
\[\#\mathfrak{X}(\mathcal{O}_{K};Y)\gg\prod_{i=1}^{k}\prod_{j=1}^{n_{i}}Y^{djN_{i-1}}=Y^{d\sum_{i=1}^{k}\frac{n_{i}(n_{i}+1)}{2}N_{i-1}}=Y^{dA},\]
where we recall that
\[N_{i}:=\begin{cases}\prod_{v=1}^{i}n_{v}&\text{ if }i>0;\\ 1&\text{ if }i=0.\end{cases}\]
It then follows from Theorem 4.1 and Corollary 5.4 that
\[\#\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\gg Y^{dA}.\]
We use the asymptotic lower bound for \(\#\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\) to obtain a lower bound for \(\#\mathcal{P}_{N,K}\left(Y;S(\vec{n})\right)\), where, we recall that
\[\mathcal{P}_{N,K}\left(Y;S(\vec{n})\right):=\{z\in\mathcal{O}_{\bar{K}}\ |\ \|z\|\leq Y,[K(z):K]=N,\operatorname{Gal}(\widetilde{K(z)}/K)\simeq S(\vec{n})\}.\]
Next, we define a map
\[\Psi:\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\to\mathcal{P}_{N,K}\left(C_{3} Y;S(\vec{n})\right),\]
where \(C_{3}>0\) is a suitably large constant which we shall specify. It is convenient to first state a basic result that relates the height \(\|z\|\) of \(z\in\mathcal{O}_{\bar{K}}\) to the height of its minimal polynomial.
**Lemma 5.10**.: _Given \(z\in\mathcal{O}_{\bar{K}}\), let \(f(x)\) be the minimal polynomial of \(z\) over \(\mathcal{O}_{K}\), then, \(\|z\|\leq 2\|f\|\)._
Proof.: By way of contradiction, assume that \(\|z\|>2\|f\|\). We set \(u:=\frac{\|f\|}{\|z\|}\) and note that \(u\in(0,1/2)\). Write \(f(x)=x^{n}+a_{1}x^{n-1}+\cdots+a_{n}\). We note that \(\|a_{i}\|\leq\|f\|^{i}\). Thus, we find that
\[\|z\|^{n}\leq\sum_{i=1}^{n}\|f\|^{i}\|z\|^{n-i}=\|z\|^{n}\left(u+u^{2}+\cdots+ u^{n}\right)<\|z\|^{n}\left(\frac{u}{1-u}\right).\]
Therefore, \(\left(\frac{u}{1-u}\right)>1\), i.e., \(u>1/2\), a contradiction.
Recall from Definition 5.8 that for \(\alpha\in\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\), we have that \(F_{\alpha}:=\Phi_{\alpha}(F)\) is irreducible of degree \(N\), and \(G_{\alpha}\simeq S(\vec{n})\). Choose a root \(z_{\alpha}\) of \(F_{\alpha}(x)\). Note that \(\widetilde{K(z_{\alpha})}\) is an extension of \(K\) for which \(G_{\alpha}=\operatorname{Gal}(\widetilde{K(z_{\alpha})}/K)\). Note that the choice of root \(z_{\alpha}\) is non-canonical, however, we make one such choice for each \(\alpha\). Then, we set \(\Psi(\alpha):=z_{\alpha}\).
It follows from Proposition 5.6 that \(\|F_{\alpha}\|\leq C_{1}Y\). Setting \(C_{3}:=2C_{1}\), it follows from Lemma 5.10 that \(\|z_{\alpha}\|\leq C_{3}Y\). Thus, we obtain a map
\[\Psi:\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\to\mathcal{P}_{N,K}\left(C_{3}Y ;S(\vec{n})\right).\]
Note that the map defined above is non-canonical, since it depends on a choice of root \(z_{\alpha}\in\bar{K}\) of \(F_{\alpha}(x)\in K[x]\) for each \(\alpha\in\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\).
In order to better describe the cardinality of the fibers of the above map \(\Psi\), we write it as a composite of two maps which we shall now describe.
**Definition 5.11**.: _Let \(\mathcal{P}^{\prime}_{N,K}(Y;S(\vec{n}))\) be the set of tuples \((z;E_{0},\ldots,E_{k})\) such that \(z\in\mathcal{P}_{N,K}(Y;S(\vec{n}))\) and \(E_{i}\) are a tower of fields_
\[K=E_{k}\subset E_{k-1}\subset E_{k-2}\subset\cdots\subset E_{1}\subset E_{0}= K(z),\]
_such that for all \(j\in[0,k-1]\), there is an isomorphism_
\[\operatorname{Gal}(\widetilde{E_{j}}/K)\simeq S(n_{1},\ldots,n_{k-j}).\]
For \(\alpha\in\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\), set \(g_{i,\alpha}:=\Phi_{\alpha}(g_{i})\), where we recall that \(\Phi_{\alpha}\) is the evaluation map \(A[x]\to\mathcal{O}_{K}[x]\) induced by specializing \(T_{u,v}\mapsto\alpha_{u,v}\), and that
\[g_{i}(x)=x^{n_{i}}+\sum_{v=1}^{n_{i}}T_{i,v}x^{n_{i}-v}.\]
Thus, we have that \(g_{i,\alpha}(x)=x^{n_{i}}+\sum_{v=1}^{n_{i}}\alpha_{i,v}x^{n_{i}-v}\). We define a sequence of elements \(z_{j}(\alpha)\) inductively as follows, \(z_{0}(\alpha)=z_{\alpha}\) and \(z_{j+1}(\alpha)=g_{j+1,\alpha}\left(z_{j}(\alpha)\right)\), as illustrated below
\[z_{\alpha}=z_{0}(\alpha)\xrightarrow{g_{1,\alpha}}z_{1}(\alpha)\xrightarrow{ g_{2,\alpha}}\ldots\xrightarrow{g_{k-1,\alpha}}z_{k-1}(\alpha)\xrightarrow{g_{k, \alpha}}z_{k}(\alpha)=0.\]
Also, we set \(K_{j,\alpha}\) to denote the field \(K(z_{j}(\alpha))\). Thus, we have a tower of fields
\[K=K_{k,\alpha}\subset K_{k-1,\alpha}\subset\cdots\subset K_{0,\alpha}=K(z_{ \alpha}).\]
Note that for \(j\in[0,k-1]\), the isomorphism \(G_{\alpha}\simeq S(\vec{n})\) induces an isomorphism
\[\operatorname{Gal}(\widetilde{K_{j,\alpha}}/K)\simeq S(n_{1},n_{2},\ldots,n_{ k-j}).\]
Let \(\mathcal{B}(Y)\) be the set of tuples \((u_{1},\ldots,u_{k-1})\in\mathcal{O}_{K}^{(k-1)}\) such that \(\|u_{i}\|\leq Y^{N_{i}}\) for all coordinates \(u_{i}\). Therefore, we find that
\[\#\mathcal{B}(Y)\sim Y^{d\left(\sum_{i=1}^{k-1}N_{i}\right)}. \tag{5.3}\]
**Definition 5.12**.: _With respect to notation above, we define the map_
\[\Psi^{\prime}:\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\longrightarrow\mathcal{ P}^{\prime}_{N,K}(C_{3}Y;S(\vec{n}))\times\mathcal{B}(C_{2}Y)\]
_as follows_
\[\Psi^{\prime}(\alpha) :=\left(\Psi^{\prime}_{1}(\alpha),\Psi^{\prime}_{2}(\alpha) \right),\text{ where,}\] \[\Psi^{\prime}_{1}(\alpha) :=\left(z_{\alpha},K_{0,\alpha},K_{1,\alpha},\ldots,K_{k,\alpha} \right);\] \[\Psi^{\prime}_{2}(\alpha) :=\left(F_{1,\alpha}(0),F_{2,\alpha}(0),\ldots,F_{k-1,\alpha}(0) \right).\]
**Proposition 5.13**.: _The map \(\Psi^{\prime}\) above is well defined._
Proof.: It suffices to show that the maps \(\Psi_{1}^{\prime}\) and \(\Psi_{2}^{\prime}\) are well defined. For \(\alpha\in\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\), the polynomial \(F_{\alpha}(x)\) is irreducible and \(\operatorname{Gal}(F_{\alpha}(x)/K)\simeq S(\vec{n})\). It follows from Lemma 5.10 that
\[\|z_{\alpha}\|\leq 2\|F_{\alpha}\|.\]
On the other hand, it follows from Proposition 5.6 that
\[\|F_{\alpha}\|\leq C_{1}Y,\]
and therefore,
\[\|z_{\alpha}\|\leq C_{3}Y,\]
where we recall that \(C_{3}:=2C_{1}\). Since \(\operatorname{Gal}(F_{\alpha}(x)/K)\simeq S(\vec{n})\), it follows that
\[\operatorname{Gal}(\widetilde{K_{j,\alpha}}/K)\simeq S(n_{1},\ldots,n_{k-j}).\]
Therefore, \(\Psi_{1}^{\prime}(\alpha)\) is an element in \(\mathcal{P}_{N,K}^{\prime}(C_{3}Y;S(\vec{n}))\).
Corollary 5.7 asserts that for all \(j\in[1,k]\), we have that
\[\|F_{j,\alpha}(0)\|\leq C_{2}Y^{N_{j}}.\]
Therefore, \(\Psi_{2}^{\prime}(\alpha)\in\mathcal{B}(C_{2}Y)\) and thus the map \(\Psi^{\prime}\) is shown to be well defined.
**Lemma 5.14**.: _For \(\alpha\in\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\) and \(j\in[0,k-1]\), the minimal polynomial of \(z_{\alpha}\) over \(K_{j,\alpha}\) is \(G_{j,\alpha}(x):=F_{j,\alpha}(x)-z_{j}(\alpha)\)._
Proof.: Since \(F_{j,\alpha}(z_{\alpha})=z_{j}(\alpha)\), it is clear that \(z_{\alpha}\) satisfies \(G_{j,\alpha}(x)\). Let \(\gamma_{1},\ldots,\gamma_{t}\) be the roots of \(G_{j,\alpha}(x)\) (with repetitions). Since \(\operatorname{Gal}(F_{\alpha}(x)/K)\simeq S(\vec{n})\), it follows that the roots \(\gamma_{i}\) are all distinct, and \(\operatorname{Gal}(\bar{K}/K_{j,\alpha})\) acts transitively on \(\{\gamma_{1},\ldots,\gamma_{t}\}\). Therefore, \(G_{j,\alpha}(x)\) is irreducible over \(K_{j,\alpha}\), and hence, it is the minimal polynomial of \(z_{\alpha}\) over \(K_{j,\alpha}\).
**Lemma 5.15**.: _Let \(\alpha\) and \(\alpha^{\prime}\) be elements in \(\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\). Assume that for all \(j\in[1,k]\), there is an equality of polynomials \(F_{j,\alpha}(x)=F_{j,\alpha^{\prime}}(x)\). Then, we have that \(\alpha=\alpha^{\prime}\)._
Proof.: We recall that \(F_{j,\alpha}=g_{1,\alpha}\wr g_{2,\alpha}\wr\cdots\wr g_{k,\alpha}\), where
\[g_{i,\alpha}=x^{n_{i}}+\sum_{v=1}^{n_{i}}\alpha_{i,v}x^{n_{i}-v}.\]
It suffices to show that for all \(j\in[1,k]\),
\[g_{j,\alpha}(x)=g_{j,\alpha^{\prime}}(x).\]
We prove this equality by induction on \(j\). For \(j=1\), we have that
\[g_{1,\alpha}(x)=F_{1,\alpha}(x)=F_{1,\alpha^{\prime}}(x)=g_{1,\alpha^{\prime} }(x).\]
Therefore, we assume that \(j\geq 2\). Let \(H(x):=F_{j-1,\alpha}(x)=F_{j-1,\alpha^{\prime}}(x)\), then, we find that
\[g_{j,\alpha}\left(H(x)\right)=g_{j,\alpha}\left(F_{j-1,\alpha}(x)\right)=F_{j,\alpha}(x)=F_{j,\alpha^{\prime}}(x)=g_{j,\alpha^{\prime}}\left(F_{j-1,\alpha ^{\prime}}(x)\right)=g_{j,\alpha^{\prime}}\left(H(x)\right).\]
For ease of notation, set \(a_{v}:=\alpha_{j,v}\) and \(a_{v}^{\prime}:=\alpha_{j,v}^{\prime}\). Write \(H(x)=\sum_{i=0}^{m}b_{i}x^{m-i}\), and thus,
\[0=g_{j,\alpha}\left(H(x)\right)-g_{j,\alpha^{\prime}}\left(H(x)\right)=\sum_{ v=1}^{n_{j}}(a_{v}-a_{v}^{\prime})H(x)^{n_{j}-v}. \tag{5.4}\]
Assume by way of contradiction that for some \(v\in[1,n_{j}]\), we have that \(a_{v}\neq a_{v}^{\prime}\). Let \(v_{0}\) be the minimum value in the range \([1,n_{j}]\) for which \(a_{v_{0}}\neq a_{v_{0}}^{\prime}\). Then, the degree of \(\sum_{v=1}^{n_{j}}(a_{v}-a_{v}^{\prime})H(x)^{n_{j}-v}\) is equal to \((n_{j}-v_{0})\deg H(x)=(n_{j}-v_{0})N_{j-1}\). Since this sum vanishes identically by (5.4), its degree cannot be positive, and therefore \(v_{0}=n_{j}\). However, (5.4) then implies that \((a_{n_{j}}-a^{\prime}_{n_{j}})=0\), a contradiction. This completes the inductive step and the proof of the result.
**Proposition 5.16**.: _The map \(\Psi^{\prime}\) above is an injection._
Proof.: Let \(\alpha\) and \(\alpha^{\prime}\) be such that
\[\Psi^{\prime}(\alpha)=\Psi^{\prime}(\alpha^{\prime}).\]
Since \(\Psi^{\prime}_{1}(\alpha)=\Psi^{\prime}_{1}(\alpha^{\prime})\), it follows that \(z_{\alpha}=z_{\alpha^{\prime}}\) and \(K_{j,\alpha}=K_{j,\alpha^{\prime}}\) for all values of \(j\). Lemma 5.14 asserts that the irreducible polynomial of \(z_{\alpha}\) over \(K_{j,\alpha}\) is
\[F_{j,\alpha}(x)-F_{j,\alpha}(z_{\alpha})=F_{j,\alpha}(x)-z_{j}(\alpha).\]
Therefore, \(\Psi^{\prime}_{1}(\alpha)\) determines all non-constant coefficients of \(F_{j,\alpha}(x)\). The equality \(\Psi^{\prime}_{1}(\alpha)=\Psi^{\prime}_{1}(\alpha^{\prime})\) implies that \(F_{j,\alpha}(x)-F_{j,\alpha^{\prime}}(x)\) is a constant. On the other hand, the equality \(\Psi^{\prime}_{2}(\alpha)=\Psi^{\prime}_{2}(\alpha^{\prime})\) implies that for all \(j\in[1,k-1]\)
\[F_{j,\alpha}(0)=F_{j,\alpha^{\prime}}(0).\]
Therefore, we have shown that for all \(j\in[1,k-1]\),
\[F_{j,\alpha}(x)=F_{j,\alpha^{\prime}}(x).\]
Since \(z_{\alpha}=z_{\alpha^{\prime}}\), it follows that the minimal polynomials \(F_{\alpha}(x)\) and \(F_{\alpha^{\prime}}(x)\) are equal. Hence, for all \(j\in[1,k]\),
\[F_{j,\alpha}(x)=F_{j,\alpha^{\prime}}(x).\]
Lemma 5.15 then implies that \(\alpha=\alpha^{\prime}\), and therefore, \(\Psi^{\prime}\) is injective.
## 6. Proof of the main result
In this short section, we prove the main result of the paper. First, we establish an asymptotic lower bound for \(\mathcal{P}_{N,K}\left(Y;S(\vec{n})\right)\).
**Proposition 6.1**.: _Let \(k\in\mathbb{Z}_{\geq 2}\) and \(\vec{n}=(n_{1},\ldots,n_{k})\in\mathbb{Z}_{\geq 2}^{k}\), we have that_
\[\mathcal{P}_{N,K}\left(Y;S(\vec{n})\right)\gg Y^{dB(\vec{n})},\]
_where_
\[B(\vec{n}):=A(\vec{n})-\sum_{j=1}^{k-1}N_{j}=\sum_{j=1}^{k-1}\left(\frac{n_{j }-1}{2}\right)\left(\prod_{v=1}^{j}n_{v}\right)+\left(\frac{n_{k}+1}{2}\right) \left(\prod_{v=1}^{k}n_{v}\right).\]
Proof.: By Proposition 5.9,
\[\#\mathfrak{X}^{\prime}(\mathcal{O}_{K};Y)\gg Y^{dA},\]
where
\[A=A(\vec{n}):=\sum_{j=1}^{k}\left(\frac{n_{j}+1}{2}\right)\left(\prod_{v=1}^{ j}n_{v}\right).\]
Recall that by (5.3), \(\#\mathcal{B}(Y)\sim Y^{d\left(\sum_{i=1}^{k-1}N_{i}\right)}\). It follows from Proposition 5.16 that \(\Psi^{\prime}\) is injective, and hence,
\[\#\mathcal{P}^{\prime}_{N,K}(Y;S(\vec{n}))\gg\frac{\#\mathfrak{X}^{\prime}( \mathcal{O}_{K};Y)}{\#\mathcal{B}(Y)}\gg Y^{d\left(A(\vec{n})-\sum_{i=1}^{k- 1}N_{i}\right)}=Y^{dB(\vec{n})}.\]
There is a constant \(C\) which depends only on \(\vec{n}\) such that
\[C\#\mathcal{P}_{N,K}(Y;S(\vec{n}))\geq\#\mathcal{P}^{\prime}_{N,K}(Y;S(\vec{n})).\]
By the Galois correspondence, \(C\) can be taken to be the number of towers \(\{1\}=H_{0}\subset H_{1}\subset H_{2}\subset\cdots\subset H_{k-1}\subset H_{k}=S(\vec{n})\) of subgroups \(H_{i}\) of \(S(\vec{n})\). Clearly, \(C\) depends only on \(\vec{n}\). Therefore, we have shown that
\[\mathcal{P}_{N,K}\left(Y;S(\vec{n})\right)\gg Y^{dB(\vec{n})},\]
where
\[B(\vec{n}):=\sum_{j=1}^{k-1}\left(\frac{n_{j}-1}{2}\right)\left(\prod_{v=1}^{ j}n_{v}\right)+\left(\frac{n_{k}+1}{2}\right)\left(\prod_{v=1}^{k}n_{v} \right).\]
Proof of Theorem A.: It is easy to see that
\[B>\left(\frac{n_{k}+1}{2}\right)N\geq\frac{3}{2}N>N.\]
The result is a direct consequence of Proposition 4.5 and Proposition 6.1.
Proof of Theorem B.: Let \(B\) and \(N\) be the quantities defined in the statement of Theorem A. We find that
\[B=\left(\frac{n^{k+1}+2n^{k}-n}{2}\right)\text{ and }N=n^{k}.\]
Since \(n\geq 2\) and \(k\geq 2\), we have \(2n^{k+1}\leq n^{2k}\), and hence \(B\leq\frac{N^{2}}{4}+N\). Therefore, Theorem A asserts that
\[N_{n^{k},K}(X;[S_{n}]^{k})\gg X^{\frac{(B-N)(N+2)}{N^{3}-N^{2}}}=X^{\frac{n^{2k}+n^{k}-2}{2\left(n^{3k-1}-n^{2k-1}\right)}}.\]
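As a purely illustrative sanity check on the algebra above (the function names below are ours and not part of any standard library), one can verify for small values of \(n\) and \(k\) that \(B(\vec{n})=(n^{k+1}+2n^{k}-n)/2\) for \(\vec{n}=(n,\ldots,n)\), and that the exponent \(\frac{(B-N)(N+2)}{N^{3}-N^{2}}\) indeed simplifies to \(\frac{n^{2k}+n^{k}-2}{2(n^{3k-1}-n^{2k-1})}\).

```python
from fractions import Fraction

def B(ns):
    # B(n_1,...,n_k) = sum_{j<k} ((n_j - 1)/2) N_j + ((n_k + 1)/2) N_k,
    # where N_j = n_1 * ... * n_j (cf. Proposition 6.1).
    k, total, Nj = len(ns), Fraction(0), 1
    for j, n in enumerate(ns, start=1):
        Nj *= n
        total += Fraction(n - 1 if j < k else n + 1, 2) * Nj
    return total

for n in (2, 3, 4):
    for k in (2, 3):
        N = n ** k
        assert B([n] * k) == Fraction(n ** (k + 1) + 2 * n ** k - n, 2)
        lhs = (B([n] * k) - N) * (N + 2) / (N ** 3 - N ** 2)
        rhs = Fraction(n ** (2 * k) + n ** k - 2,
                       2 * (n ** (3 * k - 1) - n ** (2 * k - 1)))
        assert lhs == rhs
print("checks passed")
```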
**Remark 6.2**.: _When \(K=\mathbb{Q}\), one may also apply Theorem 4.4 of Pierce, Turnage-Butterbaugh and Wood and obtain that_
\[N_{n^{k},\mathbb{Q}}(X;[S_{n}]^{k})\gg_{\epsilon}X^{\beta_{n,k}-\epsilon},\]
_where \(\beta_{n,k}:=\frac{1-(n!)^{-k}}{n^{k-1}(2n^{k}-2)}\). This lower bound is weaker than the one in Theorem B above, and this is why we did not pursue this particular strategy. Our method is significantly more elaborate and gives better bounds, and applies to all number fields \(K\)._
|
2304.04365 | Reflection Vectors and Quantum Cohomology of Blowups | Let $X$ be a smooth projective variety with a semisimple quantum cohomology.
It is known that the blowup $\operatorname{Bl}_{\rm pt}(X)$ of $X$ at one point
also has semisimple quantum cohomology. In particular, the monodromy group of
the quantum cohomology of $\operatorname{Bl}_{\rm pt}(X)$ is a reflection group.
We found explicit formulas for certain generators of the monodromy group of the
quantum cohomology of $\operatorname{Bl}_{\rm pt}(X)$ depending only on the
geometry of the exceptional divisor. | Todor Milanov, Xiaokun Xia | 2023-04-10T03:26:03Z | http://arxiv.org/abs/2304.04365v2 | # Reflection vectors and quantum cohomology of blowups
###### Abstract.
Let \(X\) be a smooth projective variety with a semi-simple quantum cohomology. It is known that the blow up \(\operatorname{Bl}_{\operatorname{pt}}(X)\) of \(X\) at one point also has semi-simple quantum cohomology. In particular, the monodromy group of the quantum cohomology of \(\operatorname{Bl}_{\operatorname{pt}}(X)\) is a reflection group. We found explicit formulas for certain generators of the monodromy group of the quantum cohomology of \(\operatorname{Bl}_{\operatorname{pt}}(X)\) depending only on the geometry of the exceptional divisor.
Key words and phrases: Frobenius structures, Gromov-Witten invariants, quantum cohomology. 2000 Mathematics Subject Classification: 14N35, 35Q53
###### Contents
* 1 Introduction
* 2 Frobenius manifolds
* 2.1 First and second structure connections
* 2.2 Reflection vectors
* 2.3 The anti-invariant solution
* 2.4 Gromov-Witten theory
* 2.5 Quantum cohomology of \(X\)
* 3 The geometry of blowups
* 3.1 Cohomology of the blowup
* 3.2 \(K\)-ring of the blowup
* 3.3 Quantum cohomology of the blowup
* 3.4 Twisted GW invariants of \(\mathbb{P}^{n-1}\)
* 3.5 The vanishing theorem of Gathmann
* 4 Second structure connection and blowups
* 4.1 Period vectors for the blowup
* 4.2 Quantum cohomology of the blowup
* 5 The exceptional component of a reflection vector
* 5.1 Dependence on the Novikov variables
* 5.2 Canonical coordinates
* 5.3 Twisted periods of \(\mathbb{P}^{n-1}\)
* 5.4 Periods of \(\mathbb{P}^{n-2}\)
* 5.5 Monodromy of the twisted periods of \(\mathbb{P}^{n-1}\)
* 5.6 Isomonodromic analytic continuation
* 5.7 Vanishing of the base component
* 6 Mirror model for the twisted periods
* 6.1 Contour integral
* 6.2 Oscillatory integral
* 6.3 Laplace transform
* A Bending the contour
## 1. Introduction
The notion of a Frobenius manifold was invented by Dubrovin in order to give a geometric formulation of the properties of quantum cohomology (see [8]). Later on, it was discovered by Dubrovin and Zhang (see [9]) that if the Frobenius manifold is in addition semi-simple, then the corresponding Frobenius structure has very important applications to the theory of integrable hierarchies of KdV type. Our main interest is in a certain system of vectors which we call _reflection vectors_, associated to any semi-simple Frobenius manifold. The most general problem is to obtain a classification of the set of reflection vectors corresponding to a semi-simple Frobenius manifold. In fact, the set of reflection vectors contains the information about the monodromy group of the so-called _second structure connection_, so by solving an appropriate classical Riemann-Hilbert problem the reflection vectors uniquely determine the corresponding semi-simple Frobenius structure.
Let us specialize our discussion to the two classes of examples coming respectively from complex and symplectic geometry. Let us discuss first the complex geometry case. Suppose that \(f:Y\to\mathbb{C}\) is a holomorphic function with only isolated critical points. Under some additional technical assumptions one can apply Saito's theory of primitive forms and construct a semi-simple Frobenius structure on the space of miniversal deformations of \(f\). In these settings, the set of reflection vectors is just the set of vanishing cycles embedded in the Milnor ring \(H\) of \(f\) by an appropriate period map (see [1] for some background on singularity theory). Moreover, the reflection vectors span over \(\mathbb{Z}\) a lattice \(\Lambda\subset H\) of rank equal to the Milnor number of \(f\). The Seifert form of \(f\) determines a non-degenerate integral pairing \(\langle\,\ \rangle\) on \(\Lambda\). Let us recall that the symmetrization \((a|b):=\langle a,b\rangle+\langle b,a\rangle\) \((a,b\in\Lambda)\) coincides with the intersection pairing. The set of reflection vectors satisfies all axioms of a root system with root lattice \(\Lambda\) and invariant bi-linear form \((\ |\ )\), except that the form \((\ |\ )\) is in general not positive definite. From this point of view, the set of reflection vectors can be viewed as a generalized root system. Formally, by following the analogy with the definition of Kac-Moody Lie algebras, one can introduce Lie algebras whose real roots are the reflection vectors. It is an interesting question whether the integrable hierarchies of Dubrovin and Zhang can be understood in terms of the representation theory of such Lie algebras. There is also a slightly different viewpoint motivated by Borcherds' notion of a vertex operator algebra. Namely, one can try to understand Dubrovin and Zhang's hierarchies in terms of the lattice vertex algebra \(V_{\Lambda}\) associated with the lattice \(\Lambda\) (and the pairing \((\ |\ )\)). More precisely, as it was discovered in [2] in the settings of simple singularities, it is very important to solve the screening equations, that is, determine all \(w\in V_{\Lambda}\) such that \(e^{\alpha}_{(0)}w=0\) for all reflection vectors \(\alpha\). In some sense, this problem is our main motivation to work on the classification problem for reflection vectors of semi-simple Frobenius manifolds.
Let us discuss the second class of examples. Suppose that \(X\) is a smooth projective variety with semi-simple quantum cohomology. There is no geometric interpretation of the reflection vectors in this case unless the manifold admits a mirror in the sense of Givental. Nevertheless, there is a remarkable conjectural description of the set of reflection vectors partially motivated by the examples from mirror symmetry. Let us give a precise statement. Since the quantum cohomology of \(X\) is semi-simple, the Dolbeault cohomology groups \(H^{p,q}(X)=0\) for \(p\neq q\). In particular, there exists a set of ample line bundles \(L_{1},\dots,L_{r}\), such that, the first Chern classes \(p_{i}:=c_{1}(L_{i})\)\((1\leq i\leq r)\) form a \(\mathbb{Z}\)-basis of \(H^{2}(X,\mathbb{Z})_{\mathrm{t.f.}}\) (the torsion free part). Let \(q_{1},\dots,q_{r}\) be formal variables. Following Iritani we introduce the following map (see [15]):
\[\Psi_{q}:K^{0}(X)\to H^{*}(X,\mathbb{C}) \tag{1}\]
defined by
\[\Psi_{q}(E):=(2\pi)^{\frac{1-n}{2}}\widehat{\Gamma}(X)\cup e^{-\sum_{i=1}^{r}p_{i} \log q_{i}}\cup(2\pi\mathbf{i})^{\deg}(\operatorname{ch}(E)),\]
where \(\deg\) is the complex degree operator, that is, \(\deg(\phi)=i\phi\) for \(\phi\in H^{2i}(X;\mathbb{C})\), \(\mathbf{i}:=\sqrt{-1}\), \(n=\dim_{\mathbb{C}}(X)\), and \(\widehat{\Gamma}(X)=\widehat{\Gamma}(TX)\) is the \(\Gamma\)-class of \(X\). Recall that for a vector bundle \(E\), the \(\Gamma\)-class of \(E\) is defined by
\[\widehat{\Gamma}(E):=\prod_{x:\text{ Chern roots of }E}\Gamma(1+x).\]
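Note that, since the Chern roots are nilpotent elements of \(H^{*}(X,\mathbb{C})\), the product above is a finite sum of cohomology classes. Explicitly, using the standard Taylor expansion

\[\log\Gamma(1+x)=-\gamma x+\sum_{k=2}^{\infty}\frac{(-1)^{k}\zeta(k)}{k}\,x^{k},\]

where \(\gamma\) is the Euler constant, one finds \(\widehat{\Gamma}(E)=1-\gamma c_{1}(E)+\cdots\).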
The map \(\Psi_{q}\) is multivalued with respect to \(q\). If \(q_{i}\) is sufficiently close to \(1\) for all \(1\leq i\leq r\), then we define \(\log q_{i}\) via the principal branch of the logarithm. In general, one has to fix a reference path in \((\mathbb{C}^{*})^{r}\) between \(q=(q_{1},\ldots,q_{r})\) and \((1,\ldots,1)\) and define \(\Psi_{q}\) via analytic continuation along the reference path. Let us introduce the following pairing
\[\langle\,\ \rangle:H^{*}(X,\mathbb{C})\otimes H^{*}(X,\mathbb{C})\to \mathbb{C},\quad\langle a,b\rangle:=\frac{1}{2\pi}\int_{X}a\cup e^{\pi \mathbf{i}\theta}\circ e^{\pi\mathbf{i}\rho}(b), \tag{2}\]
where the linear operators \(\theta\) and \(\rho\) are defined respectively by
\[\theta:H^{*}(X,\mathbb{C})\to H^{*}(X,\mathbb{C}),\quad\theta(\phi):=\frac{n \phi}{2}-\deg(\phi)\]
and
\[\rho:H^{*}(X,\mathbb{C})\to H^{*}(X,\mathbb{C}),\quad\rho(\phi):=c_{1}(TX)\cup\phi.\]
By using the Hirzebruch-Riemann-Roch formula we get
\[\langle\Psi_{q}(E),\Psi_{q}(F)\rangle=\chi(E^{\vee}\otimes F),\]
where \(\chi\) is the holomorphic Euler characteristic, that is, \(\chi(E)=\sum_{i=0}^{\infty}(-1)^{i}\dim H^{i}(X,E)\). We will refer to \(\langle\,\ \rangle\) as the _Euler pairing_. In case the manifold \(X\) admits a mirror model in the sense of Givental, the Euler pairing \(\langle\,\ \rangle\) can be identified with the Seifert form and therefore its symmetrization
\[(a|b):=\langle a,b\rangle+\langle b,a\rangle,\quad a,b\in H^{*}(X,\mathbb{C})\]
corresponds to the intersection pairing. For that reason we refer to the symmetrization \((\ |\ )\) of the Euler pairing as the _intersection pairing_.
Let us denote by \(D^{b}(X)\) the derived category of the category of bounded complexes of coherent sheaves on \(X\), that is, the bounded derived category of \(X\) (see [12] for some background on derived categories). For \(\mathcal{E},\mathcal{F}\in D^{b}(X)\) we denote by \(\mathcal{E}[i]\) the shifted complex: \((\mathcal{E}[i])^{k}:=\mathcal{E}^{k+i}\) and \(\operatorname{Ext}^{k}(\mathcal{E},\mathcal{F}):=\operatorname{Hom}(\mathcal{E },\mathcal{F}[k])\) where \(\operatorname{Hom}\) is computed in the derived category \(D^{b}(X)\). Recall that an object \(\mathcal{E}\in D^{b}(X)\) is called _exceptional_ if
\[\operatorname{Ext}^{k}(\mathcal{E},\mathcal{E})=\begin{cases}\mathbb{C}&\text{ if }k=0,\\ 0&\text{ otherwise.}\end{cases}\]
A sequence of exceptional objects \((\mathcal{E}_{1},\ldots,\mathcal{E}_{N})\) in \(D^{b}(X)\) is called an _exceptional collection_ if \(\operatorname{Ext}^{k}(\mathcal{E}_{i},\mathcal{E}_{j})=0\) for all \(i>j\) and \(k\in\mathbb{Z}\). An exceptional collection \((\mathcal{E}_{1},\ldots,\mathcal{E}_{N})\) is called _full exceptional collection_ if the smallest subcategory of \(D^{b}(X)\) that contains \(\mathcal{E}_{i}\)\((1\leq i\leq N)\) and is closed under isomorphisms, shifts, and cones, is \(D^{b}(X)\) itself.
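For example, by a classical theorem of Beilinson, the line bundles \((\mathcal{O},\mathcal{O}(1),\ldots,\mathcal{O}(n))\) form a full exceptional collection in \(D^{b}(\mathbb{P}^{n})\).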
**Conjecture 1**.: _a) If the quantum cohomology of \(X\) is convergent and semi-simple, then \(\Psi_{q}(\mathcal{E})\) is a reflection vector for every exceptional object \(\mathcal{E}\in D^{b}(X)\)._
_b) If \((\mathcal{E}_{1},\ldots,\mathcal{E}_{N})\) is a full exceptional collection in \(D^{b}(X)\), then the reflection vectors \(\alpha_{i}:=\Psi_{q}(\mathcal{E}_{i})\) (\(1\leq i\leq N\)) generate the set \(\mathcal{R}\) of all reflection vectors in the following sense:_
1. _The reflections_ \(x\mapsto x-(x|\alpha_{i})\alpha_{i}\) _(_\(1\leq i\leq N\)_) generate the monodromy group_ \(W\) _of quantum cohomology._
2. _For every_ \(\alpha\in\mathcal{R}\) _there exists_ \(w\in W\)_, such that,_ \(w(\alpha)\in\{\alpha_{1},\ldots,\alpha_{N}\}\)_._
The definition of the reflection vectors \(\mathcal{R}\) and the monodromy group \(W\) in quantum cohomology requires further background from the theory of Frobenius manifolds and Gromov-Witten theory. The precise definitions will be given in Sections 2.2 and 2.5. Conjecture 1 follows easily from the work of Iritani (see [15]) in the case when \(X\) is a compact weak Fano toric orbifold that admits a full exceptional collection consisting only of line bundles.
Let us state the main result in our paper. Let \(\operatorname{Bl}(X)\) be the blowup of \(X\) at one point, \(\pi:\operatorname{Bl}(X)\to X\) be the blowup map, and \(j:\mathbb{P}^{n-1}\to\operatorname{Bl}(X)\) be the closed embedding that identifies \(\mathbb{P}^{n-1}\) with the exceptional divisor \(E\).
**Theorem 1**.: _If the quantum cohomology of \(X\) is convergent and semi-simple and the quantum cohomology of \(\operatorname{Bl}(X)\) is convergent, then the sheaves \(\mathcal{O}_{E}(k)\coloneqq j_{*}\mathcal{O}_{\mathbb{P}^{n-1}}(k)\), \(k\in\mathbb{Z}\), are reflection vectors for the quantum cohomology of \(\operatorname{Bl}(X)\)._
Several remarks are in order. It is known by the results of Bayer (see [3]) that the blow up at a point preserves semi-simplicity of the quantum cohomology. We believe that our requirement that the quantum cohomology of \(\operatorname{Bl}(X)\) is convergent is redundant, that is, the blow up operation preserves the convergence in quantum cohomology. We will return to this problem in the near future. Furthermore, we would like to prove that Conjecture 1 is compatible with the blow up operation. Let us recall that by the work of Orlov (see [20]) if \((\mathcal{E}_{1},\ldots,\mathcal{E}_{N})\) is a full exceptional collection of \(X\), then \((\mathcal{O}_{E}(-n+1),\ldots,\mathcal{O}_{E}(-1),\pi^{*}\mathcal{E}_{1},\ldots,\pi^{*}\mathcal{E}_{N})\) is a full exceptional collection of \(\operatorname{Bl}(X)\). In order to complete the proof of Conjecture 1 for the blowups at finitely many points we still have to prove that \(\pi^{*}\mathcal{E}_{i}\) are reflection vectors. The methods used in the current paper, after some modification, should be sufficient to do this. Nevertheless, our attempts to modify the arguments were unsuccessful so far, so we left this problem for a separate project too.
## 2. Frobenius manifolds
Following Dubrovin [8], we recall the notion of a Frobenius manifold. Then we proceed by defining the so-called _second structure connection_ and _reflection vectors_ of a semi-simple Frobenius manifold. Finally, we would like to recall the construction of a Frobenius manifold in the settings of Gromov-Witten theory.
### First and second structure connections
Suppose that \(M\) is a complex manifold and \(\mathcal{T}_{M}\) is the sheaf of holomorphic vector fields on \(M\). The manifold \(M\) is equipped with the following structures:
1. A non-degenerate symmetric bilinear pairing \[(\,\cdot\,,\,\cdot\,):\mathcal{T}_{M}\otimes\mathcal{T}_{M}\to\mathcal{O}_{M}.\]
2. A Frobenius multiplication: commutative associative multiplication \[\,\cdot\,\bullet\,:\mathcal{T}_{M}\otimes\mathcal{T}_{M}\to\mathcal{T}_{M},\] such that \((v_{1}\bullet w,v_{2})=(v_{1},w\bullet v_{2})\,\,\,\forall v_{1},v_{2},w\in \mathcal{T}_{M}\).
3. A unit vector field: global vector field \(\mathbf{1}\in\mathcal{T}_{M}(M)\) such that \[\mathbf{1}\bullet v=v,\quad\nabla^{\mathrm{L.C.}}_{v}\mathbf{1}=0,\quad\forall v \in\mathcal{T}_{M},\] where \(\nabla^{\mathrm{L.C.}}\) is the Levi-Civita connection of the pairing \((\cdot,\cdot)\).
4. An Euler vector field \(E\in\mathcal{T}_{M}(M)\) such that \[E(v_{1},v_{2})-([E,v_{1}],v_{2})-(v_{1},[E,v_{2}])=(2-n)(v_{1},v_{2})\] for some constant \(n\in\mathbb{C}\).
Given the data (F1)-(F4), we define the so called _Dubrovin's connection_ on the vector bundle \(TM\times\mathbb{C}^{*}\to M\times\mathbb{C}^{*}\)
\[\nabla_{v} :=\nabla^{\mathrm{L.C.}}_{v}-z^{-1}v\bullet,\quad v\in\mathcal{T} _{M},\] \[\nabla_{\partial/\partial z} :=\frac{\partial}{\partial z}-z^{-1}\theta+z^{-2}E\bullet,\]
where \(z\) is the standard coordinate on \(\mathbb{C}^{*}=\mathbb{C}\smallsetminus\{0\}\), where \(v\bullet\) is an endomorphism of \(\mathcal{T}_{M}\) defined by the Frobenius multiplication by the vector field \(v\), and where \(\theta:\mathcal{T}_{M}\to\mathcal{T}_{M}\) is an \(\mathcal{O}_{M}\)-modules morphism defined by
\[\theta(v):=\nabla^{\mathrm{L.C.}}_{v}(E)-\Big{(}1-\frac{n}{2}\Big{)}v.\]
**Definition 1**.: _The data \(((\cdot,\cdot),\bullet,\mathbf{1},E)\), satisfying the properties \((F1)-(F4)\), is said to be a Frobenius structure of conformal dimension \(n\) if the corresponding Dubrovin connection is flat._
Let us proceed with recalling the notion of 2nd structure connection and reflection vectors. We follow the exposition from [17]. We are going to work only with Frobenius manifolds satisfying the following 4 additional conditions:
* (i) The tangent bundle \(TM\) is trivial and it admits a trivialization given by a frame of global flat vector fields.
* (ii) Recall that the operator \[\mathrm{ad}_{E}:\mathcal{T}_{M}\to\mathcal{T}_{M},\quad v\mapsto[E,v]\] preserves the space of flat vector fields. We require that the restriction of \(\mathrm{ad}_{E}\) to the space of flat vector fields is a diagonalizable operator whose eigenvalues are rational numbers \(\leq 1\).
* (iii) The Frobenius manifold has a _calibration_ (see Section 2.2).
* (iv) The Frobenius manifold has a direct product decomposition \(M=\mathbb{C}\times B\) such that if we denote by \(t_{1}:M\to\mathbb{C}\) the projection along \(B\), then \(dt_{1}\) is a flat 1-form and \(\langle dt_{1},\mathbf{1}\rangle=1\).
Conditions (i)-(iv) are satisfied for all Frobenius manifolds constructed by quantum cohomology or by the primitive forms in singularity theory.
Let us fix a base point \(t^{\circ}\in M\) and a basis \(\{\phi_{i}\}_{i=1}^{N}\) of the reference tangent space \(H:=T_{t^{\circ}}M\). Furthermore, let \((t_{1},\dots,t_{N})\) be a local flat coordinate system on an open neighborhood of \(t^{\circ}\) such that \(\partial/\partial t_{i}=\phi_{i}\) in \(H\). The flat vector fields \(\partial/\partial t_{i}\) (\(1\leq i\leq N\)) extend to global flat vector fields on \(M\) and provide a trivialization of the tangent bundle \(TM\cong M\times H\). This allows us to identify the Frobenius multiplication \(\bullet\) with a family of associative commutative multiplications \(\bullet_{t}:H\otimes H\to H\) depending analytically on \(t\in M\). Modifying our choice of \(\{\phi_{i}\}_{i=1}^{N}\) and \(\{t_{i}\}_{i=1}^{N}\) if necessary we may arrange that
\[E=\sum_{i=1}^{N}((1-d_{i})t_{i}+r_{i})\partial/\partial t_{i},\]
where \(\partial/\partial t_{1}\) coincides with the unit vector field \(\mathbf{1}\) and the numbers
\[0=d_{1}\leq d_{2}\leq\cdots\leq d_{N}=n\]
are symmetric with respect to the middle of the interval \([0,n]\). The number \(n\) is known as the _conformal dimension_ of \(M\). The operator \(\theta:\mathcal{T}_{M}\rightarrow\mathcal{T}_{M}\) defined above preserves the subspace of flat vector fields. It induces a linear operator on \(H\), known to be skew symmetric with respect to the Frobenius pairing \((\,\ )\). Following Givental, we refer to \(\theta\) as the _Hodge grading operator_.
There are two flat connections that one can associate with the Frobenius structure. The first one is the _Dubrovin connection_ - defined above. The Dubrovin connection in flat coordinates takes the following form:
\[\nabla_{\partial/\partial t_{i}} = \frac{\partial}{\partial t_{i}}-z^{-1}\phi_{i}\bullet\] \[\nabla_{\partial/\partial z} = \frac{\partial}{\partial z}+z^{-1}\theta-z^{-2}E\bullet\]
where \(z\) is the standard coordinate on \(\mathbb{C}^{*}=\mathbb{C}-\{0\}\) and for \(v\in\Gamma(M,\mathcal{T}_{M})\) we denote by \(v\bullet:H\to H\) the linear operator of Frobenius multiplication by \(v\).
Our main interest is in the _2nd structure connection_
\[\nabla^{(m)}_{\partial/\partial t_{i}} = \frac{\partial}{\partial t_{i}}+(\lambda-E\bullet_{t})^{-1}( \phi_{i}\bullet_{t})(\theta-m-1/2)\] \[\nabla^{(m)}_{\partial/\partial\lambda} = \frac{\partial}{\partial\lambda}-(\lambda-E\bullet_{t})^{-1}( \theta-m-1/2),\]
where \(m\in\mathbb{C}\) is a complex parameter. This is a connection on the trivial bundle
\[(M\times\mathbb{C})^{\prime}\times H\rightarrow(M\times\mathbb{C})^{\prime},\]
where
\[(M\times\mathbb{C})^{\prime}=\{(t,\lambda)\ |\ \det(\lambda-E\bullet_{t}) \neq 0\}.\]
The hypersurface \(\det(\lambda-E\bullet_{t})=0\) in \(M\times\mathbb{C}\) is called the _discriminant_.
### Reflection vectors
The definition of a reflection vector depends on the choice of a _calibration_\(S(t,z)\) of \(M\). By definition (see [13]), the calibration is an operator series \(S=1+\sum_{k=1}^{\infty}S_{k}(t)z^{-k}\), \(S_{k}\in\operatorname{End}(H)\), such that the Dubrovin's connection has a fundamental solution near \(z=\infty\) of the form
\[S(t,z)z^{\theta}z^{-\rho},\]
where \(\rho\in\operatorname{End}(H)\) is a nilpotent operator, \([\theta,\rho]=-\rho\), and the following symplectic condition holds
\[S(t,z)S(t,-z)^{T}=1,\]
where \({}^{T}\) denotes transposition with respect to the Frobenius pairing.
Let us fix a reference point \((t^{\circ},\lambda^{\circ})\in(M\times\mathbb{C})^{\prime}\) such that \(\lambda^{\circ}\) is a sufficiently large real number. It is easy to check that the following functions provide a fundamental solution to the 2nd structure connection
\[I^{(n)}(t,\lambda)=\sum_{k=0}^{\infty}(-1)^{k}S_{k}(t)\widetilde{I}^{(n+k)}( \lambda),\]
where
\[\widetilde{I}^{(m)}(\lambda)=e^{-\rho\partial_{\lambda}\partial_{m}}\Big{(} \frac{\lambda^{\theta-m-\frac{1}{2}}}{\Gamma(\theta-m+\frac{1}{2})}\Big{)}.\]
The 2nd structure connection has a Fuchsian singularity at infinity, therefore the series \(I^{(n)}(t,\lambda)\) is convergent for all \((t,\lambda)\) sufficiently close to \((t^{\circ},\lambda^{\circ})\). Using the differential equations we extend \(I^{(n)}\) to a multi-valued analytic function on \((M\times\mathbb{C})^{\prime}\). We define the following multi-valued functions taking values in \(H\):
\[I^{(n)}_{a}(t,\lambda):=I^{(n)}(t,\lambda)\,a,\quad a\in H,\quad n\in\mathbb{Z}.\]
These functions will be called _period vectors_. Using analytic continuation we get a representation
\[\pi_{1}((M\times\mathbb{C})^{\prime},(t^{\circ},\lambda^{\circ}))\to\mathrm{ GL}(H) \tag{3}\]
called the _monodromy representation_ of the Frobenius manifold. The image \(W\) of the monodromy representation is called the _monodromy group_.
Under the semi-simplicity assumption, we may choose a generic reference point \(t^{\circ}\) on \(M\), such that the Frobenius multiplication \(\bullet_{t^{\circ}}\) is semi-simple and the operator \(E\bullet_{t^{\circ}}\) has \(N\) pairwise different eigenvalues \(u^{\circ}_{i}\) (\(1\leq i\leq N\)). The fundamental group \(\pi_{1}((M\times\mathbb{C})^{\prime},(t^{\circ},\lambda^{\circ}))\) fits into the following exact sequence
\[\pi_{1}(F^{\circ},\lambda^{\circ})\xrightarrow{\ i_{*}\ }\pi_{1}((M\times\mathbb{C})^{\prime},(t^{\circ},\lambda^{\circ}))\xrightarrow{\ p_{*}\ }\pi_{1}(M,t^{\circ})\longrightarrow 1, \tag{4}\]
where \(p:(M\times\mathbb{C})^{\prime}\to M\) is the projection on \(M\), \(F^{\circ}=p^{-1}(t^{\circ})=\mathbb{C}\smallsetminus\{u^{\circ}_{1},\ldots,u^{ \circ}_{N}\}\) is the fiber over \(t^{\circ}\), and \(i:F^{\circ}\to(M\times\mathbb{C})^{\prime}\) is the natural inclusion. For a proof we refer to [21], Proposition 5.6.4 or [19], Lemma 1.5 C. Using the exact sequence (4) we get that the monodromy group \(W\) is generated by the monodromy transformations representing the lifts of the generators of \(\pi_{1}(M,t^{\circ})\) in \(\pi_{1}((M\times\mathbb{C})^{\prime},(t^{\circ},\lambda^{\circ}))\) and the generators of \(\pi_{1}(F^{\circ},\lambda^{\circ})\).
The image of \(\pi_{1}(F^{\circ},\lambda^{\circ})\) under the monodromy representation is a reflection group that can be described as follows. Let us introduce the bi-linear pairing
\[\langle a,b\rangle=\frac{1}{2\pi}(a,e^{\pi\mathbf{i}\theta}e^{\pi\mathbf{i}\rho}b),\quad a,b\in H. \tag{5}\]
Motivated by the applications to mirror symmetry, we will refer to \(\langle\,\ \rangle\) as the _Euler pairing_. Its symmetrization
\[(a|b):=\langle a,b\rangle+\langle b,a\rangle,\quad a,b\in H, \tag{6}\]
also plays an important role in mirror symmetry and we will refer to it as the _intersection pairing_. It can be checked that the intersection pairing can be expressed in terms of the period vectors as follows:
\[(a|b):=(I^{(0)}_{a}(t,\lambda),(\lambda-E\bullet)I^{(0)}_{b}(t,\lambda)) \tag{7}\]
Using the differential equations of the 2nd structure connection it is easy to prove that the RHS of the above identity is independent of \(t\) and \(\lambda\). However, the fact that the constant must be \((a|b)\) requires some additional work (see [18]).
Suppose now that \(\gamma\) is a simple loop in \(F^{\circ}\), i.e., a loop that starts at \(\lambda^{\circ}\), approaches one of the punctures \(u^{\circ}_{i}\) along a path \(\gamma^{\prime}\) that ends at a point sufficiently close to \(u^{\circ}_{i}\), goes around \(u^{\circ}_{i}\), and finally returns back to \(\lambda^{\circ}\) along \(\gamma^{\prime}\). By analyzing the second structure connection near \(\lambda=u_{i}\) it is easy to see that up to a sign there exists a unique \(a\in H\) such that \((a|a)=2\) and the monodromy transformation of \(a\) along \(\gamma\) is \(-a\). The monodromy transformation representing \(\gamma\in\pi_{1}(F^{\circ},\lambda^{\circ})\) is the reflection defined by the following formula:
\[w_{a}(x)=x-(a|x)a. \tag{8}\]
Let us denote by \(\mathcal{R}\) the set of all \(a\in H\) as above determined by all possible choices of simple loops in \(F^{\circ}\). We refer to the elements of \(\mathcal{R}\) as reflection vectors.
### The anti-invariant solution
Let us recall Givental's \(R\)-matrix (see [13])
\[R(t,z)=1+R_{1}(t)z+R_{2}(t)z^{2}+\cdots,\quad R_{k}(t)\in\operatorname{End}(H)\]
defined for all semi-simple \(t\in M\) as the unique solution to the following system of differential equations:
\[\frac{\partial R}{\partial t_{a}}(t,z) =-R(t,z)\frac{\partial\Psi}{\partial t_{a}}\Psi^{-1}+z^{-1}[\phi_ {a}\bullet,R(t,z)]\] \[\frac{\partial R}{\partial z}(t,z) =-z^{-1}\theta\,R(t,z)-z^{-2}[E\bullet,R(t,z)],\]
where \(\phi_{a}\bullet\) and \(E\bullet\) are the operators of Frobenius multiplication respectively by the flat vector field \(\partial/\partial t_{a}\) and by the Euler vector field \(E\) and \(\Psi\) is the \(N\times N\)-matrix with entries
\[\Psi_{ai}:=\sqrt{\Delta_{i}}\frac{\partial t_{a}}{\partial u_{i}}.\]
Here \(\operatorname{End}(H)\) is identified with the space of \(N\times N\)-matrices via the basis \(\phi_{1},\dots,\phi_{N}\), that is, the entries \(A_{ab}\) of \(A\in\operatorname{End}(H)\) are defined by \(A(\phi_{b})=:\sum_{a}\phi_{a}A_{ab}\).
**Remark 1**.: _The matrix \(\Psi\) up to the normalization factors \(\Delta_{i}\) is the Jacobian matrix of the change from canonical to flat coordinates. The above definition of the \(R\)-matrix differs from the original definition in [13] by conjugation by \(\Psi\), that is, \(\Psi^{-1}R(t,z)\Psi\) is the \(R\)-matrix of Givental._
Suppose that \(\alpha\in H\) is a reflection vector. Let us fix a generic semi-simple point \(t\in M\), such that, the canonical coordinates \(u_{1}(t),\dots,u_{N}(t)\) are pairwise distinct. Let us fix a reference path from \((t^{\circ},\lambda^{\circ})\) to a neighborhood of a point on the discriminant \((t,u_{i}(t))\) for some \(i\), such that, the period vector \(I_{\alpha}^{(-m)}(t,\lambda)\) transforms into \(-I_{\alpha}^{(-m)}(t,\lambda)\) under the analytic continuation in \(\lambda\) along a closed loop around \(u_{i}(t)\). We claim that the period vector has the following expansion at \(\lambda=u_{i}(t)\):
\[I_{\alpha}^{(-m)}(t,\lambda)=\sqrt{2\pi}\sum_{k=0}^{\infty}(-1)^{k}\frac{( \lambda-u_{i})^{k+m-1/2}}{\Gamma(k+m+1/2)}\,R_{k}(t)\,\Psi(t)e_{i}, \tag{9}\]
where \(e_{i}\) is the vector column with \(1\) on the \(i\)th position and \(0\) elsewhere, that is, \(\Psi e_{i}\) is the column representing the vector field \(\sum_{a=1}^{N}\sqrt{\Delta_{i}}\frac{\partial t_{a}}{\partial u_{i}}\phi_{a}= \sqrt{\Delta_{i}}\partial/\partial u_{i}\). Let us prove this claim. Using the differential equations for \(R(t,z)\) it is easy to check that the RHS of the above formula is a solution to the 2nd structure connection. Therefore, the RHS of (9) and the reference path determine a vector \(\alpha\in H\) for which formula (9) holds. Moreover,
\[(I_{\alpha}^{(0)}(t,\lambda),(\lambda-E\bullet)I_{\alpha}^{(0)}(t,\lambda))= \frac{2\pi}{\Gamma(1/2)^{2}}+O(\lambda-u_{i})=2+O(\lambda-u_{i}).\]
Since the LHS is independent of \(\lambda\) and \(u_{i}\), the higher order terms \(O(\lambda-u_{i})\) in the above formula must vanish. This proves that \((\alpha|\alpha)=2\). Finally, since the analytic continuation around \(\lambda=u_{i}\) of the RHS of (9) changes the sign of the RHS, we conclude that \(\alpha\) must be a reflection vector and that (9) is the expansion of the corresponding period vector near the discriminant.
### Gromov-Witten theory
Let us recall some basics on Gromov-Witten (GW) theory. For further details we refer to [16]. Let \(\operatorname{Eff}(X)\subset H_{2}(X,\mathbb{Z})_{\operatorname{t.f.}}\) be the monoid of all homology classes that can be represented in the form \(\sum_{i}k_{i}[C_{i}]\), where \(k_{i}\) is a non-negative integer and \([C_{i}]\) is the fundamental class of a holomorphic curve \(C_{i}\subset X\). The main object in GW theory is the moduli space of stable maps \(\overline{\mathcal{M}}_{g,k}(X,\beta)\), where \(g,k\) are non-negative integers and \(\beta\in\operatorname{Eff}(X)\). By definition, a stable map consists of the following data \((\Sigma,z_{1},\dots,z_{k},f)\):
1. \(\Sigma\) is a Riemann surface with at most nodal singular points.
2. \(z_{1},\ldots,z_{k}\) are _marked points_, that is, smooth pairwise-distinct points on \(\Sigma\).
3. \(f\colon\Sigma\to X\) is a holomorphic map, such that, \(f_{*}[\Sigma]=\beta\).
4. The map is stable, i.e., the automorphism group of \((\Sigma,z_{1},\ldots,z_{k},f)\) is finite.
Two stable maps \((\Sigma,z_{1},\ldots,z_{k},f)\) and \((\Sigma^{\prime},z_{1}^{\prime},\ldots,z_{k}^{\prime},f^{\prime})\) are called equivalent if there exists a biholomorphism \(\phi:\Sigma\to\Sigma^{\prime}\), such that, \(\phi(z_{i})=z_{i}^{\prime}\) and \(f^{\prime}\circ\phi=f\). The moduli space of equivalence classes of stable maps is known to be a proper Deligne-Mumford stack with respect to the etale topology on the category of schemes (see [6]). The corresponding coarse moduli space \(\overline{M}_{g,k}(X,\beta)\) has the structure of a projective variety, which however could be very singular. We have the following natural maps:
\[\operatorname{ev}_{i}\colon\overline{\mathcal{M}}_{g,k}(X,\beta)\to X,\qquad\pi\colon\overline{\mathcal{M}}_{g,k+1}(X,\beta)\to\overline{\mathcal{M}}_{g,k}(X,\beta),\qquad\operatorname{ft}\colon\overline{\mathcal{M}}_{g,k}(X,\beta)\to\overline{\mathcal{M}}_{g,k},\]
where \(\mathrm{ev}_{i}(\Sigma,z_{1},\ldots,z_{k},f)\coloneqq f(z_{i})\), \(\pi\) is the map forgetting the last marked point and contracting all unstable components, and \(\operatorname{ft}\) is the map forgetting the holomorphic map \(f\) and contracting all unstable components. The moduli space has natural orbifold line bundles \(L_{i}\) (\(1\leq i\leq k\)) whose fiber at a point \((\Sigma,z_{1},\ldots,z_{k},f)\) is the cotangent line \(T_{z_{i}}^{*}\Sigma\) equipped with the action of the automorphism group of \((\Sigma,z_{1},\ldots,z_{k},f)\). Let \(\psi_{i}=c_{1}(L_{i})\) be the first Chern class. The most involved construction in GW theory is the construction of the so-called _virtual fundamental cycle_. The construction has as an input the complex \((R\pi_{*}\mathrm{ev}_{k+1}^{*}TX)^{\vee}\) which gives rise to a perfect obstruction theory on \(\overline{\mathcal{M}}_{g,k}(X,\beta)\) relative to \(\overline{\mathcal{M}}_{g,k}\) (see [4, 5]) and yields a homology cycle in \(\overline{M}_{g,k}(X,\beta)\) of complex dimension
\[3g-3+k+n(1-g)+\langle c_{1}(TX),\beta\rangle\]
known as the virtual fundamental cycle. GW invariants are by definition the following correlators:
\[\langle a_{1}\psi^{l_{1}},\ldots,a_{k}\psi^{l_{k}}\rangle_{g,k,\beta}=\int_{[ \overline{M}_{g,k}(X,\beta)]^{\mathrm{virt}}}\mathrm{ev}_{1}^{*}(a_{1})\cdots \mathrm{ev}_{k}^{*}(a_{k})\psi_{1}^{l_{1}}\cdots\psi_{k}^{l_{k}},\]
where \(a_{1},\ldots,a_{k}\in H^{*}(X;\mathbb{C})\) and \(l_{1},\ldots,l_{k}\) are non-negative integers.
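As a simple illustration, take \(X=\mathbb{P}^{2}\) (so \(n=\dim_{\mathbb{C}}X=2\) and \(c_{1}(TX)=3p\), where \(p\) denotes the hyperplane class), \(g=0\), \(k=3\), and \(\beta\) the class of a line. The virtual dimension is
\[3g-3+k+n(1-g)+\langle c_{1}(TX),\beta\rangle=0+2+3=5,\]
so a correlator without descendant classes can be non-zero only if its insertions have total complex degree \(5\); for instance, \(\langle p^{2},p^{2},p\rangle_{0,3,\beta}=1\), reflecting the classical fact that there is a unique line through two general points of \(\mathbb{P}^{2}\).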
Let us recall the so-called _string_ and _divisor_ equations. Suppose that either \(\beta\neq 0\) or \(2g-2+k>0\), then
\[\langle 1,a_{1}\psi^{l_{1}},\ldots,a_{k}\psi^{l_{k}}\rangle_{g,k+1,\beta}=\sum_{i= 1}^{k}\langle a_{1}\psi^{l_{1}},\ldots,a_{i}\psi^{l_{i}-1},\ldots,a_{k}\psi^{l _{k}}\rangle_{g,k,\beta}\]
and if \(p\in H^{2}(X,\mathbb{C})\) is a divisor class, then
\[\langle p,a_{1}\psi^{l_{1}},\ldots,a_{k}\psi^{l_{k}}\rangle_{g,k+ 1,\beta}= \Big{(}\int_{\beta}p\Big{)}\langle a_{1}\psi^{l_{1}},\ldots,a_{k} \psi^{l_{k}}\rangle_{g,k,\beta}+\] \[\sum_{i=1}^{k}\langle a_{1}\psi^{l_{1}},\ldots,p\cup a_{i}\psi^{ l_{i}-1},\ldots,a_{k}\psi^{l_{k}}\rangle_{g,k,\beta},\]
where if \(l_{i}=0\), then we define \(\psi_{i}^{l_{i}-1}:=0\). We will need also the _genus_-\(0\)_topological recursion relations_, that is, if \(k\geq 2\), then the following relation holds:
\[\langle a\psi^{l_{1}+1},b_{1}\psi^{m_{1}},\ldots,b_{k}\psi^{m_{k}} \rangle_{0,k+1,\beta}=\] \[\sum_{i,I,\beta^{\prime}}\langle a\psi^{l},\phi_{i},b_{i_{1}} \psi^{m_{i_{1}}},\ldots,b_{i_{r}}\psi^{m_{i_{r}}}\rangle_{0,2+r,\beta^{\prime} }\langle\phi^{i},b_{j_{1}}\psi^{m_{j_{1}}},\ldots,b_{j_{s}}\psi^{m_{j_{s}}} \rangle_{0,1+s,\beta^{\prime\prime}},\]
where the sum is over all \(1\leq i\leq N\), all subsequences \(I=(i_{1},\ldots,i_{r})\) of the sequence \((1,2,\ldots,k)\) including the empty one, and all homology classes \(\beta^{\prime}\in\operatorname{Eff}(X)\), such that, \(\beta^{\prime\prime}:=\beta-\beta^{\prime}\in\operatorname{Eff}(X)\). The sequence \((j_{1},\ldots,j_{s})\) is obtained from \((1,2,\ldots,k)\) by removing the subsequence \(I\). In particular, \(r+s=k\).
### Quantum cohomology of \(X\)
Let us recall the notation \(L_{i}\), \(p_{i}:=c_{1}(L_{i})\), and \(q_{i}\)\((1\leq i\leq r)\) from the introduction. If \(\beta\in\operatorname{Eff}(X)\), then we put \(q^{\beta}=q_{1}^{(p_{1},\beta)}...q_{r}^{(p_{r},\beta)}\). The group ring \(\mathbb{C}[\operatorname{Eff}(X)]\) is called the _Novikov ring_ of \(X\) and the variables \(q_{i}\) are called _Novikov variables_. Note that the Novikov variables determine an embedding of the Novikov ring into the ring of formal power series \(\mathbb{C}[\![q_{1},\ldots,q_{r}]\!]\). Let us fix a homogeneous basis \(\phi_{i}\)\((1\leq i\leq N)\) of \(H^{*}(X;\mathbb{C})\), such that, \(\phi_{1}=1\) and \(\phi_{i+1}=p_{i}\) for all \(1\leq i\leq r\). Let \(t=(t_{1},\ldots,t_{N})\) be the corresponding linear coordinates. The quantum cup product \(\bullet_{t,q}\) of \(X\) is a deformation of the classical cup product defined by
\[(\phi_{a}\bullet_{t,q}\phi_{b},\phi_{c}):=\langle\phi_{a},\phi_{b},\phi_{c} \rangle_{0,3}(t)=\sum_{m=0}^{\infty}\sum_{\beta\in\operatorname{Eff}(X)}\frac{ q^{\beta}}{m!}\langle\phi_{a},\phi_{b},\phi_{c},t,\ldots,t\rangle_{0,3+m, \beta}.\]
Using the string and divisor equations, we get that the structure constants of the quantum cup product, i.e., the \(3\)-point genus-\(0\) correlators in the above formula, are independent of \(t_{1}\) and are formal power series in the following variables:
\[q_{1}e^{t_{2}},\ldots,q_{r}e^{t_{r+1}},t_{r+2},\ldots,t_{N}.\]
We are going to consider only manifolds \(X\), such that, the quantum cup product is analytic. More precisely, let us allow for the Novikov variables to take values \(0<|q_{i}|<1\)\((1\leq i\leq r)\). Then we will assume that there exists an \(\epsilon>0\), such that, the structure constants of the quantum cup product are convergent power series for all \(t\) satisfying
\[\operatorname{Re}(t_{i})<\log\epsilon\quad(2\leq i\leq r+1),\quad|t_{j}|< \epsilon\quad(r+1<j\leq N). \tag{10}\]
The inequalities (10) define an open subset \(M\subset H^{*}(X;\mathbb{C})\). The main fact about genus-\(0\) GW invariants is that \(M\) has a Frobenius structure, such that, the Frobenius pairing is the Poincare pairing, the Frobenius multiplication is the quantum cup product, the unit \(\mathbf{1}=\phi_{1}\), and the Euler vector field is
\[E=\sum_{i=1}^{N}(1-d_{i})t_{i}\tfrac{\partial}{\partial t_{i}}+\sum_{j=2}^{r+1 }(c_{1}(TX),\phi^{j})\tfrac{\partial}{\partial t_{j}},\]
where \(d_{i}\) is the complex degree of \(\phi_{i}\), that is, \(\phi_{i}\in H^{2d_{i}}(X;\mathbb{C})\) and \(\phi^{j}\)\((1\leq j\leq N)\) is the basis of \(H^{*}(X;\mathbb{C})\) dual to \(\phi_{i}\)\((1\leq i\leq N)\) with respect to the Poincare pairing. Let us point out that in case the quantum cup product is semi-simple we have \(H^{\mathrm{odd}}(X;\mathbb{C})=0\). Otherwise, in general \(M\) has to be given the structure of a super-manifold (see [16]). The conformal dimension of \(M\) is \(n=\dim_{\mathbb{C}}(X)\) and the Hodge grading operator takes the form
\[\theta(\phi_{i})=\left(\tfrac{n}{2}-d_{i}\right)\phi_{i},\quad 1\leq i\leq N. \tag{11}\]
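As an illustration, for \(X=\mathbb{P}^{2}\) with the basis \(\phi_{1}=1\), \(\phi_{2}=p\), \(\phi_{3}=p^{2}\) (so that \(d_{1},d_{2},d_{3}=0,1,2\), \(c_{1}(TX)=3p\), and \(\phi^{2}=p\)), the Euler vector field is
\[E=t_{1}\tfrac{\partial}{\partial t_{1}}-t_{3}\tfrac{\partial}{\partial t_{3}}+3\tfrac{\partial}{\partial t_{2}},\]
and \(\theta\) acts on \(1,p,p^{2}\) with eigenvalues \(1,0,-1\) respectively.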
Finally, there is a standard choice for a calibration \(S(t,q,z)=1+\sum_{k=1}^{\infty}S_{k}(t,q)z^{-k}\), where \(S_{k}(t,q)\in\operatorname{End}(H^{*}(X;\mathbb{C}))\) is defined by
\[(S_{k}(t,q)\phi_{i},\phi_{j})=\sum_{m=0}^{\infty}\sum_{\beta\in\operatorname{ Eff}(X)}\frac{q^{\beta}}{m!}\langle\phi_{i}\psi^{k-1},\phi_{j},t,\dots,t\rangle_{ 0,2+m,\beta}.\]
Suppose that the Frobenius manifold \(M\) corresponding to quantum cohomology is semi-simple. Recalling the construction from Section 2.2 we get the notion of a reflection vector.
## 3. The geometry of blowups
Let \(\operatorname{Bl}(X)\) be the blowup of \(X\) at a point \(\operatorname{pt}\in X\), \(\pi:\operatorname{Bl}(X)\to X\) be the corresponding blowup map, and \(E:=\pi^{-1}(\operatorname{pt})\) the exceptional divisor. Put \(e=c_{1}\big{(}\mathcal{O}(E)\big{)}=\operatorname{P.D.}(E)\). We would like to recall some well known facts about \(\operatorname{Bl}(X)\) which will be used later on.
### Cohomology of the blowup
Using a Mayer-Vietoris sequence argument, it is easy to prove the following two facts:
1. The pullback map \(\pi^{*}:\;H^{*}(X;\mathbb{C})\to H^{*}(\operatorname{Bl}(X);\mathbb{C})\) is injective, so we can view the cohomology \(H^{*}(X;\mathbb{C})\) as a subvector space of \(H^{*}(\operatorname{Bl}(X);\mathbb{C})\).
2. We have a direct sum decomposition \[H^{*}(\operatorname{Bl}(X);\mathbb{C})=H^{*}(X;\mathbb{C})\bigoplus\widetilde{ H}^{*}(E),\] where \(\widetilde{H}^{*}(E)=\bigoplus_{i=1}^{n-1}\mathbb{C}\,e^{i}\) is the reduced cohomology of \(E\).
The Poincare pairing of \(\operatorname{Bl}(X)\) can be computed as follows. Let us choose a basis \(\phi_{i}\)\((1\leq i\leq N)\) of \(H^{*}(X;\mathbb{C})\), such that,
1. \(\phi_{1}=1\) and \(\phi_{N}=\operatorname{P.D.}(\operatorname{pt})\),
2. \(\phi_{i+1}=p_{i}=c_{1}(L_{i})\)\((1\leq i\leq r)\), where \(L_{i}\)\((1\leq i\leq r)\) is a set of ample line bundles on \(X\), such that, \(p_{i}\)\((1\leq i\leq r)\) form a \(\mathbb{Z}\)-basis of \(H^{2}(X,\mathbb{Z})_{\operatorname{t.f.}}\).
**Lemma 1**.: _Let \((\;\;,\;)^{\operatorname{Bl}(X)}\) and \((\;\;,\;)^{X}\) be the Poincare pairings on respectively \(\operatorname{Bl}(X)\) and \(X\). Then we have_
_a) \((\phi_{i},\phi_{j})^{\operatorname{Bl}(X)}=(\phi_{i},\phi_{j})^{X}\) for all \(1\leq i,j\leq N\)._
_b) \((\phi_{i},e^{k})^{\operatorname{Bl}(X)}=0\) for \(1\leq i\leq N\) and \(1\leq k\leq n-1\)._
_c) \(e^{n}=(-1)^{n-1}\phi_{N}\) and \((e^{k},e^{n-k})^{\operatorname{Bl}(X)}=(-1)^{n-1}\)._
Proof.: Parts a) and b) follow easily by the projection formula and Poincare duality. The second part of c) is a consequence of the first part, so we need only to prove that \(e^{n}=(-1)^{n-1}\phi_{N}\). We have \(e^{n}=c\phi_{N}\) for dimension reasons. Note that \(E\cong\mathbb{P}^{n-1}\) and \(\mathcal{O}(E)|_{E}=\mathcal{O}_{\mathbb{P}^{n-1}}(-1)\). Therefore, \(e|_{E}=c_{1}(O(E)|_{E})=-p\), where \(p=c_{1}(\mathcal{O}_{\mathbb{P}^{n-1}}(1))\) is the standard hyperplane class of \(\mathbb{P}^{n-1}\). We get
\[c=\int_{[\operatorname{Bl}(X)]}e^{n}=\int_{[E]}e^{n-1}=\int_{[\mathbb{P}^{n-1} ]}(-p)^{n-1}=(-1)^{n-1}.\]
The ring structure of \(H^{*}(\operatorname{Bl}(X);\mathbb{C})\) with respect to the cup product is also easy to compute. We have
1. \(H^{*}(X;\mathbb{C})\) is a subring of \(H^{*}(\operatorname{Bl}(X);\mathbb{C})\).
2. \(\phi_{1}\cup e^{k}=e^{k}\) and \(\phi_{i}\cup e^{k}=0\), \(2\leq i\leq N\), \(1\leq k\leq n-1\).
3. \[e^{k}\cup e^{l}=\begin{cases}e^{k+l}&\text{if }k+l<n,\\ (-1)^{n-1}\phi_{N}&\text{if }k+l=n,\\ 0&\text{if }k+l>n.\end{cases}\]
Property (1) follows from the fact that pullback in cohomology is a ring homomorphism. The formulas in (3) follow from Lemma 1, part c). Finally, (2) follows from (1), (3) and Lemma 1, part b).
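For example, if \(\dim_{\mathbb{C}}X=2\), i.e., \(\operatorname{Bl}(X)\) is the blowup of a surface at a point, then \(\widetilde{H}^{*}(E)=\mathbb{C}\,e\) and the only new product is
\[e\cup e=(-1)^{n-1}\phi_{N}=-\phi_{N},\]
which recovers the familiar fact that the exceptional curve has self-intersection \(E\cdot E=\int_{\operatorname{Bl}(X)}e^{2}=-1\).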
### \(K\)-ring of the blowup
Let us compute the topological \(K\)-ring of \(\operatorname{Bl}(X)\). We will be interested only in manifolds \(X\), such that, the corresponding quantum cohomology is semi-simple. Such \(X\) are known to have cohomology classes of Hodge type \((p,p)\) only. In particular, \(K^{1}(X)\otimes\mathbb{Q}=0\). To simplify the exposition, let us assume that \(K^{1}(X)=0\). In our arguments below we will have to work with non-compact manifolds. However, in all cases the non-compact manifolds are homotopy equivalent to finite CW-complexes, so we define the corresponding K-groups by taking the K-groups of the corresponding finite CW-complexes, i.e., in the case of non-compact manifolds we choose the homotopical version of topological K-theory.
**Proposition 1**.: _a) The \(K\)-theoretic pullback \(\pi^{*}:K^{0}(X)\to K^{0}(\operatorname{Bl}(X))\) is injective._
_b) We have_
\[K^{0}(\operatorname{Bl}(X))=K^{0}(X)\bigoplus\bigoplus_{j=1}^{n-1}\mathbb{Z} \,\mathcal{O}_{E}^{j},\]
_where \(K^{0}(X)\) is viewed as a subring of \(K^{0}(\operatorname{Bl}(X))\) via the K-theoretic pullback \(\pi^{*}\) and \(\mathcal{O}_{E}:=\mathcal{O}-\mathcal{O}(-E)\) is the structure sheaf of the exceptional divisor._
Proof.: Let \(U\subset X\) be a small open neighborhood of the center of the blowup \(\operatorname{pt}\) and \(V:=X\setminus\{\operatorname{pt}\}\). Note that \(\{U,V\}\) is a covering of \(X\). Put \(\widetilde{U}=\pi^{-1}(U)\) and \(\widetilde{V}:=\pi^{-1}(V)\), then \(\{\widetilde{U},\widetilde{V}\}\) is a covering of \(\operatorname{Bl}(X)\). Let us compare the reduced \(K\)-theoretic Mayer-Vietoris sequences of these two coverings. We have the following commutative diagram:
where the vertical arrows in the above diagram are induced by the K-theoretic pullback \(\pi^{*}\). Note that \(\widetilde{K}^{\operatorname{ev}}(U\setminus\operatorname{pt})=\widetilde{K}^{0}(\widetilde{U}\setminus E)=0\) because \(\widetilde{U}\setminus E\cong U\setminus\operatorname{pt}\) is homotopy equivalent to \(\mathbb{S}^{2n-1}\), the \((2n-1)\)-dimensional sphere. Therefore, the horizontal arrows in the first and the last square of the above diagram are respectively injections and surjections. Furthermore, \(\widetilde{K}^{-1}(U)=\widetilde{K}^{0}(U)=0\) because \(U\) is contractible and \(\widetilde{K}^{-1}(\widetilde{U})=0\) because \(\widetilde{U}\) is homotopy equivalent to \(E\cong\mathbb{P}^{n-1}\). We get that the second vertical arrow is an isomorphism (\(V\cong\widetilde{V}\)) and hence, recalling the 5-lemma or by simple diagram chasing, we get \(\widetilde{K}^{-1}(\operatorname{Bl}(X))\cong\widetilde{K}^{-1}(X)\). By assumption \(\widetilde{K}^{-1}(X)=0\), so \(\widetilde{K}^{-1}(\operatorname{Bl}(X))=0\). A straightforward diagram chase shows that the 4th vertical arrow is injective, i.e., we proved a).
Note that the above diagram yields the following short exact sequence
\[0\longrightarrow K^{0}(X)\xrightarrow{\ \pi^{*}\ }K^{0}(\operatorname{Bl}(X))\xrightarrow{\ |_{E}\ }\widetilde{K}^{0}(E)\longrightarrow 0, \tag{12}\]
where the map \(|_{E}\) is the restriction to the exceptional divisor \(E\cong\mathbb{P}^{n-1}\). The above exact sequence splits because \(\widetilde{K}^{0}(\mathbb{P}^{n-1})\cong\mathbb{Z}^{n-1}\) is a free module. Note that \(\mathcal{O}_{E}|_{E}=\mathcal{O}_{\mathbb{P}^{n-1}}-\mathcal{O}_{\mathbb{P}^{n-1}}(1)\) is the generator of \(\widetilde{K}^{0}(\mathbb{P}^{n-1})\), so part b) follows from the exactness of (12).
Let us compute the K-theoretic product of the torsion free part \(K^{0}(\operatorname{Bl}(X))_{\operatorname{t.f.}}\). Note that \(\pi_{*}(\mathcal{O}_{\operatorname{Bl}(X)})=\mathcal{O}_{X}\). Therefore, \(\pi_{*}\pi^{*}(F)=F\) for every \(F\in K^{0}(X)\). Let us compute \(\mathcal{O}_{E}\otimes\pi^{*}F\) for \(F\in\widetilde{K}^{0}(X)\). The restriction of \(\mathcal{O}_{E}\otimes\pi^{*}F\) to \(E\) is \(0\). Recalling the exact sequence (12) we get \(\mathcal{O}_{E}\otimes\pi^{*}F=\pi^{*}G\) for some \(G\in\widetilde{K}^{0}(X)\). Taking pushforward, we get
\[G=\pi_{*}(\mathcal{O}_{E}\otimes\pi^{*}F)=\pi_{*}(\mathcal{O}_{E})\otimes F= \mathbb{C}_{\operatorname{pt}}\otimes F=\operatorname{rk}(F)\mathbb{C}_{ \operatorname{pt}}=0,\]
where \(\mathbb{C}_{\operatorname{pt}}\) is the skyscraper sheaf on \(X\) and in the 3rd equality we used the exact sequence
\[0\longrightarrow\mathcal{O}_{\operatorname{Bl}(X)}(-E)\longrightarrow\mathcal{O}_{\operatorname{Bl}(X)}\longrightarrow j_{*}\mathcal{O}_{\mathbb{P}^{n-1}}\longrightarrow 0,\]
where \(j:\mathbb{P}^{n-1}\to\operatorname{Bl}(X)\) is the embedding whose image is the exceptional divisor. This sequence implies \(\mathcal{O}_{E}=j_{*}\mathcal{O}_{\mathbb{P}^{n-1}}\Rightarrow\pi_{*}\mathcal{O}_{E}=(\pi\circ j)_{*}\mathcal{O}_{\mathbb{P}^{n-1}}=\mathbb{C}_{\operatorname{pt}}.\) We proved that
\[\mathcal{O}_{E}\otimes\pi^{*}F=0,\quad\forall F\in\widetilde{K}^{0}(X).\]
It remains only to compute \(\mathcal{O}_{E}^{n}\). The restriction of \(\mathcal{O}_{E}^{n}\) to \(E\) is \((1-\mathcal{O}_{\mathbb{P}^{n-1}}(-1))^{n}=0\). Therefore, \(\mathcal{O}_{E}^{n}=\pi^{*}F\). The Chern character \(\operatorname{ch}(\mathcal{O}_{E}^{n})=(1-\exp(-c_{1}(\mathcal{O}(E))))^{n}=e^ {n}=(-1)^{n-1}\phi_{N}\), where we used Lemma 1, part c). On the other hand, the Chern character of the skyscraper sheaf can be computed easily with the Grothendieck-Riemann-Roch formula. Namely, we have
\[\operatorname{ch}(\iota_{*}^{\circ}(\mathbb{C}))\cup\operatorname{td}(X)= \iota_{*}^{\circ}(\operatorname{ch}(\mathbb{C})\cup\operatorname{td}( \operatorname{pt}))=\iota_{*}^{\circ}(1)=\operatorname{P.D.}(\operatorname{pt} )=\phi_{N},\]
where \(\iota^{\circ}:\operatorname{pt}\to X\) is the natural inclusion of the point pt. The above formula implies \(\operatorname{ch}(\mathbb{C}_{\operatorname{pt}})=\phi_{N}\). Comparing with the formula for \(\operatorname{ch}(\mathcal{O}_{E}^{n})\), we get
\[\mathcal{O}_{E}^{n}=(-1)^{n-1}\,\mathbb{C}_{\operatorname{pt}}\mod\ker( \operatorname{ch}).\]
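To illustrate the last two formulas, take \(n=2\): then \(\operatorname{ch}(\mathcal{O}_{E})=1-\exp(-e)=e-\tfrac{1}{2}e^{2}\) and
\[\operatorname{ch}(\mathcal{O}_{E}^{2})=(1-\exp(-e))^{2}=e^{2}=-\phi_{N}=-\operatorname{ch}(\mathbb{C}_{\operatorname{pt}}),\]
in agreement with \(\mathcal{O}_{E}^{2}=-\mathbb{C}_{\operatorname{pt}}\bmod\ker(\operatorname{ch})\).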
Finally, let us finish this section by quoting the formula for the K-theoretic class of the tangent bundle (see [10], Lemma 15.4):
\[T\operatorname{Bl}(X)=TX-n-1+n\mathcal{O}(-E)+\mathcal{O}(E). \tag{13}\]
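As a quick consistency check of (13), the rank of the right-hand side equals \(n-(n+1)+n+1=n\), and taking first Chern classes gives
\[c_{1}(T\operatorname{Bl}(X))=c_{1}(TX)-n\,e+e=c_{1}(TX)-(n-1)e,\]
which is the standard expression for \(c_{1}\) of the blowup of \(X\) at a point and will be used again in Section 4.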
### Quantum cohomology of the blowup
Let us first compare the effective curve cones \(\operatorname{Eff}(X)\) and \(\operatorname{Eff}(\operatorname{Bl}(X)).\) We have an exact sequence
\[0\longrightarrow H_{2}(\mathbb{P}^{n-1};\mathbb{Z})\xrightarrow{\ j_{*}\ }H_{2}(\operatorname{Bl}(X);\mathbb{Z})\xrightarrow{\ \pi_{*}\ }H_{2}(X;\mathbb{Z})\longrightarrow 0,\]
where \(j:\mathbb{P}^{n-1}\to\operatorname{Bl}(X)\) is the natural closed embedding of the exceptional divisor. The proof of the exactness is similar to the proof of (12). In particular, since the torsion free part of the above sequence splits, we get
\[H_{2}(\operatorname{Bl}(X);\mathbb{Z})_{\operatorname{t.f.}}=H_{2}(X;\mathbb{ Z})_{\operatorname{t.f.}}\oplus\mathbb{Z}\,\ell,\]
where \(\ell\in H_{2}(E;\mathbb{Z})\) is the class of a line in the exceptional divisor. The cone of effective curve classes satisfies \(\operatorname{Eff}(\operatorname{Bl}(X))\subset\operatorname{Eff}(X)\oplus\mathbb{Z}\,\ell\). The Novikov variables of the blowup will be fixed to be the Novikov variables of \(X\) together with an extra variable corresponding to the line bundle \(\mathcal{O}(E)\). In other words, for \(\widetilde{\beta}=\beta+d\ell\in\operatorname{Eff}(\operatorname{Bl}(X))\), put
\[q^{\widetilde{\beta}}=q^{\beta}q_{r+1}^{(c_{1}(O(E)),\widetilde{\beta})}=q_{1}^ {(\phi_{2},\beta)}...q_{r}^{(\phi_{r+1},\beta)}q_{r+1}^{-d}\]
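For instance, for the class of a line in the exceptional divisor, \(\widetilde{\beta}=\ell\), we have \(\beta=0\) and \(d=1\), so
\[q^{\ell}=q_{r+1}^{(c_{1}(\mathcal{O}(E)),\ell)}=q_{r+1}^{-1},\]
a negative power of \(q_{r+1}\); this is the source of the Laurent series phenomenon discussed next.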
Note that \(\mathcal{O}(E)\) is not an ample line bundle: for example, \(\ell\cdot E=-1<0\). Our choice of \(q_{r+1}\) makes the structure constants formal Laurent (not power) series in \(q_{r+1}\). Following Bayer (see [3]) we
write \(q_{r+1}=Q^{n-1}\) for some formal variable \(Q\). Let us recall the basis \(\phi_{i}\) (\(1\leq i\leq N\)) of \(H^{*}(X;\mathbb{C})\). Put \(\phi_{N+k}=e^{k}\) (\(1\leq k\leq n-1\)). Then \(\phi_{i}\) (\(1\leq i\leq\widehat{N}:=N+n-1\)) is a basis of \(H^{*}(\operatorname{Bl}(X);\mathbb{C})\). Let \(t=(t_{1},\ldots,t_{\widehat{N}})\) be the corresponding linear coordinate system on \(H^{*}(\operatorname{Bl}(X);\mathbb{C})\). The structure constants of the quantum cohomology of \(\operatorname{Bl}(X)\) take the form
\[(\phi_{a}\bullet_{t,q}\phi_{b},\phi_{c}):=\langle\phi_{a},\phi_{b},\phi_{c}\rangle_{0,3}(t)=\sum_{m=0}^{\infty}\sum_{\widetilde{\beta}=(\beta,d)}\frac{q^{\beta}Q^{-d(n-1)}}{m!}\langle\phi_{a},\phi_{b},\phi_{c},t,\ldots,t\rangle_{0,3+m,\widetilde{\beta}}.\]
### Twisted GW invariants of \(\mathbb{P}^{n-1}\)
It turns out that genus-0 GW invariants of \(\operatorname{Bl}(X)\) whose degree \(\widetilde{\beta}=d\ell\) with \(d\neq 0\) can be identified with certain twisted GW invariants of \(\mathbb{P}^{n-1}\). Suppose that \((C,z_{1},\ldots,z_{k},f)\) is a stable map representing a point in \(\overline{\mathcal{M}}_{0,k}(\operatorname{Bl}(X),d\ell)\). Let \(\pi:\operatorname{Bl}(X)\to X\) be the blowup map. Since \(\pi_{*}\circ f_{*}[C]=0\) and \(\pi\) induces a biholomorphism between \(\operatorname{Bl}(X)\setminus E\) and \(X\setminus\{\operatorname{pt}\}\), we get that \(f(C)\) is contained in \(E\). Therefore, we have a canonical identification
\[\overline{\mathcal{M}}_{0,k}(\operatorname{Bl}(X),d\ell)=\overline{\mathcal{ M}}_{0,k}(E,d),\]
where \(E\cong\mathbb{P}^{n-1}\) is the exceptional divisor. Let us compare the virtual tangent spaces of the two moduli spaces at \((C,z_{1},\ldots,z_{k},f)\). For the LHS, we have
\[\mathcal{T}_{0,k,d\ell}= H^{1}(C,\mathcal{T}_{C}(-z_{1}-\cdots-z_{k}))-H^{0}(C,\mathcal{T}_{C}(-z _{1}-\cdots-z_{k}))+\] \[H^{0}(C,f^{*}T_{\operatorname{Bl}(X)})-H^{1}(C,f^{*}T_{ \operatorname{Bl}(X)}).\]
while for the RHS we have
\[\mathcal{T}_{0,k,d}= H^{1}(C,\mathcal{T}_{C}(-z_{1}-\cdots-z_{k}))-H^{0}(C, \mathcal{T}_{C}(-z_{1}-\cdots-z_{k}))+\] \[H^{0}(C,f^{*}T_{E})-H^{1}(C,f^{*}T_{E}),\]
where \(\mathcal{T}_{C}\) is the tangent sheaf of \(C\) and \(\mathcal{T}_{C}(-z_{1}-\cdots-z_{k})\) is the sub sheaf of \(\mathcal{T}_{C}\) consisting of sections vanishing at \(z_{1},\ldots,z_{k}\). On the other hand, we have an exact sequence
\[0\longrightarrow TE\longrightarrow T\operatorname{Bl}(X)|_{E}\longrightarrow\mathcal{O}_{E}(-1)\longrightarrow 0,\]
where we used that \(\mathcal{O}_{E}(-1)\) is the normal bundle to the exceptional divisor in \(\operatorname{Bl}(X)\). Pulling back the exact sequence to \(C\) via the stable map and taking the long exact sequence in cohomology we get
\[0\to H^{0}(C,f^{*}TE)\to H^{0}(C,f^{*}T\operatorname{Bl}(X))\to H^{0}(C,f^{*}\mathcal{O}_{E}(-1))\to H^{1}(C,f^{*}TE)\to H^{1}(C,f^{*}T\operatorname{Bl}(X))\to H^{1}(C,f^{*}\mathcal{O}_{E}(-1))\to 0.\]
Note that \(H^{0}(C,f^{*}\mathcal{O}_{E}(-1))=0\) because \(C\) is a rational curve. Indeed, if \(C^{\prime}\) is an irreducible component of \(C\) and \(d^{\prime}=f_{*}[C^{\prime}]\) is its contribution to the degree of \(f\), then \(C^{\prime}\cong\mathbb{P}^{1}\) and \(f^{*}\mathcal{O}_{E}(-1)|_{C^{\prime}}=\mathcal{O}_{\mathbb{P}^{1}}(-d^{\prime})\). Therefore, \(H^{0}(C^{\prime},f^{*}\mathcal{O}_{E}(-1))=0\) and we get that the restrictions of the sections of \(f^{*}\mathcal{O}_{E}(-1)\) to the irreducible components of \(C\) are \(0\), which implies that there are no non-zero global sections. Let us recall the Riemann-Roch formula for nodal curves (easily proved by induction on the number of nodes)
\[\dim H^{0}(C,\mathcal{L})-\dim H^{1}(C,\mathcal{L})=1-g+\int_{[C]}c_{1}( \mathcal{L}),\]
where \(\mathcal{L}\) is a holomorphic line bundle on \(C\) and \(g\) is the genus of \(C\). Applying the Riemann-Roch formula to \(f^{*}\mathcal{O}_{E}(-1)\) we get that
\[\dim H^{1}(C,f^{*}\mathcal{O}_{E}(-1))=-1-\int_{f_{*}[C]}c_{1}(\mathcal{O}_{E}( -1))=d-1.\]
The cohomology group \(H^{1}(C,f^{*}\mathcal{O}_{E}(-1))\) is the fiber of a holomorphic vector bundle \(\mathbb{N}_{0,k,d}\) on \(\overline{\mathcal{M}}_{0,k}(E,d)\) of rank \(d-1\). The virtual tangent bundles are related by \(\mathcal{T}_{0,k,d\ell}=\mathcal{T}_{0,k,d}-\mathbb{N}_{0,k,d}\). Recalling the construction of the virtual fundamental cycle [5], we get
\[[\overline{\mathcal{M}}_{0,k}(\operatorname{Bl}(X),d\ell)]^{\operatorname{ virt}}=[\overline{\mathcal{M}}_{0,k}(E,d)]^{\operatorname{vir}}\cap e( \mathbb{N}_{0,k,d}).\]
The GW invariants of \(\operatorname{Bl}(X)\) in these degrees therefore take the form
\[\langle\alpha_{1}\psi^{m_{1}},\dots,\alpha_{k}\psi^{m_{k}}\rangle_{0,k,d\ell}=\int_{[\overline{\mathcal{M}}_{0,k}(E,d)]^{\operatorname{vir}}}\prod_{i=1}^{k}\operatorname{ev}_{i}^{*}(\alpha_{i}|_{E})\psi_{i}^{m_{i}}\cup e(\mathbb{N}_{0,k,d}).\]
Later on we will need the 3-point GW invariants with \(d=1\). Let us compute them. If \(d=1\), then \(e(\mathbb{N}_{0,k,d})=1\) and the above formula implies that the GW invariants of the blow up coincide with the GW invariants of the exceptional divisor, that is,
\[\langle\alpha_{1}\psi^{m_{1}},\dots,\alpha_{k}\psi^{m_{k}}\rangle^{\operatorname{Bl}(X)}_{0,k,\ell}=\langle\alpha_{1}|_{E}\psi^{m_{1}},\dots,\alpha_{k}|_{E}\psi^{m_{k}}\rangle_{0,k,1}^{E},\]
where we used the superscripts \(\operatorname{Bl}(X)\) and \(E\) in order to specify that the correlators are GW invariants of respectively \(\operatorname{Bl}(X)\) and \(E\). Note that if \(p=c_{1}\mathcal{O}_{E}(1)\) is the hyperplane class, then \(e|_{E}=-p\). The quantum cohomology of \(\mathbb{P}^{n-1}\) is well known to be \(\mathbb{C}[p]/(p^{n}-Q)\). In particular, the 3-point correlators
\[\langle p^{i},p^{j},p^{k}\rangle_{0,3,1}=\begin{cases}1&\text{ if }i+j+k=2n-1 \\ 0&\text{ otherwise,}\end{cases}\quad\forall 0\leq i,j,k\leq n-1.\]
Therefore,
\[\langle e^{i},e^{j},e^{k}\rangle_{0,3,\ell}=\begin{cases}-1&\text{ if }i+j+k=2n-1,\\ 0&\text{ otherwise.}\end{cases} \tag{14}\]
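For example, for \(n=2\) the exceptional divisor is \(E\cong\mathbb{P}^{1}\) and the only instance of (14) is
\[\langle e,e,e\rangle_{0,3,\ell}=-1,\]
corresponding to \(\langle p,p,p\rangle_{0,3,1}=1\), which encodes the relation \(p\bullet p=Q\) in \(\mathbb{C}[p]/(p^{2}-Q)\).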
Let us specialize \(k=1\). Using the divisor equation (recall that \(\int_{\ell}e=-1\)) we get
\[\langle e^{i},e^{j}\rangle_{0,2,\ell}=\begin{cases}1&\text{ if }i=j=n-1,\\ 0&\text{ otherwise.}\end{cases} \tag{15}\]
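In more detail, the divisor equation gives
\[\langle e^{i},e^{j},e\rangle_{0,3,\ell}=\Big(\int_{\ell}e\Big)\langle e^{i},e^{j}\rangle_{0,2,\ell}=-\langle e^{i},e^{j}\rangle_{0,2,\ell},\]
and by (14) the left-hand side equals \(-1\) precisely when \(i+j+1=2n-1\), that is, when \(i=j=n-1\).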
### The vanishing theorem of Gathmann
Gathmann discovered a very interesting vanishing criterion for the GW invariants of the blowup (see [11]). We need a slight generalization of his result which can be stated as follows. Following Gathmann, we assign a weight to each basis vector
\[\operatorname{wt}(\phi_{a})=\begin{cases}0&\text{ if }1\leq a\leq N,\\ a-N-1&\text{ if }N<a\leq N+n-1.\end{cases}\]
In other words, the exceptional class \(e^{k}\) has weight \(k-1\) for all \(1\leq k\leq n-1\) and in all other cases the weight is \(0\).
**Proposition 2**.: _Suppose that we have a GW invariant_
\[\langle\phi_{a}\psi^{k},\phi_{b_{1}},\dots,\phi_{b_{m}},e^{l_{1}},\dots,e^{l_{s}}\rangle_{0,1+m+s,\beta+d\ell}, \tag{16}\]
_where \(1\leq a\leq\widehat{N}\), \(1\leq b_{1},\dots,b_{m}\leq N\), and \(2\leq l_{1},\dots,l_{s}\leq n-1\), satisfying the following 3 conditions:_
1. \(\beta\neq 0\).
2. \(\operatorname{wt}(\phi_{a})+\sum_{i=1}^{s}(l_{i}-1)>0\) _or_ \(d>0\)_._
3. \(\operatorname{wt}(\phi_{a})+\sum_{i=1}^{s}(l_{i}-1)<(d+1)(n-1)-k\)_._
_Then the GW invariant (16) must be 0._
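For instance, Proposition 2 applies to a correlator of the form \(\langle e^{2},\phi_{b}\rangle_{0,2,\beta}\) with \(\beta\neq 0\), \(d=0\), \(k=0\), and \(1\leq b\leq N\): condition (ii) holds because \(\operatorname{wt}(e^{2})=1>0\), and condition (iii) reads \(1<n-1\), so
\[\langle e^{2},\phi_{b}\rangle_{0,2,\beta}=0\qquad\text{whenever }n\geq 3.\]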
Proof.: The proof is done by induction on \(k\). Gathmann's result is the case when \(k=0\). The inductive step uses the genus-0 Topological Recursion Relations (see Section 2.4). Suppose that the proposition is proved for \(k\) and let us prove it for \(k+1\). Using the TRRs we write the correlator (16) with \(k\) replaced by \(k+1\) in the following form:
\[\sum_{c=1}^{N+n-1}\ \sum\,\langle\phi_{a}\psi^{k},\phi_{c},\phi_{B^{\prime}},e^{L ^{\prime}}\rangle_{\beta^{\prime}+d^{\prime}\ell}\,\langle\phi^{c},\phi_{B^{ \prime\prime}},e^{L^{\prime\prime}}\rangle_{\beta^{\prime\prime}+d^{\prime \prime}\ell},\]
where the second sum is over all possible splittings \(B^{\prime}\sqcup B^{\prime\prime}=\{b_{1},\ldots,b_{m}\}\), \(L^{\prime}\sqcup L^{\prime\prime}=\{l_{1},\ldots,l_{s}\}\), \(\beta^{\prime}+\beta^{\prime\prime}=\beta\) and \(d^{\prime}+d^{\prime\prime}=d\). The notation is as follows. We dropped the genus and the number of marked points from the correlator notation because the genus is always 0 and the number of marked points is the same as the number of insertions. \(\phi_{B^{\prime}}\) denotes the insertion of all \(\phi_{b^{\prime}}\) with \(b^{\prime}\in B^{\prime}\) while \(e^{L^{\prime}}\) the insertions of \(e^{l^{\prime}}\) with \(l^{\prime}\in L^{\prime}\). Similar conventions apply for \(\phi_{B^{\prime\prime}}\) and \(e^{L^{\prime\prime}}\) in the second correlator. The first correlator has \(2+m^{\prime}+s^{\prime}\) insertions while the second one \(1+m^{\prime\prime}+s^{\prime\prime}\), where \(m^{\prime}\), \(m^{\prime\prime}\), \(s^{\prime}\), and \(s^{\prime\prime}\) are respectively the number of elements of respectively \(B^{\prime}\), \(B^{\prime\prime}\), \(L^{\prime}\), and \(L^{\prime\prime}\). We have to prove that if the 3 conditions in the proposition are satisfied where \(k\) should be replaced by \(k+1\), then the above sum is 0. We will refer to the correlator involving \(B^{\prime}\) and \(L^{\prime}\) as the first correlator and to the correlator involving \(B^{\prime\prime}\) and \(L^{\prime\prime}\) as the second correlator. We will prove that for each term in the above sum either the first or the second correlator vanishes. The proof will be divided into 4 cases.
_Case 1:_ if \(\beta^{\prime}=0\) and the second correlator does not satisfy condition (ii), that is, \(\operatorname{wt}(\phi^{c})+\sum_{l^{\prime\prime}\in L^{\prime\prime}}(l^{ \prime\prime}-1)\leq 0\) and \(d^{\prime\prime}\leq 0\). Note that since \(\beta^{\prime\prime}=\beta\neq 0\), the second correlator satisfies condition (i). Since \(\beta^{\prime}=0\) we need to consider only \(c\), such that, \(\phi_{c}|_{E}\neq 0\Rightarrow\phi_{c}\in\{1,e,\ldots,e^{n-1}\}\). Moreover, the weight of \(\phi^{c}\) is 0 so \(\phi_{c}\in\{1,e^{n-1}\}\) and \(\phi^{c}\in\{\phi_{N},e\}\). Since \(l^{\prime\prime}\geq 2\) for all \(l^{\prime\prime}\in L^{\prime\prime}\) the set \(L^{\prime\prime}\) must be empty. The corresponding term in the sum in this case takes the form
\[\langle\phi_{a}\psi^{k},\phi_{c},1,\ldots,1,e^{l_{1}},\ldots,e^{l_{s}}\rangle_ {d^{\prime}\ell}\,\langle\phi^{c},\phi_{B^{\prime\prime}}\rangle_{\beta+d^{ \prime\prime}\ell},\]
where the insertions from \(\phi_{B^{\prime}}\) all must be 1 otherwise \(\phi_{b}|_{E}=0\) and the correlator vanishes. Using the dimension formula we get
\[\deg(\phi_{a})+k+\deg(\phi_{c})+\sum_{i=1}^{s}l_{i}=(d^{\prime}+1)(n-1)+s+m^{ \prime}.\]
Note that \(\phi_{a}\) must satisfy \(\phi_{a}|_{E}\neq 0\), otherwise the correlator is 0. Therefore, \(\phi_{a}\in\{1,e,\ldots,e^{n-1}\}\) which implies that \(\deg(\phi_{a})\leq\operatorname{wt}(\phi_{a})+1\) with inequality only if \(\phi_{a}=1\). We get
\[(d^{\prime}+1)(n-1)+m^{\prime}=\deg(\phi_{a})+k+\deg(\phi_{c})+\sum_{i=1}^{s}( l_{i}-1)\leq\operatorname{wt}(\phi_{a})+1+k+\sum_{i=1}^{s}(l_{i}-1)+\deg( \phi_{c}).\]
On the other hand, let us recall that the correlator (16) (with \(k+1\) instead of \(k\)) satisfies condition (iii), that is,
\[\operatorname{wt}(\phi_{a})+\sum_{i=1}^{s}(l_{i}-1)<(d+1)(n-1)-k-1.\]
We get \((d^{\prime}+1)(n-1)+m^{\prime}<\deg(\phi_{c})+(d+1)(n-1)\). Recall that there are two possibilities for \(\phi_{c}\): \(\phi_{c}=1\) or \(\phi_{c}=e^{n-1}\). In the first case we get \(0\leq m^{\prime}<d^{\prime\prime}(n-1)\Rightarrow d^{\prime\prime}>0\) contradicting our assumption that \(d^{\prime\prime}\leq 0\). In the second case we get \(0\leq m^{\prime}<(d^{\prime\prime}+1)(n-1)\). This implies that \(d^{\prime\prime}>-1\) which
together with \(d^{\prime\prime}\leq 0\) implies that \(d^{\prime\prime}=0\). However, since \(\phi^{c}=e\), we get that the second correlator vanishes by the divisor equation. This completes the proof of our claim in Case 1.
_Case 2:_\(\beta^{\prime}=0\) and the second correlator satisfies condition (ii). Since \(\beta^{\prime\prime}=\beta\neq 0\), the second correlator satisfies condition (i) too, so it will vanish unless condition (iii) fails, that is,
\[\operatorname{wt}(\phi^{c})+\sum_{l^{\prime\prime}\in L^{\prime\prime}}(l^{ \prime\prime}-1)\geq(d^{\prime\prime}+1)(n-1).\]
On the other hand, similarly to Case 1, we must have \(\phi_{b^{\prime}}=1\) for all \(b^{\prime}\in B^{\prime}\), so the dimension formula applied to the first correlator yields
\[\deg(\phi_{a})+k+\deg(\phi_{c})+\sum_{l^{\prime}\in L^{\prime}}(l^{\prime}-1) =(d^{\prime}+1)(n-1)+m^{\prime}.\]
Adding up the above inequality and identity we get
\[\deg(\phi_{a})+k+\deg(\phi_{c})+\operatorname{wt}(\phi^{c})+\sum_{i=1}^{s}(l _{i}-1)\geq(d+1)(n-1)+n-1+m^{\prime}\]
Again \(\deg(\phi_{a})\leq\operatorname{wt}(\phi_{a})+1\) so
\[m^{\prime}+n-1-\deg(\phi_{c})-\operatorname{wt}(\phi^{c})\leq\operatorname{ wt}(\phi_{a})+1+k+\sum_{i=1}^{s}(l_{i}-1)-(d+1)(n-1).\]
Recalling again condition (iii) we get that the RHS of the above inequality is \(<0\Rightarrow m^{\prime}+n-1<\deg(\phi_{c})+\operatorname{wt}(\phi^{c}).\) Similarly to Case 1, we may assume that \(\phi_{c}|_{E}\neq 0\), that is, \(\phi_{c}\in\{1,e,\ldots,e^{n-1}\}\) which implies that \(\deg(\phi_{c})+\operatorname{wt}(\phi^{c})\leq n-1\). This is a contradiction with \(m^{\prime}+n-1<\deg(\phi_{c})+\operatorname{wt}(\phi^{c})\).
_Case 3:_ if \(\beta^{\prime}\neq 0\) and the first correlator does not satisfy condition (ii), that is, \(\operatorname{wt}(\phi_{a})+\operatorname{wt}(\phi_{c})+\sum_{l^{\prime}\in L ^{\prime}}(l^{\prime}-1)=0\) and \(d^{\prime}\leq 0\). Note that we must have \(L^{\prime}=\varnothing\) and either \(d^{\prime\prime}>0\) or \(\sum_{i=1}^{s}(l_{i}-1)>0\). Therefore, the second correlator satisfies condition (ii).
Suppose that \(\beta^{\prime\prime}=0\) (\(\Leftrightarrow\) condition (i) fails). We must have \(\phi_{b^{\prime\prime}}|_{E}\neq 0\) for all \(b^{\prime\prime}\in B^{\prime\prime}\Rightarrow\phi_{b^{\prime\prime}}=1\) for all \(b^{\prime\prime}\in B^{\prime\prime}\). Recalling the dimension formula for the second correlator we get
\[\deg(\phi^{c})+\sum_{i=1}^{s}(l_{i}-1)=(d^{\prime\prime}+1)(n-1)+m^{\prime \prime}-1.\]
On the other hand, since \(\operatorname{wt}(\phi_{a})=0\) for the case under consideration, condition (iii) implies that \(\sum_{i=1}^{s}(l_{i}-1)<(d+1)(n-1)-k-1\). Combining this estimate with the above equality we get \(m^{\prime\prime}+k<\deg(\phi^{c})+d^{\prime}(n-1).\) If \(m^{\prime\prime}>0\), then the second correlator has at least one insertion by \(1\) (\(\because B^{\prime\prime}\neq\varnothing\)). Since the second correlator does not have descendants it will vanish unless \(d^{\prime\prime}=0\). However, if \(\beta^{\prime\prime}=d^{\prime\prime}=0\) the second correlator is non-zero only if the number of insertion is \(3\) because the moduli space is \(\overline{\mathcal{M}}_{0,1+m^{\prime\prime}+s}\times\operatorname{Bl}(X)\), that is, \(m^{\prime\prime}=s=1\). Moreover \(\phi^{c}\cup e^{l_{1}}\) up to a constant must be \(\phi_{N}\Rightarrow\phi^{c}=e^{n-l_{1}}\) and \(\phi_{c}=e^{l_{1}}\). However, \(l_{1}\geq 2\) by definition, so \(\operatorname{wt}(\phi_{c})=l_{1}-1>0\) - contradicting the assumption that the first correlator does not satisfy condition (ii). We get \(m^{\prime\prime}=0\) and the estimate that we did above yields \(k<\deg(\phi^{c})+d^{\prime}(n-1).\) Note that \(\deg(\phi^{c})\leq n-1\Rightarrow d^{\prime}>-1\). Recall that we are assuming that \(d^{\prime}\leq 0\), so \(d^{\prime}=0\). Recalling the divisor equation we get \(\phi_{c}\neq e\). Since \(\beta^{\prime\prime}=0\) the restriction \(\phi^{c}|_{E}\neq 0\Rightarrow\phi^{c}\in\{1,e,\ldots,e^{n-1}\}\). Moreover, \(\phi^{c}\neq 1\) thanks to the string equation. We get \(\phi_{c}=e^{l}\) for some \(l\geq 2\) contradicting the assumption that \(\operatorname{wt}(\phi_{c})=0\).
Suppose now that \(\beta^{\prime\prime}\neq 0\). Then the second correlator satisfies both conditions (i) and (ii) \(\Rightarrow\) condition (iii) must fail, that is,
\[\operatorname{wt}(\phi^{c})+\sum_{i=1}^{s}(l_{i}-1)\geq(d^{\prime\prime}+1)(n- 1). \tag{17}\]
Using that \(\operatorname{wt}(\phi^{c})\leq n-2\) and \(\sum_{i}(l_{i}-1)<(d+1)(n-1)-k-1\) we get
\[(d^{\prime\prime}+1)(n-1)<n-2+(d+1)(n-1)-k-1\]
which implies that \(k+1<d^{\prime}(n-1)+n-2.\) In particular, \(d^{\prime}>-1\) and since by assumption \(d^{\prime}\leq 0\) we get \(d^{\prime}=0\). If \(\phi_{c}=e\), then using the divisor equation we get
\[\langle\phi_{a}\psi^{k},\phi_{c},\phi_{B^{\prime}}\rangle_{\beta^{\prime}}= \langle e\cup\phi_{a}\psi^{k-1},\phi_{B^{\prime}}\rangle_{\beta^{\prime}}.\]
Since \(\operatorname{wt}(\phi_{a})=0\) the cup product \(e\cup\phi_{a}\neq 0\) only if \(\phi_{a}=e\). This however implies that \(e\cup\phi_{a}=e^{2}\) has positive weight and hence the correlator on the RHS of the above identity satisfies both conditions (i) and (ii). Condition (iii) must fail, so \(1\geq n-1-(k-1)=n-k\), that is, \(k\geq n-1\). On the other hand, recall that we already have the estimate \(k+1<d^{\prime}(n-1)+n-2=n-2\) which contradicts the inequality in the previous sentence. We get \(\phi_{c}\neq e\) which together with \(\operatorname{wt}(\phi_{c})=0\) implies that \(\phi^{c}\notin\{e^{2},\ldots,e^{n-1}\}\Rightarrow\operatorname{wt}(\phi^{c})=0\). Recalling (17) we get
\[(d^{\prime\prime}+1)(n-1)\leq\sum_{i=1}^{s}(l_{i}-1)<(d+1)(n-1)-k-1.\]
Since \(d^{\prime}=0\), we get \(0<-k-1\) which is clearly a contradiction. This completes the proof of the vanishing claim in Case 3.
_Case 4:_ if \(\beta^{\prime}\neq 0\) and the first correlator satisfies condition (ii). Then condition (iii) for the first correlator must fail, that is,
\[\operatorname{wt}(\phi_{a})+\operatorname{wt}(\phi_{c})+\sum_{l^{\prime}\in L ^{\prime}}(l^{\prime}-1)\geq(d^{\prime}+1)(n-1)-k. \tag{18}\]
We claim that the second correlator also satisfies conditions (i) and (ii). Indeed, suppose that (i) is not satisfied, that is, \(\beta^{\prime\prime}=0\). All insertions in \(\phi_{B^{\prime\prime}}\) must be \(1\). Recalling the dimension formula we get
\[\deg(\phi^{c})+\sum_{l^{\prime\prime}\in L^{\prime\prime}}(l^{\prime\prime}-1 )=(d^{\prime\prime}+1)(n-1)-1+m^{\prime\prime}.\]
Adding up the above identity and the inequality (18) we get
\[\operatorname{wt}(\phi_{a})+\deg(\phi^{c})+\operatorname{wt}(\phi_{c})+\sum_ {i=1}^{s}(l_{i}-1)\geq(d+2)(n-1)-k-1+m^{\prime\prime}=(d+1)(n-1)-k-1+n-1+m^{ \prime\prime}\]
which is equivalent to
\[n-1+m^{\prime\prime}-\deg(\phi^{c})-\operatorname{wt}(\phi_{c})\leq \operatorname{wt}(\phi_{a})+\sum_{i=1}^{s}(l_{i}-1)+k+1-(d+1)(n-1)<0,\]
where for the last inequality we used that the correlator whose vanishing we want to prove satisfies condition (iii). We get \(m^{\prime\prime}+n\leq\deg(\phi^{c})+\operatorname{wt}(\phi_{c})\). On the other hand, since \(\phi^{c}|_{E}\neq 0\), we have \(\phi^{c}\in\{1,e,\ldots,e^{n-1}\}\) which implies that \(\deg(\phi^{c})+\operatorname{wt}(\phi_{c})\leq n-1\) - contradiction. This proves that \(\beta^{\prime\prime}\neq 0\).
Suppose that the second correlator does not satisfy condition (ii). Then, \(d^{\prime\prime}\leq 0\), \(\operatorname{wt}(\phi^{c})=0\), and \(L^{\prime\prime}=\emptyset\). Since \(L^{\prime\prime}=\emptyset\), the inequality (18) takes the form
\[\operatorname{wt}(\phi_{a})+\operatorname{wt}(\phi_{c})+\sum_{i=1}^{s}(l_{i}- 1)\geq(d^{\prime}+1)(n-1)-k.\]
On the other hand, recalling again condition (iii) for the correlator whose vanishing we wish to prove we get \(\operatorname{wt}(\phi_{a})+\sum_{i=1}^{s}(l_{i}-1)<(d+1)(n-1)-k-1\). Combining with the above estimate we get
\[(d^{\prime}+1)(n-1)-k<\operatorname{wt}(\phi_{c})+(d+1)(n-1)-k-1\]
which becomes \((-d^{\prime\prime})(n-1)<\operatorname{wt}(\phi_{c})-1\). Since \(\operatorname{wt}(\phi^{c})=0\) and \(d^{\prime\prime}\leq 0\) the above inequality is possible only if \(\phi^{c}=e\). Then we get \(d^{\prime\prime}\neq 0\) thanks to the divisor equation, that is, \(d^{\prime\prime}\leq-1\Rightarrow\operatorname{wt}(\phi_{c})-1>n-1\). This is a contradiction because the maximal possible value of \(\operatorname{wt}(\phi_{c})\) is \(n-2\). This completes the proof of our claim that the second correlator satisfies conditions (i) and (ii).
Finally, in order for the second correlator to be non-zero, condition (iii) must fail. We get
\[\operatorname{wt}(\phi^{c})+\sum_{l^{\prime\prime}\in L^{\prime\prime}}(l^{ \prime\prime}-1)\geq(d^{\prime\prime}+1)(n-1).\]
Adding up the above inequality and (18) we get
\[\operatorname{wt}(\phi_{a})+\operatorname{wt}(\phi_{c})+\operatorname{wt}( \phi^{c})+\sum_{i=1}^{s}(l_{i}-1)\geq(d+2)(n-1)-k=(d+1)(n-1)-k-1+n.\]
On the other hand, recalling the inequality \(\operatorname{wt}(\phi_{a})+\sum_{i=1}^{s}(l_{i}-1)<(d+1)(n-1)-k-1\), we get
\[n-\operatorname{wt}(\phi_{c})-\operatorname{wt}(\phi^{c})<0\]
The inequality clearly does not hold if one of the weights is \(0\). If both weights are non-zero, then we will have \(\operatorname{wt}(\phi_{c})+\operatorname{wt}(\phi^{c})=n-2\) which again contradicts the above inequality. The conclusion is that either the first or the second correlator satisfies condition (iii) and hence one of the two correlators must vanish according to the inductive assumption.
## 4. Second structure connection and blowups
Let us recall the notation already fixed in Sections 3.1 and 3.3. From now on, for a complex variety \(Y\), we denote by \(H(Y):=H^{*}(Y,\mathbb{C})\) and \(\widetilde{H}(Y)\) respectively the cohomology and the reduced cohomology of \(Y\) with complex coefficients. Using the direct sum decomposition \(H(\operatorname{Bl}(X))=H(X)\oplus\widetilde{H}(E)\) we define the \(H(X)\)-component (resp. \(\widetilde{H}(E)\)-component) of a vector \(v\in H(\operatorname{Bl}(X))\) to be the projection of \(v\) on \(H(X)\) (resp. \(\widetilde{H}(E)\)).
We will view quantum cohomology of \(\operatorname{Bl}(X)\) as a family of Frobenius manifolds parametrized by the Novikov variables \(q=(q_{1},\dots,q_{r+1})\in(\mathbb{C}^{*})^{r+1}\) defined in Section 3.3. Recall that \(q_{r+1}=Q^{n-1}\). We will be interested in the Laurent series expansion of the second structure connection of \(\operatorname{Bl}(X)\) with respect to \(Q\) at \(Q=0\), while the remaining parameters \(q_{1},\dots,q_{r}\) remain fixed. The main goal in this section is to determine the leading order terms of this expansion.
### Period vectors for the blowup
Let us denote by \(\widetilde{\rho}\) and \(\rho\) the operators of classical cup product multiplications by respectively \(c_{1}(T\operatorname{Bl}(X))\) and \(c_{1}(TX)\). Let \(\widetilde{\theta}\) and \(\theta\) be the grading operators of the Frobenius structures underlying the quantum cohomologies of respectively \(\operatorname{Bl}(X)\) and \(X\) (see (11)).
**Lemma 2**.: _a) The following formula holds:_
\[\widetilde{I}^{(-m)}(\lambda):=e^{\widetilde{\rho}\,\partial_{\lambda}\partial_{m}}\left(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde{\theta}+m+1/2)}\right)=\left(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde{\theta}+m+1/2)}\right)e^{\widetilde{\rho}\,\overleftarrow{\partial}_{m}},\]
_where the first identity is just a definition and \(\overleftarrow{\partial}_{m}\) denotes the right action by a derivation with respect to \(m\)._
_b) The following identity holds:_
\[\widetilde{I}^{(-m)}(Q^{-1}\lambda)=\left(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde{\theta}+m+1/2)}\right)e^{Q\widetilde{\rho}\,\overleftarrow{\partial}_{m}}\,Q^{-(\widetilde{\theta}+m-1/2)}\,Q^{-\widetilde{\rho}}.\]
Proof.: a) By definition
\[e^{\widetilde{\rho}\,\partial_{\lambda}\partial_{m}}\left(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde{\theta}+m+1/2)}\right)=\sum_{k=0}^{\infty}\frac{1}{k!}\,\widetilde{\rho}^{k}\,\partial_{m}^{k}\left(\frac{\lambda^{\widetilde{\theta}+m-k-1/2}}{\Gamma(\widetilde{\theta}+m-k+1/2)}\right)=\sum_{k=0}^{\infty}\frac{1}{k!}\,\partial_{m}^{k}\left(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde{\theta}+m+1/2)}\right)\widetilde{\rho}^{k}\,,\]
where we used that \(\widetilde{\rho}\,\widetilde{\theta}=(\widetilde{\theta}+1)\widetilde{\rho}\). The above expression is by definition the right action of \(e^{\widetilde{\rho}\,\overleftarrow{\partial}_{m}}\) on \(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde{\theta}+m+1/2)}\).
b) Using the formula from part a), we get that the identity that we have to prove is equivalent to the following conjugation formulas:
\[Q^{\widetilde{\theta}+m}\,Q^{-Q\widetilde{\rho}}\,Q^{-(\widetilde{\theta}+m)}=Q^{-\widetilde{\rho}}\]
and
\[Q^{-(\widetilde{\theta}+m)}\,e^{\widetilde{\rho}\,\overleftarrow{\partial}_{m}}\,Q^{\widetilde{\theta}+m}=Q^{-Q\widetilde{\rho}}\,e^{Q\widetilde{\rho}\,\overleftarrow{\partial}_{m}}. \tag{19}\]
The first identity follows easily from \([\widetilde{\theta},\widetilde{\rho}]=-\widetilde{\rho}\): conjugation by \(Q^{\widetilde{\theta}+m}\) rescales \(\widetilde{\rho}\) to \(Q^{-1}\widetilde{\rho}\), hence it sends \(Q\widetilde{\rho}\) to \(\widetilde{\rho}\). Let us prove (19). To begin with, note that this is an identity between operators acting from the right. We will use the following fact. Suppose that we have an associative algebra \(\mathcal{A}\) acting on a vector space \(V\) from the right, that is, \(v\cdot(AB)=(v\cdot A)\cdot B\) for all \(A,B\in\mathcal{A}\) and \(v\in V\). Then the following formula holds:
\[v\cdot(e^{A}Be^{-A})=v\cdot(e^{\mathrm{ad}_{A}}(B)), \tag{20}\]
where \(\mathrm{ad}_{A}(X)=AX-XA\). In our case, \(\mathcal{A}\) is the algebra of differential operators in \(m\) with coefficients in \(\mathrm{End}(H^{*}(\mathrm{Bl}(X)))\), that is, as a vector space \(\mathcal{A}\) consists of elements of the form
\[\sum_{k=0}^{k_{0}}c_{k}(m)\,\overleftarrow{\partial}_{m}^{\,k},\quad c_{k}(m)\in\mathrm{End}(H^{*}(\mathrm{Bl}(X)))\]
and the product in \(\mathcal{A}\) is determined by the natural composition of endomorphisms of \(H^{*}(\mathrm{Bl}(X))\) and the commutation relation \([m,\overleftarrow{\partial\,_{m}}]=1\). Let us apply the conjugation formula (20) to (19). The main difficulty is to prove that
\[\mathrm{ad}_{\widetilde{\theta}+m}^{k}(\widetilde{\rho}\,\overleftarrow{\partial}_{m})=(-1)^{k}\widetilde{\rho}\,\overleftarrow{\partial}_{m}+(-1)^{k-1}k\widetilde{\rho} \tag{21}\]
for all \(k\geq 0\). We argue by induction on \(k\). For \(k=0\), the identity is true. Suppose that it is true for \(k\). Then we get
\[[\widetilde{\theta}+m,(-1)^{k}\widetilde{\rho}\,\overleftarrow{\partial}_{m}+(-1)^{k-1}k\widetilde{\rho}]=(-1)^{k}(-\widetilde{\rho}\,\overleftarrow{\partial}_{m}+\widetilde{\rho})+(-1)^{k}k\widetilde{\rho}=(-1)^{k+1}\widetilde{\rho}\,\overleftarrow{\partial}_{m}+(-1)^{k}(k+1)\widetilde{\rho},\]
where we used that \([\widetilde{\theta},\widetilde{\rho}]=-\widetilde{\rho}\) and \([m,\overleftarrow{\partial}_{m}]=1\). Using the conjugation formula (20) and formula (21), we get that the LHS of (19) is equal to
\[\exp\left(\sum_{k=0}^{\infty}\frac{1}{k!}(-\log Q)^{k}\Big((-1)^{k}\widetilde{\rho}\,\overleftarrow{\partial}_{m}+(-1)^{k-1}k\widetilde{\rho}\Big)\right)=e^{Q\widetilde{\rho}\,\overleftarrow{\partial}_{m}}\,e^{-(\log Q)\,Q\widetilde{\rho}},\]
which is the same as the RHS of (19).
Put \(q=(q_{1},\ldots,q_{r})\) and let \(S(t,q,Q,z)\) be the calibration of the blowup \(\mathrm{Bl}(X)\). Let us recall the fundamental solution of the 2nd structure connection for the quantum cohomology of \(\mathrm{Bl}(X)\)
\[I^{(-m)}(t,q,Q,\lambda)=\sum_{k=0}^{\infty}(-1)^{k}S_{k}(t,q,Q)\partial_{\lambda}^{k}\widetilde{I}^{(-m)}(\lambda).\]
Recalling Lemma 2 we get
\[I^{(-m)}(t,q,Q,Q^{-1}\lambda)Q^{\widetilde{\rho}}Q^{\widetilde{\theta}+m-1/2}=\sum_{i,l=0}^{\infty}(-1)^{l}Q^{l}S_{l}(t,q,Q)\frac{\partial_{m}^{i}}{i!}\left(\frac{\lambda^{\widetilde{\theta}+m-l-1/2}}{\Gamma(\widetilde{\theta}+m-l+1/2)}\left(Q\,\widetilde{\rho}\right)^{i}\right). \tag{22}\]
Let us extend the Hodge grading operator \(\theta\) of \(X\) to \(H^{*}(\operatorname{Bl}(X))\) in such a way that \(\theta(e^{k})=\frac{n}{2}\,e^{k}\) for all \(1\leq k\leq n-1\). Let \(\Delta:=\widetilde{\theta}-\theta\), then in the basis
\[\phi_{i}\quad(1\leq i\leq N),\quad e^{k}\quad(1\leq k\leq n-1) \tag{23}\]
of \(H^{*}(\operatorname{Bl}(X))\), the operator \(\Delta\) takes the form
\[\Delta(\phi_{i}) =0\quad(1\leq i\leq N),\] \[\Delta(e^{k}) =-k\,e^{k}\quad(1\leq k\leq n-1).\]
Let us point out that the basis of \(H^{*}(\operatorname{Bl}(X))\) dual to the basis (23) with respect to the Poincare pairing is given by \(\phi^{i}\)\((1\leq i\leq N)\), \(e_{k}\)\((1\leq k\leq n-1)\), where \(\phi^{i}\)\((1\leq i\leq N)\) is a basis of \(H^{*}(X)\) dual to \(\phi_{i}\)\((1\leq i\leq N)\) with respect to the Poincare pairing (on \(X\)) and \(e_{k}:=(-1)^{n-1}e^{n-k}\).
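Indeed, using Lemma 1 one checks directly that
\[(e^{k},e_{k})^{\operatorname{Bl}(X)}=(-1)^{n-1}(e^{k},e^{n-k})^{\operatorname{Bl}(X)}=(-1)^{n-1}(-1)^{n-1}=1,\qquad(\phi_{i},e_{k})^{\operatorname{Bl}(X)}=0,\]
so the two families of vectors are indeed dual to each other.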
**Lemma 3**.: _Suppose that \(t=\sum_{b=2}^{N}t_{b}\phi_{b}\in H^{*}(X)\) and that \(l\geq 1\). Then_
\[Q^{\Delta}Q^{k+l}(S_{l}(t,q,Q)e^{k})= \sum_{k^{\prime\prime}=1}^{n-1}\sum_{d=0}^{\infty}\langle\psi^{l -1}e^{k},e^{k^{\prime\prime}}\rangle_{0,2,d\ell}e_{k^{\prime\prime}}+\langle \psi^{l-1}e^{k},1\rangle_{0,2,d\ell}\,Q^{n}\,(-1)^{n-1}\phi_{N}+\] \[+O(Q\widetilde{H}(E)+Q^{n+1}H(X)),\]
_where the \(O\)-term denotes a power series in \(Q\) with values in \(H(\operatorname{Bl}(X))\) whose \(\widetilde{H}(E)\)-component involves only positive powers of \(Q\) and whose \(H(X)\)-component involves only powers of \(Q\) of degree \(\geq n+1\)._
Proof.: Recall that every \(\widetilde{\beta}\in\operatorname{Eff}(\operatorname{Bl}(X))\) has the form \(\widetilde{\beta}=\beta+d\ell\) for some \(\beta\in\operatorname{Eff}(X)\) and \(d\in\mathbb{Z}\). Recalling the definition of the calibration we get
\[Q^{\Delta}Q^{k+l}(S_{l}(t,q,Q)e^{k})=\] \[Q^{\Delta}Q^{k+l}\sum_{\widetilde{\beta}\in\operatorname{Eff}( \operatorname{Bl}(X))}\left(\sum_{b=1}^{N}(e^{k}\psi^{l-1},\phi_{b})_{0,2, \beta+d\ell}(t)\phi^{b}+\sum_{k^{\prime\prime}=1}^{n-1}(e^{k}\psi^{l-1},e^{k^ {\prime\prime}})_{0,2,\beta+d\ell}(t)e_{k^{\prime\prime}}\right)q^{\beta}Q^{-d (n-1)}.\]
Let us examine first the correlator
\[\langle e^{k}\psi^{l-1},\phi_{b}\rangle_{0,2,\beta+d\ell}(t)=\sum_{r=0}^{ \infty}\sum_{b_{1},\ldots,b_{r}=2}^{N}(e^{k}\psi^{l-1},\phi_{b},\phi_{b_{1}}, \ldots,\phi_{b_{r}})_{0,2+r,\beta+d\ell}\,t_{b_{1}}\cdots t_{b_{r}}.\]
There are \(3\) cases.
_Case 1:_ if \(\beta=0\). Then the correlator is a twisted GW invariant of \(E\) and since \(\phi_{b}|_{E}=0\) for \(2\leq b\leq N\), we may assume that \(b=1\), that is, \(\phi_{b}=\phi_{1}=1\). For similar reasons we may assume that \(r=0\). The dimension of the virtual fundamental cycle of \(\overline{\mathcal{M}}_{0,2}(\operatorname{Bl}(X),d\ell)\) is \((d+1)(n-1)\). Therefore, \(k+l-1=(d+1)(n-1)\Rightarrow k+l-d(n-1)=n\), that is, in this case the correlator coincides with the term of order \(Q^{n}\) on the RHS of the formula that we want to prove.
_Case 2:_ if \(\beta\neq 0\) and the correlator does not satisfy condition (ii) in Gathmann's vanishing theorem. Then \(d\leq 0\) and \(k-1\leq 0\), that is, \(k=1\). If \(d\leq-1\), then \(k+l-d(n-1)\geq k+l+n-1\geq n+1\), so the correlator contributes to the terms of order \(O(Q^{n+1}H(X))\). It remains to consider the case when \(k=1\) and \(d=0\). We will prove that if the correlator \(\langle e\psi^{l-1},\phi_{b}\rangle_{0,2,\beta}(t)\) is non-zero, then \(l\geq n\).
First, by using the divisor equation we may reduce to the cases when \(2\leq b\leq N\). Indeed, if \(b=1\) then by using the string equation \(\langle e\psi^{l-1},1\rangle_{0,2,\beta}(t)=\langle e\psi^{l-2}\rangle_{0,1,\beta }(t)\). Since \(\beta\neq 0\), there exists a divisor class \(p\in H^{2}(X)\), such that, \(\int_{\beta}p\neq 0\). Recalling the divisor equation we get \(\langle e\psi^{l-2},p\rangle_{0,2,\beta}=\int_{\beta}p(e\psi^{l-2})_{0,1,\beta}\), that is, the correlator for \(b=1\) can be expressed in terms of correlators involving only \(2\leq b\leq N\).
Suppose now that \(2\leq b\leq N\). Recalling the divisor equation we get
\[\langle e\psi^{l-1},\phi_{b}\rangle_{0,2,\beta}(t)=\langle e,\psi^{l},\phi_{b }\rangle_{0,3,\beta}(t)\]
which according to the Topological Recursion Relations (TRR) can be written as
\[\sum_{\beta^{\prime}+\beta^{\prime\prime}=\beta}\sum_{d\in\mathbb{Z}} \big{(}\sum_{j=1}^{n-1}\langle\psi^{l-1},e^{j}\rangle_{0,2,\beta^ {\prime}+d\ell}(t)(e_{j},e,\phi_{b})_{0,3,\beta^{\prime\prime}-d\ell}(t)+\] \[\sum_{a=1}^{N}\langle\psi^{l-1},\phi_{a}\rangle_{0,2,\beta^{ \prime}+d\ell}(t)(\phi^{a},e,\phi_{b})_{0,3,\beta^{\prime\prime}-d\ell}(t) \big{)}\]
Let us consider the two correlators that involve \(\beta^{\prime}\). If \(\beta^{\prime}=0\), then in both correlators \(d>0\) and in the 2nd correlator \(\phi_{a}=1\). Since \(t|_{E}=0\) we may assume that \(t=0\). By the dimension formula we get \(l-1+j=(d+1)(n-1)\) for the 1st correlator and \(l-1=(d+1)(n-1)\) for the 2nd correlator. In both cases, since \(d\geq 1\) and \(n\geq 2\), we have \(l\geq n\). Suppose now that \(\beta^{\prime}\neq 0\) and that the 1st (resp. 2nd) correlator does not satisfy condition (ii) in Gathmann's vanishing theorem, that is \(j=1\) and \(d\leq 0\). Note that for the correlators involving \(\beta^{\prime\prime}\) we must have \(\beta^{\prime\prime}\neq 0\) because \(\phi_{b}|_{E}=0\) and \(d\neq 0\) due to the divisor equation for the divisor class \(e\). We get \(d<0\Rightarrow\) the correlator involving \(\beta^{\prime\prime}\) satisfies both conditions (i) and (ii) in Gathmann's vanishing theorem. Therefore, condition (iii) must fail, that is, \(n-j-1\geq(-d+1)(n-1)\) and \(0\geq(-d+1)(n-1)\). Since \(-d\geq 1\), both inequalities lead to a contradiction. It remains only the possibility that \(\beta^{\prime}\neq 0\) and that the correlators involving \(\beta^{\prime}\) satisfy condition (ii) in Gathmann's vanishing theorem. Then condition (iii) must fail, so \(j-1+l-1\geq(d+1)(n-1)\) and \(l-1\geq(d+1)(n-1)\). If \(d\geq 1\), then these inequalities will imply that \(l\geq n+1\). If \(d=0\), then \(\beta^{\prime\prime}=0\) otherwise the correlator involving \(\beta^{\prime\prime}\) will be \(0\) by the divisor equation. But then the correlator becomes \(\int_{\text{BI}(X)}e_{j}\cup e\cup\phi_{b}\) which is \(0\) because \(\phi_{b}\cup e=0\). Finally, if \(d\leq-1\), then, since \(\phi_{b}|_{E}=0\) we must have \(\beta^{\prime\prime}\neq 0\), so the correlator involving \(\beta^{\prime\prime}\) satisfies conditions (i) and (ii) in Gathmann's vanishing theorem. The 3rd condition must fail, that is, \(n-j-1\geq(-d+1)(n-1)\) and \(0\geq(-d+1)(n-1)\). Both inequalities are impossible and this completes the analysis in the 2nd case.
_Case 3:_ if \(\beta\neq 0\) and the correlator does satisfy condition (ii). Then condition (iii) in Gathmann's vanishing theorem does not hold, that is,
\[k-1\geq(d+1)(n-1)-l+1=d(n-1)+n-l.\]
This inequality is equivalent to \(k+l-d(n-1)\geq n+1\). We get that the correlators in this case contribute to the terms of order \(O(Q^{n+1})\).
Similarly, let us examine the correlators
\[\langle e^{k}\psi^{l-1},e^{k^{\prime\prime}}\rangle_{0,2,\beta+d\ell}(t)=\sum _{r=0}^{\infty}\sum_{b_{1},\ldots,b_{r}=2}^{N}\langle e^{k}\psi^{l-1},e^{k^{ \prime\prime}},\phi_{b_{1}},\ldots,\phi_{b_{r}}\rangle_{0,2+r,\beta+d\ell}\,t _{b_{1}}\cdots t_{b_{r}}.\]
Since \(\Delta(e_{k^{\prime\prime}})=-(n-k^{\prime\prime})e_{k^{\prime\prime}}\), we get that we have to prove that if the above correlator is non-zero, then \(k+l-d(n-1)-(n-k^{\prime\prime})\geq 0\) and that if the equality holds, then \(r=0\) and \(\beta=0\). Again we will consider 3 cases.
_Case 1_: if \(\beta=0\). Just like above, \(r=0\) because we can identify the correlator with a twisted GW invariant of the exceptional divisor and the restriction of \(\phi_{b_{i}}\) to \(E\) is \(0\). Using the dimension formula for the virtual fundamental cycle, we get
\[k+l-1+k^{\prime\prime}=\dim\big{[}\overline{\mathcal{M}}_{0,2}(\text{Bl}(X),d \ell)\big{]}^{\text{virt}}=n-1+(n-1)d.\]
We get
\[k+l-d(n-1)-(n-k^{\prime\prime})=k+l+k^{\prime\prime}-n-d(n-1)=0.\]
_Case 2:_ if \(\beta\neq 0\) and condition (ii) does not hold. Then \(d\leq 0\) and \(k-1+k^{\prime\prime}-1\leq 0\), that is, \(k=k^{\prime\prime}=1\). If \(d\leq-1\), then
\[k+l-d(n-1)-(n-k^{\prime\prime})\geq k+l+n-1-n+k^{\prime\prime}=1+l\geq 2.\]
The correlator contributes to the terms of order \(O(Q^{2})\). Suppose that \(d=0\). The insertion \(e^{k^{\prime\prime}}=e\) can be removed via the divisor equation, that is, the correlator in front of \(t_{b_{1}}\cdots t_{b_{r}}\) takes the form
\[(e^{k}\psi^{l-1},e^{k^{\prime\prime}},\phi_{b_{1}},\ldots,\phi_{b_{r}})_{0,2+r,\beta}=\langle e^{2}\psi^{l-2},\phi_{b_{1}},\ldots,\phi_{b_{r}}\rangle_{0,1+ r,\beta}.\]
The above correlator does satisfy condition (ii) of Proposition 2. Therefore, in order to have a non-trivial contribution, condition (iii) in Gathmann's vanishing theorem must fail, that is, \(1\geq n-1-l+2\) or equivalently \(l\geq n\). We get
\[k+l-d(n-1)-(n-k^{\prime\prime})=1+l-(n-1)=2+l-n\geq 2>0,\]
so the inequality that we need to prove holds, and it is strict.
_Case 3:_ if \(\beta\neq 0\) and condition (ii) holds. In other words, conditions (i) and (ii) in Gathmann's vanishing theorem (see Proposition 2) hold for the correlators
\[\langle e^{k}\psi^{l-1},e^{k^{\prime\prime}},\phi_{b_{1}},\ldots,\phi_{b_{r}} \rangle_{0,2+r,\beta+d\ell}.\]
Again, in order to have a non-trivial contribution, condition (iii) must fail, so
\[k-1+k^{\prime\prime}-1\geq(d+1)(n-1)-l+1=d(n-1)+n-l,\]
or equivalently \(k+l+k^{\prime\prime}\geq 2+n+d(n-1)\). We get
\[k+l-d(n-1)-(n-k^{\prime\prime})=k+l+k^{\prime\prime}-n-d(n-1)\geq 2>0.\]
This completes the proof of the lemma.
Note that
\[\widetilde{\rho}^{i}e^{k}=(c_{1}(TX)-(n-1)e)^{i}e^{k}=(-n+1)^{i}e^{k+i}\]
and that \(\widetilde{\theta}(e^{k+i})=\left(\frac{n}{2}-k-i\right)\,e^{k+i}\). Therefore, using formula (22) and Lemma 3 we get the following proposition
**Proposition 3**.: _The following formula holds:_
\[\left(Q^{\Delta}\,I^{(-m)}(t,q,Q,Q^{-1}\lambda)\,Q^{\widetilde{ \rho}}\,Q^{\widetilde{\theta}+m-1/2}\,Q^{-\Delta}\right)e^{k}=\] \[\sum_{l,d=0}^{\infty}\,\sum_{i=0}^{n-1-k}\,\sum_{k^{\prime\prime}= 1}^{n-1}\langle e^{k+i}\psi^{l-1},e^{k^{\prime\prime}}\rangle_{0,2,d\ell}\,e _{k^{\prime\prime}}\,(-\partial_{\lambda})^{l}\frac{(-(n-1)\partial_{m})^{i}} {i!}\left(\frac{\lambda^{\frac{n-1}{2}+m-k-i}}{\Gamma(\frac{n-1}{2}+m-k-i+1)} \right)+\] \[\sum_{l,d=0}^{\infty}\,\sum_{i=0}^{n-1-k}\langle e^{k+i}\psi^{l- 1},1\rangle_{0,2,d\ell}\,(-1)^{n-1}Q^{n}\,\phi_{N}\,(-\partial_{\lambda})^{l} \frac{(-(n-1)\partial_{m})^{i}}{i!}\left(\frac{\lambda^{\frac{n-1}{2}+m-k-i}}{ \Gamma(\frac{n-1}{2}+m-k-i+1)}\right)+\] \[O(Q\widetilde{H}(E)+Q^{n+1}H(X)),\]
_where \(1\leq k\leq n-1\) and the notation involving \(O\) is the same as in Lemma 3._
**Proposition 4**.: _If \(2\leq a\leq N\), then the following formula holds:_
\[\left(Q^{\Delta}\,I^{(-m)}(t,q,Q,Q^{-1}\lambda)\,Q^{\widetilde{\rho}}\,Q^{ \widetilde{\theta}+m-1/2}\,Q^{-\Delta}\right)\phi_{a}=\frac{\lambda^{\theta+m-1 /2}}{\Gamma(\theta+m+1/2)}\,\phi_{a}+O(Q).\]
_Moreover, in the above expansion, the \(H(X)\)-component of the coefficient in front of \(Q^{m}\) for \(0\leq m\leq n-1\) is a Laurent polynomial in \(\lambda^{1/2}\) (with coefficients in \(H(X)\))._
Proof.: Using formula (22), we get that the LHS of the formula that we want to prove is equal to
\[Q^{\Delta}\sum_{i,l=0}^{\infty}(-1)^{l}Q^{l}S_{l}(t,q,Q)\frac{\partial_{m}^{i} }{i!}\left(\frac{\lambda^{\overline{\theta}+m-l-1/2}}{\Gamma(\overline{ \theta}+m-l+1/2)}\left(Q\widetilde{\rho}\right)^{i}\right)\phi_{a}.\]
Since \(\widetilde{\rho}\phi_{a}=\rho\phi_{a}\), the above formula takes the form
\[\sum_{l=0}^{\infty}\sum_{i=0}^{\infty}Q^{\Delta+l+i}S_{l}(t,q,Q)\rho^{i}\phi_{ a}(-\partial_{\lambda})^{l}\frac{\partial_{m}^{i}}{i!}\left(\frac{\lambda^{(n-1)/2+m -i-\deg(\phi_{a})}}{\Gamma((n+1)/2+m-i-\deg(\phi_{a}))}\right). \tag{24}\]
Note that the term in the above double sum corresponding to \(l=i=0\) coincides with the leading order term on the RHS of the formula that we have to prove. Therefore, recalling the definition of the calibration, we get that we have to prove the following two statements. First, if \(l+i>0\), then the following expression
\[\sum_{b=1}^{N}\langle\rho^{i}\phi_{a}\psi^{l-1},\phi_{b}\rangle_{0,2,\beta+d \ell}(t)\phi^{b}Q^{l+i-d(n-1)}+\sum_{k=1}^{n-1}\langle\rho^{i}\phi_{a}\psi^{l -1},e^{k}\rangle_{0,2,\beta+d\ell}(t)e_{k}Q^{k+l+i-d(n-1)-n} \tag{25}\]
has order at least \(O(Q)\) for all \(\beta+d\ell\in\operatorname{Eff}(\operatorname{Bl}(X))\). Second, there are only finitely many \(d\) and \(l\), such that, in the first sum the coefficient in front of \(Q^{m}\) for \(0\leq m\leq n-1\) is non-zero. Let us consider the correlators in the first sum.
_Case 1_: if \(\beta=0\). Then the correlator is a twisted GW invariant of the exceptional divisor \(E\) and since \(\phi_{a}|_{E}=0\), we get that the correlator must be \(0\).
_Case 2_: if \(\beta\neq 0\) and condition (ii) in Gathmann's vanishing theorem does not hold. Then \(d\leq 0\Rightarrow l+i-d(n-1)\geq l+i>0\). In order for the correlator to contribute to the coefficient in front of \(Q^{m}\) for some \(0\leq m\leq n-1\), we must have \(-1\leq d\leq 0\) and \(0\leq l<n\). Clearly, there are only finitely many \(d\) and \(l\) satisfying these inequalities.
_Case 3_: if \(\beta\neq 0\) and condition (ii) holds. Then condition (iii) in Gathmann's vanishing theorem must fail, that is, \(0\geq(d+1)(n-1)-l+1\) or equivalently \(l-d(n-1)\geq n\). The power of \(Q\) is \(l+i-d(n-1)\geq n+i\). Therefore, the correlators satisfying the conditions of this case contribute only to the coefficients in front of \(Q^{m}\) with \(m\geq n\).
The argument for the correlators in the second sum is similar.
_Case 1_: if \(\beta=0\). Then the correlator is a twisted GW invariant of the exceptional divisor \(E\) and since \(\phi_{a}|_{E}=0\), we get that the correlator must be \(0\).
_Case 2_: if \(\beta\neq 0\) and condition (ii) in Gathmann's vanishing theorem does not hold. Then \(d\leq 0\) and \(k=1\). The divisor equation implies that if \(d=0\), then the correlator vanishes \(\Rightarrow d\leq-1\). We get \(k+l+i-d(n-1)-n=l+i-(d+1)(n-1)\geq l+i>0\).
_Case 3_: if \(\beta\neq 0\) and condition (ii) holds. Then condition (iii) in Gathmann's vanishing theorem must fail, that is, \(k-1\geq(d+1)(n-1)-l+1\) or equivalently \(k+l-d(n-1)\geq n+1\). The power of \(Q\) is \(k+l+i-d(n-1)-n\geq i+1\geq 1\).
**Proposition 5**.: _The following formula holds:_
\[\sum_{d,l\geq 0}\sum_{i=0}^{n-1}\sum_{k=1}^{n-1}((-(n-1)e)^{i}\psi^{l-1 },e^{k})_{0,2,d\ell}\,e_{k}\,(-\partial_{\lambda})^{l}\frac{\partial_{m}^{i}}{i!} \left(\frac{\lambda^{(n-1)/2+m-i}}{\Gamma((n+1)/2+m-i)}\right)+ \tag{27}\] \[\sum_{l=1}^{\infty}\sum_{\beta\in\operatorname{Eff}(X)}\langle \psi^{l-1},e\rangle_{0,2,\beta}(t)\,e_{1}\,q^{\beta}Q^{l-n+1}\,(-\partial_{ \lambda})^{l}\left(\frac{\lambda^{(n-1)/2+m}}{\Gamma((n+1)/2+m)}\right)+O(Q), \tag{26}\]
_where \(e_{k}=(-1)^{n-1}e^{n-k}\) and the correlator_
\[\langle(-(n-1)e)^{i}\psi^{l-1},e^{k}\rangle_{0,2,d\ell}=\langle(-(n-1)e)^{i} \psi^{l},e^{k},1\rangle_{0,3,d\ell}\]
_can be defined also for \(l=0\)._
Proof.: Note that if \(i>0\), then \(\widetilde{\rho}^{i}=\rho^{i}+(-(n-1)e)^{i}\). Just like in the proof of Proposition 4, we get that the LHS of the identity that we would like to prove is equal to the sum of (24) with \(a=1\) and
\[\sum_{l=0}^{\infty}\sum_{i=1}^{\infty}Q^{\Delta+l+i}S_{l}(t,q,Q)(-(n-1)e)^{i}( -\partial_{\lambda})^{l}\frac{\partial_{m}^{i}}{i!}\left(\frac{\lambda^{(n-1)/ 2+m-i}}{\Gamma((n+1)/2+m-i)}\right). \tag{28}\]
Let us discuss first the contribution of (24). The same argument as in the proof of Proposition 4 yields that if \(i>0\), then the corresponding terms in the sum have order at least \(O(Q)\). If \(i=0\) and \(l=0\), then the corresponding term in the sum becomes \(\lambda^{\theta+m-1/2}/\Gamma(\theta+m+1/2)\) which is precisely the first term on the RHS of the formula that we would like to prove. Finally, we are left with the case \(i=0\) and \(l\geq 1\). By definition \(Q^{\Delta+l}S_{l}(t,q,Q)\phi_{1}\) is
\[\sum_{\beta+d\ell}\sum_{b=1}^{N}\langle\psi^{l-1},\phi_{b}\rangle_{0,2,\beta+ d\ell}(t)\phi^{b}Q^{l-d(n-1)}+\sum_{k=1}^{n-1}\langle\psi^{l-1},e^{k}\rangle_{0,2, \beta+d\ell}(t)e_{k}\,q^{\beta}\,Q^{k+l-d(n-1)-n}, \tag{29}\]
where the first sum is over all effective curve classes \(\beta+d\ell\in\operatorname{Eff}(\operatorname{Bl}(X))\). Let us consider the following 3 cases for the correlators in (29)
_Case 1:_ if \(\beta=0\). We may assume that \(t=0\) because \(t|_{E}=0\). The sum over \(b\) is non-zero only if \(\phi_{b}=1\), that is, \(b=1\). Recalling the formula for the dimension of the moduli space, we get
\[l-1=-1+n+d(n-1)\quad\Rightarrow\quad l-d(n-1)=n.\]
Therefore, in this case the contribution has order \(O(Q^{n})\). The sum over \(k\) in (29) (when \(\beta=0\)) is independent of \(Q\) because by matching the degree of the correlator insertion with the dimension of the virtual fundamental cycle we get \(k+l-1=-1+n+d(n-1)\). Therefore, the contribution to the sum (24) with \(a=1\) of the terms with \(i=0\), \(l\geq 1\), and degree \(\beta=0\) is
\[\sum_{l=1}^{\infty}\sum_{d=0}^{\infty}\sum_{k=1}^{n-1}\langle\psi^{l-1},e^{k}\rangle_{0,2,d\ell}\,e_{k}\,(-\partial_{\lambda})^{l}\left(\frac{\lambda^{(n-1)/2+m}}{\Gamma((n+1)/2+m)}\right).\]
Note that the above sum coincides with the \(i=0\) component of (26).
_Case 2:_ if \(\beta\neq 0\) and condition (ii) in Gathmann's vanishing theorem does not hold. Since \(d\leq 0\) and \(l\geq 1\) the sum over \(b\) has order at least \(O(Q)\). For the sum over \(k\), only for \(k=1\) condition (ii) does not hold and if \(d\leq-1\) then the term has order at least \(O(Q)\). Therefore, only the terms with \(k=1\) and \(d=0\) satisfy the conditions of this case and do not have order \(O(Q)\). The corresponding
contribution to the sum (24) with \(a=1\) becomes
\[\sum_{l\geq 1}\langle\psi^{l-1},e\rangle_{0,2,\beta}(t)e_{1}\,q^{\beta}\,Q^{1+l- n}\,(-\partial_{\lambda})^{l}\left(\frac{\lambda^{(n-1)/2+m}}{\Gamma((n+1)/2+m)} \right).\]
Note that the above sum coincides with the sum in (27).
_Case 3:_ if \(\beta\neq 0\) and condition (ii) holds. Then condition (iii) does not hold. For the correlators in the sum over \(b\) we get \(0\geq(d+1)(n-1)-l+1=d(n-1)+n-l\). This inequality implies that the sum over \(b\) has order \(O(Q^{n})\). Similarly, for the correlators in the sum over \(k\) we get
\[k-1\geq(d+1)(n-1)-l+1=d(n-1)+n-l\quad\Rightarrow\quad k+l-d(n-1)-n\geq 1.\]
In other words, the sum over \(k\) has order at least \(O(Q)\).
This completes the analysis of the contributions from the sum (24) with \(a=1\). It remains to analyze the contributions from the sum (28). This is done in a similar way. To begin with, note that the sum of the terms with \(l=0\) is equal to
\[\sum_{i=1}^{n}Q^{\Delta+i}(-(n-1)e)^{i}\frac{\partial_{m}^{i}}{i!}\left(\frac{\lambda^{(n-1)/2+m-i}}{\Gamma((n+1)/2+m-i)}\right).\]
Note that only the term with \(i=n\) depends on \(Q\), that is, it has order \(O(Q^{n})\). Therefore, up to terms of order \(O(Q)\) the above sum coincides with the sum of the terms in (26) with \(l=0\) and \(i\geq 1\). Suppose that \(l>0\). By definition, \(Q^{\Delta+l+i}S_{l}(t,q,Q)e^{i}\) is equal to
\[\sum_{\beta+d\ell}\sum_{b=1}^{N}\langle e^{i}\psi^{l-1},\phi_{b}\rangle_{0,2, \beta+d\ell}(t)\phi^{b}Q^{l+i-d(n-1)}+\sum_{k=1}^{n-1}\langle e^{i}\psi^{l-1}, e^{k}\rangle_{0,2,\beta+d\ell}(t)e_{k}\,q^{\beta}\,Q^{k+l+i-d(n-1)-n}. \tag{30}\]
Let us consider the following 3 cases for the correlators in the above sum.
_Case 1:_ if \(\beta=0\). Again since \(t|_{E}=0\), we may assume that \(t=0\). Let us consider first the correlators in the sum over \(b\). Since \(\phi_{b}|_{E}=0\) for \(b>1\), the only non-trivial contribution will come from the term with \(b=1\). The dimension constraint now yields \(i+l-1=-1+n+d(n-1)\Rightarrow l+i-d(n-1)=n\). Therefore, the contribution to (28) has order \(O(Q^{n})\). Let us consider now the correlators in the sum over \(k\). The dimension constraint takes the form \(i+l-1+k=-1+n+d(n-1)\)\(\Rightarrow k+l+i-d(n-1)-n=0\). Therefore, the contribution of these terms to the sum (28) is
\[\sum_{l=1}^{\infty}\sum_{i,k=1}^{n-1}\sum_{d=0}^{\infty}\langle(-(n-1)e)^{i}\psi^{l-1},e^{k}\rangle_{0,2,d\ell}\,e_{k}\,(-\partial_{\lambda})^{l}\frac{\partial_{m}^{i}}{i!}\left(\frac{\lambda^{(n-1)/2+m-i}}{\Gamma((n+1)/2+m-i)}\right).\]
The above sum coincides with the sum of the terms in (26) with \(l\geq 1\) and \(i\geq 1\).
Note that at this point all terms in the formula that we would like to prove are already matched with contributions from (24) and (28). It remains only to check that in the remaining two cases the contributions have order \(O(Q)\).
_Case 2:_ if \(\beta\neq 0\) and condition (ii) does not hold. For the sum over \(b\), since \(d\leq 0\) and \(l\geq 1\), the powers of \(Q\) are positive. For the sum over \(k\), in addition to \(d\leq 0\) we also have \(i-1+k-1=0\), that is, \(i=k=1\). If \(d\leq-1\), then the power of \(Q\) is positive. Suppose that \(d=0\). Using the divisor equation we get \(\langle e\psi^{l-1},e\rangle_{0,2,\beta}(t)=\langle e^{2}\psi^{l-2}\rangle_{0,1,\beta}(t)\). The latter satisfies both conditions (i) and (ii) of Gathmann's vanishing theorem. In order for the correlator to be non-zero, condition (iii) must fail, that is, \(1\geq(d+1)(n-1)-l+2=d(n-1)+n-l+1\). The power of \(Q\) becomes
\[k+l+i-d(n-1)-n=2+l-d(n-1)-n\geq 2.\]
_Case 3:_ if \(\beta\neq 0\) and condition (ii) holds. Then condition (iii) must fail. For the correlators in the sum over \(b\) we get \(i-1\geq(d+1)(n-1)-l+1=d(n-1)+n-l\Rightarrow\) the power of \(Q\) is \(l+i-d(n-1)\geq n+1\).
For the correlators in the sum over \(k\) we get \(i-1+k-1\geq d(n-1)+n-l\Rightarrow\) the power of \(Q\) is \(k+l+i-d(n-1)-n\geq 2\).
### Quantum cohomology of the blowup
Let us recall the result of Bayer [3]. Suppose that \(t\in\widetilde{H}^{*}(X)\subset H^{*}(\operatorname{Bl}(X))\), that is, \(t_{1}=t_{N+1}=\cdots=t_{N+n-1}=0\). Let us denote by \(\widetilde{\Omega}_{i}(t,q,Q)\) (\(1\leq i\leq N+n-1\)) the linear operator in \(H^{*}(\operatorname{Bl}(X))\) defined by quantum multiplication \(\phi_{i}\bullet_{t,q,Q}\) for \(1\leq i\leq N\) and by quantum multiplication by \(e^{k}\bullet_{t,q,Q}\) for \(i=N+k\), \(1\leq k\leq n-1\). Slightly abusing the notation let us denote by the same letters \(\widetilde{\Omega}_{i}\) the matrices of the corresponding linear operators with respect to the basis \(\phi_{i}\) (\(1\leq i\leq N+n-1\)), where recall that \(\phi_{N+k}:=e^{k}\) (\(1\leq k\leq n-1\)). Note that the matrix of \(\Delta\) is diagonal with diagonal entries \(0,\ldots,0,-1,-2,\ldots,-n+1\) (\(0\) appears \(N\) times). The main observation of Bayer (see Section 3.4 in [3]) can be stated as follows.
**Proposition 6**.: _The matrices of the linear operators \(\,\widetilde{\Omega}_{i}\) (\(1\leq i\leq N+n-1\)) with respect to the basis \(Q^{-\Delta}\phi_{i}\) (\(1\leq i\leq N+n-1\)) have the following Laurent series expansions at \(Q=0\):_
\[Q^{\Delta}\,\widetilde{\Omega}_{i}(t,q,Q)\,Q^{-\Delta}=\begin{bmatrix}\Omega _{i}(t,q)+O(Q^{n-1})&O(Q^{n})\\ O(1)&\delta_{i,1}\operatorname{Id}_{n-1}+O(Q)\end{bmatrix},\quad 1\leq i\leq N, \tag{31}\]
_where \(\operatorname{Id}_{n-1}\) is the identity matrix of size \((n-1)\times(n-1)\) and_
\[Q^{\Delta}\,\widetilde{\Omega}_{N+a}(t,q,Q)\,Q^{-\Delta}=Q^{-a}\begin{bmatrix} O(Q^{n})&O(Q^{n})\\ O(1)&\epsilon^{a}+O(Q^{2})\end{bmatrix},\quad 1\leq a\leq n-1, \tag{32}\]
_where \(\Omega_{i}(t,q)\) is the matrix of the linear operator in \(H^{*}(X)\) defined by quantum multiplication by \(\phi_{i}\bullet_{t,q}\) with respect to the basis \(\phi_{i}\) (\(1\leq i\leq N\)) and \(\epsilon\) is the following \((n-1)\times(n-1)\) -matrix:_
\[\epsilon=\begin{bmatrix}0&0&\cdots&0&(-1)^{n}\\ 1&0&\cdots&&0\\ \vdots&\vdots&\ddots&&\vdots\\ 0&0&\cdots&0&0\\ 0&0&\cdots&1&0\end{bmatrix}.\]
Proof.: The proof is based on Gathmann's vanishing theorem and it is very similar to the proof of Lemma 3. Since the proofs of (31) and (32) are similar, let us prove only (32). We have
\[Q^{\Delta}(e^{a}\bullet(Q^{-\Delta}e^{k}))=\sum_{\widetilde{\beta}=\beta+d \ell}\left(\sum_{j=1}^{N}\langle e^{a},e^{k},\phi^{j}\rangle_{0,3,\beta+d \ell}(t)\,\phi_{j}+\sum_{l=1}^{n-1}\langle e^{a},e^{k},e_{l}\rangle_{0,3, \beta+d\ell}(t)\,e^{l}\,Q^{-l}\right)\!q^{\beta}Q^{k-d(n-1)}.\]
Let us examine the correlators in the sum over \(j\), that is,
\[\langle e^{a},e^{k},\phi^{j}\rangle_{0,3,\beta+d\ell}(t)\,Q^{k-d(n-1)}.\]
There are \(3\) cases.
_Case 1:_ if \(\beta=0\). The correlator is a twisted GW invariant of the exceptional divisor \(E\). The restriction \(\phi^{j}|_{E}\) is non-zero only if \(\phi^{j}=1\), that is, \(j=N\). Recalling the string equation we get that the correlator is non-zero only if \(\phi^{j}=1\) and \(d=0\). Therefore, the contribution takes the form
\[\left(\int_{\operatorname{Bl}(X)}e^{a}\cup e^{k}\cup 1\right)\phi_{N}\,Q^{k}=(-1)^{n-1}\delta_{a+k,n}\,Q^{n-a}\,\phi_{N}.\]
_Case 2:_ if \(\beta\neq 0\) and condition (ii) in Gathmann's vanishing theorem does not hold. Here we have in mind the correlator \(\langle e^{a},e^{k},\phi^{j}\rangle_{0,3,\beta+d\ell}(t)\). Note that the weight of this correlator is \(a-1+k-1\). If condition (ii) does not hold, then \(a-1+k-1\leq 0\) and \(d\leq 0\). Since \(a,k\geq 1\), this case is possible only if \(a=k=1\). Moreover, if \(d=0\), then the correlator vanishes by the divisor equation. Therefore,
we may assume that \(d\leq-1\). The power of \(Q\) becomes \(k-d(n-1)\geq 1+n-1=n\), that is, the contribution in this case has order \(O(Q^{n})=O(Q^{n+1-a})\).
_Case 3:_ if \(\beta\neq 0\) and condition (ii) holds. According to Gathmann's vanishing theorem, condition (iii) does not hold, that is,
\[a-1+k-1\geq(d+1)(n-1)=d(n-1)+n-1\quad\Rightarrow\quad k-d(n-1)\geq n+1-a.\]
We get that the contribution in this case has order \(O(Q^{n+1-a})\).
Combining the results of the 3 cases, we get that the sum over \(j\) has the form
\[Q^{-a}\Big{(}(-1)^{n-1}\delta_{a+k,n}\,Q^{n}\phi_{N}+O(Q^{n+1})\Big{)}.\]
Let us examine the correlators in the sum over \(l\). Just like above, there are 3 cases.
_Case 1:_ if \(\beta=0\). The correlator \(\langle e^{a},e^{k},e_{l}\rangle_{0,3,d\ell}(t)\) can be computed explicitly. Indeed, such a correlator is a twisted GW invariant of the exceptional divisor, so it is independent of \(t\in H^{*}(X)\), that is, we may substitute \(t=0\). Moreover, since \(d\ell\) must be an effective curve class in \(E\), we have \(d\geq 0\). Recall that \(e_{l}=(-1)^{n-1}e^{n-l}\) and note that the dimension of the virtual fundamental cycle of \(\overline{\mathcal{M}}_{0,3}(\operatorname{Bl}(X),d\ell)\) is \(d(n-1)+n\). Therefore, \(a+k-l=d(n-1)\). We conclude that \(d=0\) or \(d=1\), that is,
\[\langle e^{a},e^{k},e_{l}\rangle_{0,3,d\ell}(t)=\begin{cases}(-1)^{n}&\text{ if }d=1\text{ and }l=a+k-n+1,\\ 1&\text{ if }d=0\text{ and }l=a+k,\\ 0&\text{ otherwise.}\end{cases}\]
The contribution to the sum over \(l\) becomes
\[\begin{cases}(-1)^{n}e^{a+k-n+1}Q^{-a}\text{ if }a+k>n-1,\\ e^{a+k}Q^{-a}\text{ if }a+k\leq n-1.\end{cases} \tag{33}\]
Note that the matrix \(\epsilon^{a}\) has entries
\[\epsilon^{a}_{ij}=\begin{cases}(-1)^{n}&\text{ if }j=i+n-1-a,\\ 1&\text{ if }i=j+a,\\ 0&\text{ otherwise.}\end{cases}\]
Comparing with formula (33), we get that the contribution in this case to formula (32) coincides with the matrix \(Q^{-a}\epsilon^{a}\).
_Case 2:_ if \(\beta\neq 0\) and condition (ii) does not hold. Then \(d\leq 0\) and \(a-1+k-1+n-l-1\leq 0\). Since \(a,k,n-l\geq 1\), this case is possible only if \(a=k=1\) and \(l=n-1\). Since \(\beta\neq 0\), the divisor equation implies that \(d\neq 0\), that is, \(d\leq-1\). In other words, if condition (ii) in Gathmann's vanishing theorem does not hold, then the power of \(Q\) must be \(k-l-d(n-1)\geq 1-(n-1)+n-1=1=-a+2\).
_Case 3:_ if \(\beta\neq 0\) and condition (ii) holds. Then condition (iii) must fail, that is, \(a-1+k-1+n-l-1\geq(d+1)(n-1)=d(n-1)+n-1\), or equivalently \(k-l-d(n-1)\geq-a+2\).
Combining the results of these 3 cases, we get that the contribution of the sum over \(l\) matches the (2,2)-block of the matrix on the RHS in formula (32) with the factor \(Q^{-a}\) inserted.
In order to complete the argument, we have to repeat the above discussion by replacing \(e^{k}\) with \(\phi_{i}\)\((1\leq i\leq N)\), that is, we have to determine the contribution to the RHS of (32) of the following expression:
\[Q^{\Delta}\big{(}e^{a}\bullet(Q^{-\Delta}\phi_{i})\big{)}=\sum_{\vec{\beta}= \beta+d\ell}\left(\sum_{j=1}^{N}\langle e^{a},\phi_{i},\phi^{j}\rangle_{0,3, \beta+d\ell}(t)\,\phi_{j}+\sum_{l=1}^{n-1}\langle e^{a},\phi_{i},e_{l}\rangle_ {0,3,\beta+d\ell}(t)\,e^{l}\,Q^{-l}\right)q^{\beta}Q^{-d(n-1)}.\]
First, let us determine the contribution of the correlators in the sum over \(j\).
_Case 1:_ if \(\beta=0\). The correlator could be non-zero only if \(\phi^{j}=1\) and \(d=0\). In the latter case, since \(\int_{\operatorname{Bl}(X)}e^{a}\cup\phi_{i}\cup 1=0\), we get that the correlator still vanishes. There is no contribution in this case.
_Case 2:_ if \(\beta\neq 0\) and condition (ii) does not hold. Then \(d\leq 0\) and the weight \(a-1\leq 0\), that is, \(a=1\). Due to the divisor equation, \(d\neq 0\), so \(d\leq-1\Rightarrow-d(n-1)\geq n-1=n-a\). We get that the contribution in this case has order \(O(Q^{n-1})\).
_Case 3:_ if \(\beta\neq 0\) and condition (ii) does hold. Then condition (iii) does not hold, so \(a-1\geq(d+1)(n-1)\Rightarrow-d(n-1)\geq n-a\). We get that the contribution in this case is still of order \(O(Q^{n-a})\).
Combining the results of the 3 cases we get that the order of the elements in the (1,1)-block of the matrix \(Q^{\Delta}\widetilde{\Omega}_{N+a}Q^{-\Delta}\) is \(O(Q^{n-a})\), that is, the same as in formula (32).
Finally, it remains to determine the contribution of the correlators in the sum over \(l\).
_Case 1:_ if \(\beta=0\). The correlator could be non-zero only if \(\phi_{i}=1\), that is, \(i=1\) and \(d=0\). We get that the contribution in this case is \(\delta_{i,1}e^{a}Q^{-a}\).
_Case 2:_ if \(\beta\neq 0\) and condition (ii) does not hold. Then \(d\leq 0\) and the weight \(a-1+n-l-1\leq 0\), that is, \(a=n-l=1\). Due to the divisor equation, \(d\neq 0\), so \(d\leq-1\Rightarrow-l-d(n-1)\geq-l+n-1=0=-a+1\). The contribution in this case has order \(O(Q^{-a+1})\).
_Case 3:_ if \(\beta\neq 0\) and condition (ii) does hold. Then condition (iii) must fail. We get
\[a-1+n-l-1\geq(d+1)(n-1)\quad\Rightarrow\quad-l-d(n-1)\geq-a+1.\]
The contribution in this case also has order \(O(Q^{-a+1})\).
Combining the results in the 3 cases we get that the elements in the (2,1)-block of the matrix \(Q^{\Delta}\widetilde{\Omega}_{N+a}Q^{-\Delta}\) have the form \(Q^{-a}(E_{a,1}+O(Q))\), where \(E_{a,1}\) denotes the matrix whose \((a,1)\)-entry is \(1\) and the remaining entries are \(0\).
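The entry formula for \(\epsilon^{a}\) used in Case 1 of the preceding proof can also be confirmed by a direct computation for small \(n\). The following is a minimal numerical sketch of such a check (it is not part of the argument, and all names in it are ad hoc).

```python
import numpy as np

def eps_matrix(n):
    """(n-1)x(n-1) matrix eps: 1's on the subdiagonal, (-1)^n in the upper-right corner."""
    m = n - 1
    eps = np.zeros((m, m))
    for i in range(1, m):
        eps[i, i - 1] = 1.0
    eps[0, m - 1] = (-1) ** n
    return eps

def eps_power_predicted(n, a):
    """Entries of eps^a predicted by the formula in Case 1 (indices i, j are 1-based)."""
    m = n - 1
    out = np.zeros((m, m))
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            if j == i + n - 1 - a:
                out[i - 1, j - 1] = (-1) ** n
            elif i == j + a:
                out[i - 1, j - 1] = 1.0
    return out

for n in range(3, 8):
    for a in range(1, n):
        assert np.allclose(np.linalg.matrix_power(eps_matrix(n), a),
                           eps_power_predicted(n, a))
print("entry formula for eps^a checked for 3 <= n <= 7 and 1 <= a <= n-1")
```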
## 5. The exceptional component of a reflection vector
Suppose that \(\alpha\in H^{*}(\operatorname{Bl}(X))\) is a reflection vector. Let us decompose \(\alpha=\alpha_{e}+\alpha_{b}\), where \(\alpha_{e}\in\widetilde{H}^{*}(E)\) and \(\alpha_{b}\in H^{*}(X)\). We will refer to \(\alpha_{e}\) and \(\alpha_{b}\) as respectively the _exceptional_ and the _base_ components of \(\alpha\). Using Proposition 3 we would like to classify the exceptional components of the reflection vectors.
### Dependence on the Novikov variables
Since the quantum cohomology is a Frobenius manifold depending on the parameters \(q:=(q_{1},\ldots,q_{r})\) and \(q_{r+1}:=Q^{n-1}\), the reflection vectors depend on \(q_{i}\) too. We claim that if \(\alpha\) is a reflection vector, then
\[\alpha=q_{1}^{-p_{1}}\cdots q_{r}^{-p_{r}}q_{r+1}^{-e}\beta, \tag{34}\]
where \(\beta\in H^{*}(\operatorname{Bl}(X))\) is independent of \(q_{i}\) (\(1\leq i\leq r+1\)). To prove this fact, we will make use of the divisor equation. Suppose that the basis of divisor classes is part of the basis \(\{\phi_{i}\}_{1\leq i\leq N+n-1}\), such that, \(p_{i}=\phi_{i+1}\) for \(1\leq i\leq r\) and \(p_{r+1}=e=\phi_{N+1}\). Let \(\tau_{i}\) (\(1\leq i\leq r+1\)) be the linear coordinates corresponding to the divisor classes \(p_{i}\), that is, \(\tau_{i}:=t_{i+1}\) for \(1\leq i\leq r\) and \(\tau_{r+1}=t_{N+1}\). Using the divisor equation we get that the calibration satisfies the following differential equations:
\[z\frac{\partial}{\partial\tau_{i}}S(t,q,Q,z)=p_{i}\bullet S(t,q,Q,z)\] \[zq_{i}\frac{\partial}{\partial q_{i}}S(t,q,Q,z)=z\frac{\partial} {\partial\tau_{i}}S(t,q,Q,z)-S(t,q,Q,z)p_{i}\cup.\]
Therefore, \(S(t,q,Q,z)=T(t,q,Q,z)e^{\sum_{i=1}^{r+1}\tau_{i}p_{i}\cup/z}\), where for fixed \(z\) the operator series \(T(t,q,Q,z)\) is a function on the variables
\[t_{1},q_{1}e^{t_{2}},\ldots,q_{r}e^{t_{r+1}},t_{r+2},\ldots,t_{N},q_{r+1}e^{t_ {N+1}},t_{N+2},\ldots,t_{N+n-1}. \tag{35}\]
As we already pointed out before (see Section 2.5), due to the divisor equation, the operators of quantum multiplication \(\phi_{i}\bullet_{t,q,Q}\) are represented by matrices whose entries are functions in the variables (35) too. Since the canonical coordinates \(u_{i}(t,q,Q)\) are eigenvalues of \(E\bullet_{t,q,Q}\), it follows that they have the same property. Moreover, using the chain rule, we get that the partial derivatives \(\frac{\partial u_{j}}{\partial t_{a}}\) are also functions in (35). On the other hand, if \(\alpha\) is a reflection vector, then the Laurent series expansion of \(I_{\alpha}^{(-m)}(t,q,Q,\lambda)\) at a point \(\lambda=u_{i}(t,q,Q)\) has coefficients that are rational functions in the canonical coordinates \(u_{j}(t,q,Q)\) and their partial derivatives \(\frac{\partial u_{j}}{\partial t_{a}}\) (see Section 2.3). Therefore,
\[\Big{(}\frac{\partial}{\partial\tau_{i}}-q_{i}\frac{\partial}{\partial q_{i} }\Big{)}I_{\alpha}^{(-m)}(t,q,Q,\lambda)=0 \tag{36}\]
By definition
\[I^{(-m)}(t,q,Q,\lambda) =S(t,q,Q,-\partial_{\lambda}^{-1})\,\widetilde{I}^{(-m)}(\lambda)\] \[=T(t,q,Q,-\partial_{\lambda}^{-1})\,e^{-\sum_{i=1}^{r+1}\tau_{i}p_{i}\cup\partial_{\lambda}}\,\widetilde{I}^{(-m)}(\lambda)\] \[=T(t,q,Q,-\partial_{\lambda}^{-1})\,\widetilde{I}^{(-m)}(\lambda)\,e^{-\sum_{i=1}^{r+1}\tau_{i}p_{i}},\]
where for the last equality we used the following relation (see also the proof of Lemma 2, part a):
\[-p\cup\partial_{\lambda}\,\frac{\lambda^{\theta+\alpha-1}}{\Gamma(\theta+ \alpha)}=\frac{\lambda^{\theta+\alpha-1}}{\Gamma(\theta+\alpha)}\,(-p)\]
Since \(I_{\alpha}^{(-m)}(t,q,Q,\lambda)=I^{(-m)}(t,q,Q,\lambda)\alpha\), from equation (36) we get
\[q_{i}\frac{\partial\alpha}{\partial q_{i}}+p_{i}\cup\alpha=0,\quad\forall 1 \leq i\leq r+1.\]
Our claim that the reflection vector has the form (34) follows.
### Canonical coordinates
We would like to determine the dependence of the canonical coordinates \(u_{i}(t,q,Q)\) (\(1\leq i\leq N+n-1\)) on \(Q\), where the parameter \(t\in\widetilde{H}(X)\), that is, \(t_{1}=t_{N+1}=\cdots=t_{N+n-1}=0\). Using the identity \(u_{i}=\widetilde{E}(u_{i})\) we get
\[u_{i}(t,q,Q)=\sum_{a=2}^{N}(1-\deg\phi_{a})t_{a}\frac{\partial u_{i}}{ \partial t_{a}}(t,q,Q)+\sum_{j=1}^{r}\rho_{j}\frac{\partial u_{i}}{\partial \tau_{j}}(t,q,Q)-(n-1)\frac{\partial u_{i}}{\partial t_{N+1}}(t,q,Q), \tag{37}\]
where \(\rho_{j}\) are the coefficients in the decomposition \(c_{1}(TX)=\sum_{j=1}^{N}\rho_{j}p_{j}\) and \(\tau_{j}=t_{j+1}\). The above formula allows us to reduce the problem to investigating the dependence on \(Q\) of the partial derivatives \(\frac{\partial u_{i}}{\partial t_{j}}\) (\(1\leq i\leq N+n-1\), \(1\leq j\leq N+1\)). The advantage now is that the eigenvalues of the operator \(\widetilde{\Omega}_{j}(t,q,Q)=\phi_{j}\bullet_{t,q,Q}\) of quantum multiplication by \(\phi_{j}\) are precisely \(\frac{\partial u_{i}}{\partial t_{j}}\) (\(1\leq i\leq N+n-1\)).
**Lemma 4**.: _Suppose that \(U(Q)\) is a square matrix of size \(k\times k\) whose entries are functions holomorphic at \(Q=0\)._
_a) There exists an integer \(b>0\), such that, every eigenvalue of \(U(Q)\) has an expansion of the form \(\lambda_{0}+\sum_{i=1}^{\infty}\lambda_{i}Q^{i/b}\)._
_b) If \(\lambda_{0}\) is an eigenvalue of \(U(0)\) of multiplicity \(1\), then \(U(Q)\) has a unique eigenvalue of multiplicity one of the form \(\lambda_{0}+\sum_{i=1}^{\infty}\lambda_{i}Q^{i}\)._
Proof.: The eigenvalues are roots of the characteristic polynomial \(\det(\lambda-U(Q))\). This is a monic polynomial in \(\lambda\) of degree \(k\) with coefficients in \(\mathbb{C}\{Q\}\), the ring of convergent power series in \(Q\). Therefore, in order to prove a), it is sufficient to prove the following statement. Let \(f(Q,\lambda)\in\mathbb{C}\{Q\}[\lambda]\) be a monic polynomial. Then the roots of \(f(Q,\lambda)\) have the expansion stated in the lemma. Let us decompose
\[f(0,\lambda)=(\lambda-w_{1})^{b_{1}}\cdots(\lambda-w_{s})^{b_{s}}\]
where \(w_{i}\neq w_{j}\) for \(i\neq j\). Recalling Hensel's lemma (see [14], Chapter 2, Section 2), we get that \(f(Q,\lambda)=f_{1}(Q,\lambda)\cdots f_{s}(Q,\lambda)\), where \(f_{i}(Q,\lambda)\in\mathbb{C}\{Q\}[\lambda]\) is a monic polynomial of degree \(b_{i}\), such that, \(f_{i}(0,\lambda)=(\lambda-w_{i})^{b_{i}}\). Note that if \(b_{i}=1\) for some \(i\), then the unique zero of \(f_{i}(Q,\lambda)=0\) is holomorphic at \(Q=0\) and its value at \(Q=0\) is \(w_{i}\). Therefore, part b) is an elementary consequence of Hensel's lemma. If \(s>1\), then the lemma follows from the inductive assumption. Suppose that \(s=1\), that is,
\[f(Q,\lambda)=\lambda^{k}+a_{1}(Q)\lambda^{k-1}+\cdots+a_{k}(Q),\]
where \(a_{i}(0)=0\). We may assume that the sub-leading coefficient \(a_{1}(Q)=0\). Indeed, using the substitution \(\lambda\mapsto\lambda-a_{1}(Q)/k\) we can transform the polynomial to one for which the sub-leading coefficient is \(0\). The roots of the two polynomials are related by a shift of \(a_{1}(Q)/k\), so it is sufficient to prove our claim for one of them. Let \(\operatorname{ord}(a_{i})\) be the order of vanishing of \(a_{i}\) at \(Q=0\). If \(a_{i}(Q)=0\), then we define the order of vanishing to be \(+\infty\). Put
\[\nu:=\min_{1\leq i\leq k}\ \frac{\operatorname{ord}(a_{i})}{i}.\]
Substituting \(\lambda=Q^{\nu}\mu\) in the equation \(f(Q,\lambda)=0\) and dividing by \(Q^{\nu k}\) we get
\[\mu^{k}+\sum_{i=2}^{k}a_{i}(Q)Q^{-\nu i}\mu^{k-i}=0.\]
Since \(\operatorname{ord}(a_{i})\geq\nu i\) with equality for at least one \(i\), we get that the LHS of the above equation is a monic polynomial \(g(Q^{1/b},\mu)\) in \(\mathbb{C}\{Q^{1/b}\}[\mu]\) for some integer \(b>0\). Note that \(g(0,\mu)\) has at least two different zeroes, because its sub-leading coefficient is \(0\) while not all of its coefficients vanish. Therefore, just like above, we can use Hensel's lemma to reduce the proof to a case in which the inductive assumption can be applied. This completes the proof.
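The two statements of Lemma 4 are easy to illustrate on concrete matrices. The following short sympy sketch (the matrices below are ad hoc examples, not objects appearing in the paper) shows a case where fractional powers of \(Q\) appear, and a case where simple eigenvalues of \(U(0)\) deform holomorphically.

```python
import sympy as sp

Q = sp.symbols('Q')

# Part a): the eigenvalues may involve fractional powers of Q (here b = 2).
U1 = sp.Matrix([[0, Q], [1, 0]])
print(U1.eigenvals())                 # {-sqrt(Q): 1, sqrt(Q): 1}

# Part b): simple eigenvalues of U(0) deform holomorphically in Q.
U2 = sp.Matrix([[1 + Q, Q], [Q, Q]])  # U2(0) has the simple eigenvalues 1 and 0
for ev in U2.eigenvals():
    # integer-power expansions: 1 + Q + Q**2 + ...  and  Q - Q**2 + ...
    print(sp.series(ev, Q, 0, 3))
```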
**Proposition 7**.: _Let \(u_{j}(t,q,Q)\) (\(1\leq j\leq N+n-1\)) be the canonical coordinates of the quantum cohomology of \(\operatorname{Bl}(X)\), where the parameter \(t\in\widetilde{H}(X)\). After renumbering, the canonical coordinates split into two groups_
\[u_{j}(t,q,Q)\in\mathbb{C}\{Q\},\quad 1\leq j\leq N,\]
_and_
\[u_{j}(t,q,Q)=-(n-1)v_{k}Q^{-1}+O(1),\quad j=N+k,\quad 1\leq k\leq n-1,\]
_where \(v_{k}\) (\(1\leq k\leq n-1\)) are the solutions of the equation \(\lambda^{n-1}=(-1)^{n}\)._
Proof.: Let us apply the above Lemma to the matrix of the linear operator
\[\sum_{a=2}^{N}(1-\deg\phi_{a})t_{a}\widetilde{\Omega}_{a}(t,q,Q)+\sum_{j=1}^{ r}\rho_{j}\widetilde{\Omega}_{j+1}(t,q,Q)+Q\widetilde{\Omega}_{N+1}(t,q,Q) \tag{38}\]
with respect to the basis \(Q^{-\Delta}\phi_{i}\) (\(1\leq i\leq N+n-1\)). Recalling Proposition 6 we get that the entries of the matrix of the operator (38) are holomorphic at \(Q=0\) and that its specialization to \(Q=0\) has the form
\[\begin{bmatrix}E\bullet_{t,q}&0\\ \ast&\epsilon\end{bmatrix}.\]
The eigenvalues of the above matrix are the canonical coordinates \(u_{i}^{X}(t,q)\) (\(1\leq i\leq N\)) of the quantum cohomology of \(X\) and the solutions \(v_{k}\) (\(1\leq k\leq n-1\)) of the equation \(\lambda^{n-1}=(-1)^{n}\). Note that for a generic choice of \(t\) the eigenvalues are pairwise distinct. On the other hand, the canonical vector fields \(\frac{\partial}{\partial u_{j}}\) (\(1\leq j\leq N+n-1\)) form an eigenbasis for the operator (38). Let us enumerate the canonical coordinates in such a way that the eigenvalues corresponding to \(\frac{\partial}{\partial u_{j}}\) for \(1\leq j\leq N\) and \(j=N+k\) with \(1\leq k\leq n-1\) are respectively \(u_{j}^{X}(t,q)+O(Q)\) and \(v_{k}+O(Q)\). Recall that the eigenvalues of the operators \(\widetilde{\Omega}_{a}(t,q,Q)\) are \(\frac{\partial u_{j}}{\partial t_{a}}(t,q,Q)\) (\(1\leq j\leq N+n-1\)). Recalling Lemma 4, b), we get that the functions
\[E(u_{j})+Q\frac{\partial u_{j}}{\partial t_{N+1}}\quad(1\leq j\leq N+n-1)\]
are holomorphic at \(Q=0\), where \(E\coloneqq\sum_{a=2}^{N}(1-\deg(\phi_{a}))t_{a}\partial/\partial t_{a}+\sum_{ j=1}^{r}\rho_{j}\partial/\partial t_{j+1}\). Moreover, the restriction to \(Q=0\) satisfies
\[\Big{(}E(u_{j})+Q\frac{\partial u_{j}}{\partial t_{N+1}}\Big{)}\Big{|}_{Q=0}= \begin{cases}u_{j}^{X}(t,q)&\text{ if }1\leq j\leq N,\\ v_{k}&\text{ if }j=N+k.\end{cases}\]
On the other hand, note that \(E(u_{j})\) are the eigenvalues of the matrix
\[\sum_{a=2}^{N}(1-\deg\phi_{a})t_{a}\widetilde{\Omega}_{a}(t,q,Q)+\sum_{j=1}^{r }\rho_{j}\widetilde{\Omega}_{j+1}(t,q,Q)\]
and that the restriction of the above matrix at \(Q=0\) is
\[\begin{bmatrix}E\bullet_{t,q}&0\\ *&0\end{bmatrix}.\]
Recalling Lemma 4, we get that \(N\) of the eigenvalues \(E(u_{j})\) (\(1\leq j\leq N+n-1\)) are holomorphic at \(Q=0\) and have the form \(u_{i}^{X}(t,q)+O(Q)\) (\(1\leq i\leq N\)), while the remaining \(n-1\) ones have order \(O(Q^{\alpha})\) for some rational number \(\alpha>0\). Similarly, by applying Lemma 4 to the matrix \(Q\widetilde{\Omega}_{N+1}\), we get that its eigenvalues \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) split into two groups. The first group consists of \(n-1\) functions holomorphic at \(Q=0\) with an expansion of the form \(v_{k}+O(Q)\), while the second group consists of \(N\) functions that have an expansion in possibly fractional powers of \(Q\) of order \(O(Q^{\beta})\) for some \(\beta>0\). Let \((t,q)\) be generic, such that, the canonical coordinates \(u_{i}^{X}(t,q)\) (\(1\leq i\leq N\)) are pairwise distinct and non-zero. Then for every \(1\leq j\leq N+n-1\), the sum \(E(u_{j})+Q\frac{\partial u_{j}}{\partial t_{N+1}}\) is non-zero at \(Q=0\), so the two functions \(E(u_{j})\) and \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) cannot both vanish at \(Q=0\), that is, either \(E(u_{j})\) is holomorphic at \(Q=0\) of the form \(u_{i}^{X}(t,q)+O(Q)\) or \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) is holomorphic at \(Q=0\) of the form \(v_{k}+O(Q)\). In the first case, since \(E(u_{j})\) is holomorphic at \(Q=0\) and the sum \(E(u_{j})+Q\frac{\partial u_{j}}{\partial t_{N+1}}\) is also holomorphic at \(Q=0\), we get that \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) is holomorphic at \(Q=0\). Similarly, the holomorphicity of \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) implies that \(E(u_{j})\) is holomorphic. Therefore, \(E(u_{j})\) and \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) are holomorphic at \(Q=0\) for all \(j\). In particular, the numbers \(\alpha\) and \(\beta\) must be integral. Note that since \(E(u_{j})\) and \(Q\frac{\partial u_{j}}{\partial t_{N+1}}\) cannot vanish simultaneously at \(Q=0\), we get that for every \(1\leq j\leq N+n-1\) either
\[E(u_{j}) =u_{i}^{X}(t,q)+O(Q)\] \[Q\frac{\partial u_{j}}{\partial t_{N+1}} =O(Q)\]
for some \(i\) or
\[E(u_{j}) =O(Q)\] \[Q\frac{\partial u_{j}}{\partial t_{N+1}} =v_{k}+O(Q)\]
for some \(k\). In the first case, we will get that
\[u_{j}(t,q,Q)=E(u_{j})-(n-1)\frac{\partial u_{j}}{\partial t_{N+1}}\in\mathbb{C} \{Q\}\]
while in the second case
\[u_{j}(t,q,Q)=E(u_{j})-(n-1)\frac{\partial u_{j}}{\partial t_{N+1}}=-(n-1)v_{k }Q^{-1}+O(1).\qed\]
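The appearance of the numbers \(v_{k}\) in Proposition 7 can be checked numerically: they are exactly the eigenvalues of the matrix \(\epsilon\) from Proposition 6. The following quick sanity check (a sketch only; names are ad hoc and the helper repeats the construction of \(\epsilon\)) confirms this for small \(n\).

```python
import numpy as np

def eps_matrix(n):
    """Same matrix eps as in Proposition 6 (and in the sketch after its proof)."""
    m = n - 1
    eps = np.zeros((m, m))
    for i in range(1, m):
        eps[i, i - 1] = 1.0
    eps[0, m - 1] = (-1) ** n
    return eps

for n in range(3, 8):
    vals = np.linalg.eigvals(eps_matrix(n))
    # every eigenvalue v of eps satisfies v^{n-1} = (-1)^n, i.e. it is one of the v_k
    assert np.allclose(vals ** (n - 1), (-1) ** n)
print("eigenvalues of eps solve v^(n-1) = (-1)^n for 3 <= n <= 7")
```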
### Twisted periods of \(\mathbb{P}^{n-1}\)
Let us recall the reduced cohomology \(\widetilde{H}(E)\) of the exceptional divisor. It has a basis given by \(e^{i}\)\((1\leq i\leq n-1)\). The Poincare pairing on \(H(\operatorname{Bl}(X))\) induces a non-degenerate pairing on \(\widetilde{H}(E)\):
\[(e^{i},e^{j})=(-1)^{n-1}\delta_{i+j,n},\quad 1\leq i,j\leq n-1.\]
The _twisted periods_ will be multi-valued analytic functions with values in \(\widetilde{H}(E)\). Let us define the following linear operators on \(\widetilde{H}(E)\):
\[{}^{tw}\theta(e^{i}) :=\left(\frac{n}{2}-i\right)e^{i},\] \[{}^{tw}\rho(e^{i}) :=\begin{cases}-(n-1)e^{i+1}&\text{if }1\leq i<n-1\\ 0&\text{if }i=n-1.\end{cases}\]
Let us define first the calibrated twisted periods:
\[{}^{tw}\widetilde{T}^{(-m)}_{\beta}(\lambda)=e^{{}^{tw}\rho\partial_{\lambda }\,\partial_{m}}\left(\frac{\lambda^{{}^{tw}\theta+m-1/2}}{\Gamma({}^{tw} \theta+m+1/2)}\right)\beta,\quad\beta\in\widetilde{H}(E),\]
and the twisted calibration
\[{}^{tw}S(Q,z)=\sum_{k=0}^{\infty}{}^{tw}S_{k}(Q)z^{-k}\quad\in\operatorname{ End}(\widetilde{H}(E))[\![z^{-1}]\!],\]
where \({}^{tw}S_{0}(Q)=1\) and
\[\left({}^{tw}S_{k}(Q)e^{i},e^{j}\right)=\sum_{d=0}^{\infty}(e^{i}\psi^{k-1},e ^{j})_{0,2,d\ell}Q^{-d(n-1)},\quad 1\leq i,j\leq n-1.\]
Note that in the above sum only one value of \(d\) contributes, because the degree of the cohomology class in the correlator, that is, \(k-1+i+j\) must be equal to the dimension of the virtual fundamental cycle of \(\overline{\mathcal{M}}_{0,2}(\operatorname{Bl}(X),d\ell)\) which is \((n-1)(d+1)\Rightarrow d(n-1)=k+i+j-n\). The twisted periods are defined by
\[{}^{tw}I^{(-m)}_{\beta}(Q,\lambda):=\sum_{l=0}^{\infty}{}^{tw}S_{l}(Q)(- \partial_{\lambda})^{l}\,{}^{tw}\widetilde{T}^{(-m)}_{\beta}(\lambda),\quad \beta\in\widetilde{H}(E).\]
The twisted periods satisfy a system of ODEs with respect to \(Q\) and \(\lambda\). Let us derive these differential equations.
**Lemma 5**.: _We have_
\[(\lambda-{}^{tw}\rho)\partial_{\lambda}{}^{tw}\widetilde{T}^{(-m)}_{\beta}( \lambda)=\left({}^{tw}\theta+m-\frac{1}{2}\right)^{tw}\widetilde{T}^{(-m)}_{ \beta}(\lambda).\]
The proof is straightforward and it is left as an exercise.
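For the reader who prefers to see the computation, the following sympy sketch verifies the identity of Lemma 5 in the smallest non-trivial case \(n=3\). The vector \(\beta=(b_{1},b_{2})\) and all variable names are ad hoc; this is an illustration, not a proof.

```python
import sympy as sp

lam, m, b1, b2 = sp.symbols('lambda m b1 b2')

tw_theta = sp.diag(sp.Rational(1, 2), -sp.Rational(1, 2))  # (n/2 - i) for i = 1, 2
tw_rho = sp.Matrix([[0, 0], [-2, 0]])                       # tw_rho(e) = -(n-1) e^2

# lambda^{tw_theta + m - 1/2} / Gamma(tw_theta + m + 1/2) applied to beta,
# written componentwise; Gamma(m+1) is written as m*Gamma(m).
F = sp.Matrix([b1 * lam**m / (m * sp.gamma(m)),
               b2 * lam**(m - 1) / sp.gamma(m)])

# exp(tw_rho d_lambda d_m) truncates after the linear term because tw_rho^2 = 0
T = F + tw_rho * sp.diff(sp.diff(F, lam), m)

lhs = (lam * sp.eye(2) - tw_rho) * sp.diff(T, lam)
rhs = (tw_theta + (m - sp.Rational(1, 2)) * sp.eye(2)) * T
assert all(sp.simplify(sp.expand(entry)) == 0 for entry in (lhs - rhs))
print("Lemma 5 verified for n = 3")
```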
**Lemma 6**.: _We have_
\[Q\partial_{Q}{}^{tw}S_{l}+{}^{tw}\theta{}^{tw}S_{l}-{}^{tw}S_{l}{}^{tw}\theta=- l^{tw}S_{l}.\]
Proof.: Let us apply the operator on the LHS to \(e^{i}\) and compute the pairing with \(e^{j}\) for an arbitrary \(1\leq i,j\leq n-1\). We get
\[Q\partial_{Q}({}^{tw}S_{l}e^{i},e^{j})+({}^{tw}\theta{}^{tw}S_{l}e^{i},e^{j})- ({}^{tw}S_{l}{}^{tw}\theta e^{i},e^{j}).\]
Since \({}^{tw}\theta e^{i}=\left(\frac{n}{2}-i\right)e^{i}\) and \({}^{tw}\theta\) is skew-symmetric with respect to the pairing \((\,\ )\), the above expression becomes
\[Q\partial_{Q}({}^{tw}S_{l}e^{i},e^{j})-(n-i-j)({}^{tw}S_{l}e^{i},e^{j}).\]
We saw above that the expression \(({}^{tw}S_{l}e^{i},e^{j})\) is proportional to \(Q^{-d(n-1)}\) where \(d(n-1)=l+i+j-n\). Therefore, the above expression becomes \(-l({}^{tw}S_{l}e^{i},e^{j})\).
In order to state the next result we need to introduce the linear operator
\[e\bullet_{tw}:\widetilde{H}(E)\to\widetilde{H}(E),\quad e^{i}\mapsto e\bullet_{tw}e^{i},\]
where the quantum product is defined by
\[(e\bullet_{tw}e^{i},e^{j})=\sum_{d=0}^{\infty}\langle e,e^{i},e^{j}\rangle_{0,3,d\ell}\,Q^{-d(n-1)}.\]
For dimensional reasons, that is, \(1+i+j=n+d(n-1)\), we get that the contributions to the quantum product could be non-trivial only in degree \(d=0\) and \(d=1\). Recalling our computations from Section 3.4 we get the following formulas:
\[e\bullet_{tw}e^{i}=\begin{cases}e^{i+1}&\text{ if }1\leq i\leq n-2,\\ (-1)^{n}Q^{-(n-1)}e&\text{ if }i=n-1.\end{cases}\]
In other words, the matrix of \(e\bullet_{tw}\) with respect to the basis \(e,e^{2},\dots,e^{n-1}\) is
\[e\bullet_{tw}=\begin{bmatrix}0&0&\cdots&0&(-1)^{n}Q^{-(n-1)}\\ 1&0&\cdots&0&0\\ \vdots&\vdots&\vdots&&\vdots\\ 0&0&\cdots&1&0\end{bmatrix}.\]
**Lemma 7**.: _We have_
\[Q\partial_{Q}{}^{tw}S_{l}=(n-1)e\bullet_{tw}{}^{tw}S_{l-1}+{}^{tw}S_{l-1}{}^{ tw}\rho,\quad\forall l\geq 1.\]
Proof.: The lemma is an easy consequence of the divisor equation and the topological recursion relations for the GW invariants of the blow up \(\operatorname{Bl}(X)\). We have, by the divisor equation,
\[\langle e,e^{i}\psi^{l},e^{j}\rangle_{0,3,d\ell}=-d\,\langle e^{i}\psi^{l},e^{j}\rangle_{0,2,d\ell}+\langle(e\cup e^{i})\psi^{l-1},e^{j}\rangle_{0,2,d\ell}.\]
The LHS, according to the topological recursion relations, is equal to
\[\sum_{d^{\prime}+d^{\prime\prime}=d}\sum_{k=1}^{n-1}\langle e^{i}\psi^{l-1},e_{k}\rangle_{0,2,d^{\prime}\ell}\,\langle e^{k},e,e^{j}\rangle_{0,3,d^{\prime\prime}\ell}.\]
Multiplying the above identity by \((n-1)Q^{-d(n-1)}\) and summing over all \(d\geq 0\) we get
\[(S_{l}e^{i},(n-1)e\bullet_{tw}e^{j})=Q\partial_{Q}(S_{l+1}e^{i},e^{j})+(S_{l }(n-1)e\cup e^{i},e^{j}).\]
Note that the above expression is \(0\) for \(i=n-1\) because \(e^{n}=(-1)^{n-1}\phi_{N}\) is a cohomology class on \(\operatorname{Bl}(X)\) whose restriction to the exceptional divisor \(E\) is \(0\). Therefore, we may replace \((n-1)e\cup e^{i}\) with \(-\,^{tw}\rho(e^{i})\). The lemma follows.
Using Lemmas 5, 6, and 7, we get that the twisted periods satisfy the following system of differential equations
\[(\lambda+(n-1)e\bullet_{tw})\partial_{\lambda}\,\,^{tw}I_{\alpha}^{(-m)}(Q, \lambda) =\left({}^{tw}\theta+m-\frac{1}{2}\right)\,^{tw}I_{\alpha}^{(-m)}( Q,\lambda), \tag{39}\]
\[Q\partial_{Q}\,\,^{tw}I_{\alpha}^{(-m)}(Q,\lambda) =-(n-1)e\bullet_{tw}\partial_{\lambda}\,\,^{tw}I_{\alpha}^{(-m)} (Q,\lambda), \tag{40}\]
where \(\alpha=Q^{{}^{tw}\rho}\beta\) with \(\beta\in\widetilde{H}(E)\) independent of \(Q\) and \(\lambda\). Note that the determinant
\[\det(\lambda+(n-1)e\bullet_{tw})=\lambda^{n-1}+\Big{(}(n-1)Q^{-1}\Big{)}^{n-1}.\]
We get that the twisted periods are multivalued analytic functions on the complement of the hypersurface in \(\mathbb{C}^{*}\times\mathbb{C}\) defined by the equation \((Q\lambda)^{n-1}+(n-1)^{n-1}=0\).
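The determinant formula above can be confirmed symbolically for small \(n\). The sketch below does this with sympy; the helper name is ad hoc and simply encodes the matrix of \(e\bullet_{tw}\) written earlier in this section.

```python
import sympy as sp

lam, Q = sp.symbols('lambda Q')

def e_bullet_tw(n):
    """Matrix of quantum multiplication by e on the basis e, e^2, ..., e^{n-1}."""
    m = n - 1
    E = sp.zeros(m, m)
    for i in range(1, m):
        E[i, i - 1] = 1
    E[0, m - 1] = (-1) ** n * Q ** (-(n - 1))
    return E

for n in range(3, 7):
    M = lam * sp.eye(n - 1) + (n - 1) * e_bullet_tw(n)
    expected = lam ** (n - 1) + ((n - 1) / Q) ** (n - 1)
    assert sp.simplify(M.det() - expected) == 0
print("determinant formula checked for 3 <= n <= 6")
```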
### Periods of \(\mathbb{P}^{n-2}\)
We would like to compute the monodromy of the system of differential equations (39)-(40). We will do this by identifying the twisted periods with the periods of \(\mathbb{P}^{n-2}\). To begin with, let us recall the definition of the periods of \(\mathbb{P}^{n-2}\). We have \(H^{*}(\mathbb{P}^{n-2})=\mathbb{C}[p]/p^{n-1}\), where \(p=c_{1}(\mathcal{O}(1))\) is the hyperplane class. We have an isomorphism of vector spaces
\[\widetilde{H}(E)\cong H(\mathbb{P}^{n-2}),\quad e^{i}\mapsto p^{i-1}.\]
Note that under this isomorphism \({}^{tw}\theta\) coincides with the grading operator \(\theta_{\mathbb{P}^{n-2}}\) and \({}^{tw}\rho\) coincides with \(-c_{1}(T\mathbb{P}^{n-2})\cup\). Therefore, the calibrated periods in the twisted GW theory of \(\mathbb{P}^{n-1}\) and the GW theory of \(\mathbb{P}^{n-2}\) are related by
\[e^{{}^{tw}\theta\,\pi{\bf i}\,\,tw}\widetilde{I}_{\beta}^{(-m)}(\lambda)= \widetilde{I}_{\sigma(\beta)}^{(-m)}(\lambda),\]
where \(\sigma(\beta)\coloneqq e^{\pi{\bf i}\,\theta}\beta\) and \(\theta\) is the grading operator of \(\mathbb{P}^{n-2}\).
Let us compare the \(S\)-matrices. In the GW theory of \(\mathbb{P}^{n-2}\) we have
\[S(q,z)^{-1}1=1+\sum_{d=1}^{\infty}\frac{q^{d}}{\prod_{m=1}^{d}(p-mz)^{n-1}},\]
where \(q\) is the Novikov variable corresponding to \(\mathcal{O}(1)\). Using the divisor equation \((-zq\partial_{q}+p\cup)S(q,z)^{-1}=S(q,z)^{-1}p\bullet\), where \(p\bullet\) is the operator of quantum multiplication by \(p\), we get
\[S(q,z)^{-1}p^{i}=p^{i}+\sum_{d=1}^{\infty}\frac{q^{d}(p-dz)^{i}}{\prod_{m=1}^{ d}(p-mz)^{n-1}},\quad 0\leq i\leq n-2. \tag{41}\]
On the other hand, the twisted \(S\)-matrix \({}^{tw}S(Q,z)\) can be computed from the \(S\)-matrix of the blow up \(\operatorname{Bl}(\mathbb{P}^{n})\) of \(\mathbb{P}^{n}\) at one point, which is known explicitly. Namely, let us recall that \(\operatorname{Bl}(\mathbb{P}^{n})\) is the submanifold of \(\mathbb{P}^{n-1}\times\mathbb{P}^{n}\) defined by the quadratic equations \(x_{i}y_{j}=x_{j}y_{i}\)\((0\leq i,j\leq n-1)\), where \(x=[x_{0},\dots,x_{n-1}]\) and \(y=[y_{0},\dots,y_{n}]\) are the homogeneous coordinate systems on respectively \(\mathbb{P}^{n-1}\) and \(\mathbb{P}^{n}\). We have two projection maps \(\pi_{1}:\operatorname{Bl}(\mathbb{P}^{n})\to\mathbb{P}^{n-1}\) and \(\pi_{2}:\operatorname{Bl}(\mathbb{P}^{n})\to\mathbb{P}^{n}\). Note that \(\pi_{2}\) is the projection of the blow up: the exceptional divisor \(E\) is the fiber over \([0,0,\dots,0,1]\in\mathbb{P}^{n}\). Let \(L_{1}\) and \(L_{2}\) be the pullbacks of the hyperplane bundles \(\mathcal{O}(1)\) on respectively \(\mathbb{P}^{n-1}\) and \(\mathbb{P}^{n}\). Let us denote by \({}^{bl}S(q_{1},q_{2},z)\) the S-matrix in the GW theory of \(\operatorname{Bl}(\mathbb{P}^{n})\), where \(q_{1}\) and \(q_{2}\) are the Novikov variables corresponding to the line bundles \(L_{1}\) and \(L_{2}\). Then we have
\[{}^{bl}S(q_{1},q_{2},z)^{-1}1=\sum_{d_{1},d_{2}\geq 0}\frac{q_{1}^{d_{1}}q_{2}^{ d_{2}}\,\prod_{m=-\infty}^{0}(p_{2}-p_{1}-mz)}{\prod_{m=1}^{d_{1}}(p_{1}-mz)^{n} \prod_{m=1}^{d_{2}}(p_{2}-mz)\prod_{m=-\infty}^{d_{2}-d_{1}}(p_{2}-p_{1}-mz)},\]
The degree class in \(\operatorname{Bl}(\mathbb{P}^{n})\) corresponding to a pair \((d_{1},d_{2})\) is \(d_{1}e_{1}+d_{2}e_{2}\), where \(e_{1}\) is the class of a line in \(E\) and \(e_{2}=\pi_{2}^{-1}(\text{line in $\mathbb{P}^{n}$ avoiding $[0,0,\dots,0,1]$})\). It can be checked that the cohomology ring of the blow up is
\[H(\operatorname{Bl}(\mathbb{P}^{n}))=\mathbb{C}[p_{1},p_{2}]/\langle p_{2}(p_ {2}-p_{1})=0,p_{1}^{n}=0\rangle\]
and that \(\mathcal{O}(E)=L_{2}L_{1}^{-1}\), that is, the Poincare dual of the exceptional divisor \(E\) is \(e=p_{2}-p_{1}\). In order to compute the twisted S-matrix \({}^{tw}S\) we have to restrict \({}^{bl}S\) to \(q_{2}=0\) and substitute \(q_{1}=Q^{-(n-1)}\). We get
\[{}^{bl}S(Q^{-(n-1)},0,z)^{-1}1=1+\sum_{d=1}^{\infty}\frac{Q^{-d(n-1)}\prod_{m=- d+1}^{0}(p_{2}-p_{1}-mz)}{\prod_{m=1}^{d}(p_{1}-mz)^{n}}. \tag{42}\]
Note that the numerator is proportional to \(p_{2}-p_{1}\). Using the relation \(p_{2}(p_{2}-p_{1})=0\) we get that \(p_{1}-mz\) can be replaced by \(p_{1}-p_{2}-mz=-e-mz\). The above formula takes the form
\[{}^{bl}S(Q^{-(n-1)},0,z)^{-1}1=1+\sum_{d=1}^{\infty}\frac{(-1)^{dn}Q^{-d(n-1) }e}{(e+dz)^{n}\prod_{m=1}^{d-1}(e+mz)^{n-1}}. \tag{43}\]
Using the above formula and the divisor equation
\[\Big{(}-\frac{1}{n-1}zQ\partial_{Q}+e\cup\Big{)}{}^{bl}S(Q^{-(n-1)},0,z)^{-1} ={}^{bl}S(Q^{-(n-1)},0,z)^{-1}e\bullet,\]
whose proof is the same as the proof of Lemma 7, we get
\[{}^{tw}S(Q,z)^{-1}e^{i}=e^{i}+\sum_{d=1}^{\infty}\frac{(-1)^{dn}Q^{-d(n-1)}e }{(e+dz)^{n-i}\prod_{m=1}^{d-1}(e+mz)^{n-1}},\quad 1\leq i\leq n-1, \tag{44}\]
where the RHS should be expanded into a power series in \(z^{-1}\) and \(e\) should be identified with the linear operator
\[e\cup_{tw}:\widetilde{H}(E)\to\widetilde{H}(E),\quad e\cup_{tw}e^{i}:=\begin{cases} e^{i+1}&\text{ if }1\leq i\leq n-2,\\ 0&\text{ if }i=n-1.\end{cases}\]
Comparing formulas (41) and (44) we get that if we put \(q=(-1)^{n}Q^{-(n-1)}\), then the matrices of \(S(q,z)\) and \({}^{tw}S(Q,-z)\) with respect to respectively the bases \(1,p,\dots,p^{n-2}\) and \(e,e^{2},\dots,e^{n-1}\) coincide. Now we are in a position to prove the following key formula.
**Proposition 8**.: _Under the isomorphism \(\widetilde{H}(E)\cong H(\mathbb{P}^{n-2})\) the following identity holds:_
\[{}^{tw}I_{\beta}^{(-m)}(Q,\lambda)=e^{-\pi\mathbf{i}\theta}\ I_{\sigma( \beta)}^{(-m)}(-Q^{-(n-1)},\lambda), \tag{45}\]
_where \(\sigma=e^{\pi\mathbf{i}\theta}\) and \(\theta\) is the grading operator of \(\mathbb{P}^{n-2}\)._
Proof.: By definition
\[{}^{tw}I_{\beta}^{(-m)}(Q,\lambda)=\sum_{l\in\mathbb{Z}}\sum_{i=1}^{n-1} \operatorname{Res}dzz^{l-1}\,(-\partial_{\lambda})^{l}\Big{(}{}^{tw} \widetilde{I}_{\beta}^{(-m)}(\lambda),{}^{tw}S(Q,-z)^{-1}e^{i}\Big{)}\ e_{i},\]
where the residue is defined formally as the coefficient in front of \(dz/z\). Under the isomorphism \(\widetilde{H}(E)\cong H(\mathbb{P}^{n-2})\) the period
\[{}^{tw}\widetilde{I}_{\beta}^{(-m)}(\lambda)=e^{-\pi\mathbf{i}\theta} \widetilde{I}_{\sigma(\beta)}^{(-m)}(\lambda),\]
\[{}^{tw}S(Q,-z)^{-1}e^{i}=S((-1)^{n}Q^{-(n-1)},z)^{-1}p^{i-1},\]
and \(e_{i}=(-1)^{n-1}p^{n-1-i}\). Note that the Poincare pairing on \(H(\mathbb{P}^{n-2})\) differs from the pairing on \(\widetilde{H}(E)\) by the sign \((-1)^{n-1}\). The above formula for the period takes the form
\[{}^{tw}I_{\beta}^{(-m)}(Q,\lambda)=\sum_{l\in\mathbb{Z}}\sum_{i=1}^{n-1} \operatorname{Res}dzz^{l-1}\left(-\partial_{\lambda}\right)^{l}\left(e^{-\pi \mathfrak{i}\theta}\widetilde{I}_{\sigma(\beta)}^{(-m)}(\lambda),S((-1)^{n}Q ^{-(n-1)},z)^{-1}p^{i-1}\right)p^{n-1-i}.\]
Since \(e^{\pi\mathfrak{i}\theta}p=-pe^{\pi\mathfrak{i}\theta}\), using formula (41), we get
\[e^{\pi\mathfrak{i}\theta}S(q,z)^{-1}p^{i-1}=e^{\pi\mathfrak{i}(\frac{n}{2}-i)} S((-1)^{n-1}q,-z)^{-1}p^{i-1}.\]
The formula for the period takes the form
\[{}^{tw}I_{\beta}^{(-m)}(Q,\lambda)=\sum_{l\in\mathbb{Z}}\sum_{i=1}^{n-1} \operatorname{Res}dzz^{l-1}\left(-\partial_{\lambda}\right)^{l}\left( \widetilde{I}_{\sigma(\beta)}^{(-m)}(\lambda),S(-Q^{-(n-1)},-z)^{-1}p^{i-1} \right)\sigma^{-1}(p^{n-1-i}),\]
where we used that \(\sigma^{-1}(p^{n-1-i})=e^{\pi\mathfrak{i}(\frac{n}{2}-i)}p^{n-1-i}.\) Clearly, the RHS of the above identity coincides with \(\sigma^{-1}\big{(}I_{\sigma(\beta)}^{(-m)}(-Q^{-(n-1)},\lambda)\big{)}\). The proposition follows.
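The key step used above, namely that the matrices of \(S(q,z)\) and \({}^{tw}S(Q,-z)\) coincide after the substitution \(q=(-1)^{n}Q^{-(n-1)}\), can be tested in low degrees by a direct computation with nilpotent matrices. The following sympy sketch checks the degree \(d=1,2\) coefficients of formulas (41) and (44) for \(n=3,4\); all helper functions are ad hoc and only encode multiplication by \(p\) (respectively \(e\)) as the subdiagonal nilpotent matrix in the bases \(1,p,\dots,p^{n-2}\) and \(e,\dots,e^{n-1}\).

```python
import sympy as sp

z = sp.symbols('z')

def nilpotent(m):
    """m x m matrix of multiplication by the degree-one generator (1's on the subdiagonal)."""
    N = sp.zeros(m, m)
    for i in range(1, m):
        N[i, i - 1] = 1
    return N

def coeff_41(n, i, d):
    """Degree-d coefficient of S(q,z)^{-1} p^{i-1} from (41), in the basis 1, p, ..., p^{n-2}."""
    m = n - 1
    P, I = nilpotent(m), sp.eye(m)
    op = (P - d * z * I) ** (i - 1)
    for k in range(1, d + 1):
        op = op * ((P - k * z * I) ** (n - 1)).inv()
    return op[:, 0]                      # the operator applied to the class 1

def coeff_44(n, i, d):
    """Degree-d coefficient of tw_S(Q,-z)^{-1} e^{i} from (44), in the basis e, ..., e^{n-1}."""
    m = n - 1
    E, I = nilpotent(m), sp.eye(m)
    op = ((E - d * z * I) ** (n - i)).inv()
    for k in range(1, d):
        op = op * ((E - k * z * I) ** (n - 1)).inv()
    return op[:, 0]                      # the operator applied to the class e

# With q = (-1)^n Q^{-(n-1)} one has q^d = (-1)^{dn} Q^{-d(n-1)}, so the two
# coefficient vectors must agree entrywise under the identification e^{j+1} <-> p^{j}.
for n in (3, 4):
    for d in (1, 2):
        for i in range(1, n):
            diff = coeff_41(n, i, d) - coeff_44(n, i, d)
            assert all(sp.cancel(entry) == 0 for entry in diff)
print("degree 1 and 2 coefficients of (41) and (44) match for n = 3, 4")
```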
### Monodromy of the twisted periods of \(\mathbb{P}^{n-1}\)
Let us describe the monodromy group of the system of differential equations (39)-(40), that is, the monodromy of the twisted periods of \(\mathbb{P}^{n-1}\). According to Proposition 8, it is sufficient to recall the monodromy group for the periods of \(\mathbb{P}^{n-2}\). Let us first fix \(q=1\) and \(\lambda^{\circ}\in\mathbb{R}_{>0}\) sufficiently large - any \(\lambda^{\circ}>n-1\) works. The value of the period \(I^{(-m)}(q,\lambda)\) depends on the choice of a path from \((1,\lambda^{\circ})\) to \((q,\lambda)\) avoiding the discriminant
\[\{(q,\lambda)\ |\ \det(\lambda-(n-1)p\bullet)=0\}\]
For fixed \(q\) the equation of the discriminant has \(n-1\) solutions
\[u_{k}(q):=(n-1)\eta^{-2k}q^{1/(n-1)},\quad 0\leq k\leq n-2,\]
where \(\eta=e^{\pi\mathfrak{i}/(n-1)}\). Let us focus first on the monodromy of the twisted periods for \(q=1\). The fundamental group
\[\pi_{1}(\mathbb{C}\setminus\{u_{0}(1),\ldots,u_{n-2}(1)\},\lambda^{\circ})\]
is a free group generated by the simple loops \(\gamma_{k}^{\circ}\) corresponding to the paths \(C_{k}^{\circ}\) from \(\lambda^{\circ}\) to \(u_{k}(1)\) defined as follows. \(C_{k}^{\circ}\) consists of two pieces. First, an arc on the circle with center \(0\) and radius \(\lambda^{\circ}\) starting at \(\lambda^{\circ}\) and rotating clockwise through the angle \(2\pi k/(n-1)\). The second piece is the straight line segment from \(\lambda^{\circ}\eta^{-2k}\) to \(u_{k}(1)=(n-1)\eta^{-2k}\). It turns out that the reflection vector corresponding to the simple loop \(\gamma_{k}^{\circ}\) is precisely \(\Psi(\mathcal{O}(k))\), where \(\Psi\) is Iritani's map for the integral structure of the quantum cohomology of \(\mathbb{P}^{n-2}\) (see formula (1)), that is,
\[\Psi(\mathcal{O}(k))=(2\pi)^{\frac{3-n}{2}}\Gamma(1+p)^{n-1}e^{2\pi\mathfrak{ i}kp}.\]
If \(q\in\mathbb{C}^{*}:=\mathbb{C}\setminus\{0\}\) is arbitrary, then we construct the path \(q\mapsto(q,\lambda^{\circ}q^{1/(n-1)})\) by letting \(q\) vary continuously along some reference path. This path allows us to determine the value of \(I_{\alpha}^{(-m)}(q,\lambda)\) at \(\lambda=\lambda^{\circ}q^{1/(n-1)}\), which we declare to be the base point of \(\mathbb{C}\setminus\{u_{0}(q),\ldots,u_{n-2}(q)\}\). Let \(\gamma_{k}(q)\) be the simple loop obtained from \(\gamma_{k}^{\circ}\) by rescaling \(\lambda\in\gamma_{k}^{\circ}\mapsto\lambda q^{1/(n-1)}\). The reflection vectors corresponding to \(\gamma_{k}(q)\) are precisely
\[\Psi_{q}(\mathcal{O}(k))=(2\pi)^{\frac{3-n}{2}}\Gamma(1+p)^{n-1}q^{-p}e^{2\pi \mathfrak{i}kp}.\]
The proof of the above facts in the case of \(\mathbb{P}^{2}\) (that is \(n=4\)) can be found in [17]. In general, the argument is straightforward to generalize. Now let us apply the above construction and Proposition 8 in order to describe the monodromy of the twisted periods of \(\mathbb{P}^{n-1}\). We have \(q=-Q^{-(n-1)}\). Let us assume that \(Q\in\mathbb{R}_{>0}\) is a real number. We pick a reference path from \(1\) to \(q\) consisting of the
interval \([1,Q^{-(n-1)}]\) and the arc in the upper half-plane from \(Q^{-(n-1)}\) to \(q=-Q^{-(n-1)}.\) Note that with such a choice of the reference path \(q^{1/(n-1)}=\eta Q^{-1}\). Therefore, \(\gamma_{k}(q)\) becomes a simple loop around \(u_{k}(q)=(n-1)\eta^{-2k+1}Q^{-1}\) which are precisely the singularities of the differential equation (39). We get the following corollary.
**Corollary 1**.: _If \(\beta\in\widetilde{H}(E)\) is such that the analytic continuation of \({}^{tw}I_{\beta}^{(-m)}(Q,\lambda)\) along \(\gamma_{k}(q)\) is \({}^{tw}I_{-\beta}^{(-m)}(Q,\lambda)\), then \(\beta\) must be proportional to_
\[\Psi(\mathcal{O}_{E}(-k+1))=(2\pi)^{\frac{1-n}{2}}\Gamma(\operatorname{Bl}(X) )Q^{-e(n-1)}(2\pi\mathbf{i})^{\deg}\operatorname{ch}(\mathcal{O}_{E}(-k+1)),\]
_where \(\mathcal{O}_{E}(-k+1):=\mathcal{O}(-(k-1)E)-\mathcal{O}(-kE)\)._
Proof.: According to the above discussion and Proposition 8, under the isomorphism \(\widetilde{H}(E)\cong H(\mathbb{P}^{n-2})\), \(\sigma(\beta)\) must be proportional to \(\Psi_{q}(\mathcal{O}(k))\), that is, \(\beta\) is proportional to
\[e^{-\pi\mathbf{i}\,\theta}\,\,(2\pi)^{\frac{3-n}{2}}\Gamma(1+p)^{n-1}q^{-p}e^{2\pi\mathbf{i}kp}=(2\pi)^{(3-n)/2}\,\mathbf{i}^{2-n}\,\Gamma(1-p)^{n-1}q^{p}e^{-2\pi\mathbf{i}kp}.\]
Note that \(q^{p}=e^{\pi\mathbf{i}p}Q^{-p(n-1)}\). Therefore, under the isomorphism \(\widetilde{H}(E)\cong H(\mathbb{P}^{n-2})\), the above expression becomes
\[(2\pi)^{(3-n)/2}\,\mathbf{i}^{2-n}\,\Gamma(1-e)^{n-1}Q^{-e(n-1)}e^{(-2k+1)\pi \mathbf{i}e}\,e.\]
We have to check that the above expression is proportional to the image of the Iritani map for \(\operatorname{Bl}(X)\) of the exceptional object \(\mathcal{O}((-k+1)E)-\mathcal{O}(-kE).\) We have
\[\Gamma(\operatorname{Bl}(X))=\Gamma(X)\Gamma(1-e)^{n}\Gamma(1+e),\]
\[\Gamma(1-e)\Gamma(1+e)=\frac{2\pi\mathbf{i}\,e}{e^{\pi\mathbf{i}e}-e^{-\pi\mathbf{i}e}}=\frac{2\pi\mathbf{i}\,e}{e^{2\pi\mathbf{i}e}-1}\,e^{\pi\mathbf{i}e},\]
and
\[(2\pi\mathbf{i})^{\deg}\,\operatorname{ch}(\mathcal{O}((-k+1)E)-\mathcal{O}( -kE))=e^{-2\pi\mathbf{i}ke}(e^{2\pi\mathbf{i}e}-1).\]
Since \(\Gamma(X)\cup e=e\), the image of the Iritani map becomes
\[(2\pi)^{(3-n)/2}\,\mathbf{i}\,\Gamma(1-e)^{n-1}\,Q^{-e(n-1)}\,e^{(-2k+1)\pi \mathbf{i}e}e.\]
The claim of the corollary follows.
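The Gamma-function identity used in the proof is just a form of Euler's reflection formula, \(\Gamma(1-x)\Gamma(1+x)=\pi x/\sin(\pi x)\). The following quick numerical sympy check (with an ad hoc variable \(x\) in place of \(e\)) confirms it at a few sample points.

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.gamma(1 - x) * sp.gamma(1 + x)
rhs = 2 * sp.pi * sp.I * x * sp.exp(sp.pi * sp.I * x) / (sp.exp(2 * sp.pi * sp.I * x) - 1)
# both sides equal pi*x/sin(pi*x); compare them numerically at a few sample points
for val in (sp.Rational(1, 3), sp.Rational(1, 5) + sp.I / 7, -sp.Rational(2, 7)):
    assert abs(sp.N((lhs - rhs).subs(x, val))) < 1e-12
print("Gamma(1 - e) Gamma(1 + e) identity confirmed numerically")
```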
### Isomonodromic analytic continuation
Let \(D(u,r)\) be the open disk in \(\mathbb{C}\) with center \(u\) and radius \(r\). Put \(\mathbb{D}_{r}:=D(0,r)\). Let \(\epsilon>0\) be a real number, \(V\subset\mathbb{C}\) an open subset, and \(u_{i}:\mathbb{D}_{\epsilon}\to V\) (\(1\leq i\leq m\)) be \(m\) holomorphic functions, such that, there exists a positive real number \(\delta>0\) satisfying
1. The \(m\) disks \(D(u_{i}(0),\delta)\) (\(1\leq i\leq m\)) are pairwise disjoint and contained in \(V\).
2. We have \(u_{i}(Q)\in D(u_{i}(0),\delta)\) for all \(Q\in\mathbb{D}_{\epsilon}\).
Suppose that \(I\) is a multi-valued analytic function on \(\mathbb{D}_{\epsilon}\times V\setminus\Sigma\) with values in a finite dimensional vector space \(H\), where
\[\Sigma:=\{(Q,\lambda)\in\mathbb{D}_{\epsilon}\times V\ |\ \lambda=u_{i}(Q)\ \ \text{for some}\ i\ \}.\]
Let us fix \(\lambda^{\circ}\in V\), such that, \(\mathbb{D}_{\epsilon}\times\{\lambda^{\circ}\}\) is disjoint from \(\Sigma\). Then \(I\) is analytic at \((Q,\lambda)=(0,\lambda^{\circ})\) and \(I\) extends analytically along any path in \(\mathbb{D}_{\epsilon}\times V\setminus\Sigma\) starting at \((0,\lambda^{\circ})\). In particular, we can extend uniquely \(I(Q,\lambda)\) for all \(Q\in\mathbb{D}_{\epsilon}\) and \(\lambda\) sufficiently close to \(\lambda^{\circ}\). Let us expand \(I(Q,\lambda)=\sum_{d=0}^{\infty}I_{d}(\lambda)Q^{d}\), where each coefficient \(I_{d}\) is an \(H\)-valued analytic function at \(\lambda=\lambda^{\circ}\). Clearly, \(I_{d}(\lambda)\) extends analytically along any path in \(V\setminus\{u_{1}(0),\ldots,u_{m}(0)\}\).
**Lemma 8**.: _Suppose that \(\gamma\) is a closed loop based at \(\lambda^{\circ}\) in_
\[V\setminus D\big{(}u_{1}(0),\delta\big{)}\sqcup\cdots\sqcup D\big{(}u_{m}(0), \delta\big{)},\]
_such that, for every fixed \(Q\neq 0\), the analytic extension of \(I(Q,\lambda)\) along the path \(\{Q\}\times\gamma\) transforms \(I(Q,\lambda)\) into \(A(I(Q,\lambda))\), where \(\lambda\) is sufficiently close to \(\lambda^{\circ}\) and \(A\in\operatorname{GL}(H)\) is a linear operator. If the operator \(A\) is independent of \(Q\), then the analytic continuation along \(\gamma\) transforms the coefficient \(I_{d}(\lambda)\) into \(A(I_{d}(\lambda))\)._
Proof.: Since \(I_{d}(\lambda)=\frac{1}{d!}\frac{\partial^{d}I}{\partial Q^{d}}(0,\lambda)\), by replacing the function \(I(Q,\lambda)\) with its partial derivative \(\frac{1}{d!}\frac{\partial^{d}I}{\partial Q^{d}}(Q,\lambda)\), we can reduce the general case to the case when \(d=0\).
Let us cover the path \(\gamma\) with small _closed_ disks \(D_{j}\) (\(1\leq j\leq N\)), such that,
1. \(D_{j}\) is disjoint from \(D(u_{i}(0),\delta)\) for all \(i\).
2. \(D_{j}\cap D_{j+1}\neq\emptyset\).
3. \(D_{N}=D_{1}\).
In other words, the union of the disks \(D_{j}\) gives a fattening of the path \(\gamma\). Let \(I(Q,\lambda_{j})\), \(\lambda_{j}\in D_{j}\), be the analytic extension of \(I(Q,\lambda)\) along \(\gamma\). Let us fix an arbitrary \(\epsilon^{\prime}>0\). There exists a small \(\rho_{j}>0\), such that, \(I(Q,\lambda_{j})\) is a uniformly continuous function in \((Q,\lambda_{j})\in\mathbb{D}_{\rho_{j}}\times D_{j}\Rightarrow\) there exists \(0<\delta^{\prime}_{j}<\rho_{j}\), such that,
\[|I(Q,\lambda_{j})-I(0,\lambda_{j})|<\epsilon^{\prime}\quad\forall|Q|<\delta^{ \prime}_{j},\quad\lambda_{j}\in D_{j},\]
where \(|\cdot|\) is a fixed norm on \(H\). Letting \(Q\to 0\) and using the above estimate on each disk \(D_{j}\), we conclude that the analytic continuation of \(I(0,\lambda)\) along \(\gamma\) is \(A(I(0,\lambda))\), which proves the lemma.

Suppose now that \(I(Q,\lambda)\) has at most a logarithmic singularity at \(Q=0\) in the sense of Definition 2, that is, \(I(Q,\lambda)=\sum_{s=0}^{n}I_{s}(Q,\lambda)(\log Q)^{s}\), where each coefficient expands as \(I_{s}(Q,\lambda)=\sum_{d=0}^{\infty}I_{s,d}(\lambda)Q^{d}\).

**Proposition 9**.: _Suppose that \(\gamma\) is a closed loop based at \(\lambda^{\circ}\) in \(V\setminus D\big{(}u_{1}(0),\delta\big{)}\sqcup\cdots\sqcup D\big{(}u_{m}(0),\delta\big{)}\),_
_such that, for every fixed \(Q\in\mathbb{D}_{\epsilon}\setminus(-\epsilon,0]\), the analytic extension of \(I(Q,\lambda)\) along the path \(\{Q\}\times\gamma\) transforms \(I(Q,\lambda)\) into \(A(I(Q,\lambda))\), where \(\lambda\) is sufficiently close to \(\lambda^{\circ}\) and \(A\in\operatorname{GL}(H)\) is a linear operator. If the operator \(A\) is independent of \(Q\), then the analytic continuation along \(\gamma\) transforms the coefficient \(I_{s,d}(\lambda)\) into \(A(I_{s,d}(\lambda))\)._
Proof.: Let \(I_{s}(Q,\lambda)\) be as in condition (ii) in Definition 2. We have \(I(Q,\lambda)=\sum_{s=0}^{n}I_{s}(Q,\lambda)(\log Q)^{s}\). The analytic continuation along \(\gamma\) yields \(A(I(Q,\lambda))=\sum_{s=0}^{n}\widetilde{I}_{s}(Q,\lambda)(\log Q)^{s}\), where \(\widetilde{I}_{s}(Q,\lambda)\) is the analytic extension of \(I_{s}(Q,\lambda)\) along \(\gamma\). It is easy to prove by letting \(Q\to 0\) that such an identity is possible only if the coefficients in front of the powers of \(\log Q\) are equal, that is, \(A(I_{s}(Q,\lambda))=\widetilde{I}_{s}(Q,\lambda)\). It remains only to recall Lemma 8.
### Vanishing of the base component
Let \(\alpha=Q^{-(n-1)e}\beta\), where \(\beta\in H^{*}(\operatorname{Bl}(X))\) is a vector independent of \(Q\) and \(t\). Let \(\beta=\beta_{e}+\beta_{b}\). We would like to extract the leading order terms in the power series expansion at \(Q=0\) of
\[Q^{\Delta+m+(n-1)/2}I^{(-m)}\left(t,q,Q,Q^{-1}\lambda\right)\alpha, \tag{46}\]
where \(m>0\) is a sufficiently large integer, that is, we choose \(m\) so big that the operator \(\widetilde{\theta}+m+1/2\) has only positive eigenvalues. Moreover, we would like to determine the structure of the following terms in the expansion up to order \(Q^{n}\). Note that
\[Q^{\Delta}Q^{-\widetilde{\theta}-m+\frac{1}{2}}Q^{-\widetilde{\rho}}\alpha=Q^ {-m-(n-1)/2}\Big{(}\beta_{e}+Q^{\deg}(Q^{-\rho}\beta_{b})\Big{)}.\]
Therefore, we have
\[Q^{\Delta+m+(n-1)/2}I^{(-m)}\left(t,q,Q,Q^{-1}\lambda\right)\alpha=\left(Q^{ \Delta}I^{(-m)}\left(t,q,Q,Q^{-1}\lambda\right)Q^{\widetilde{\rho}}Q^{ \widetilde{\theta}+m-\frac{1}{2}}Q^{-\Delta}\right)\Big{(}\beta_{e}+Q^{\deg} (Q^{-\rho}\beta_{b})\Big{)}.\]
Let us look at the contribution of \(\beta_{e}\) to (46), that is, the expression
\[\left(Q^{\Delta}I^{(-m)}\left(t,q,Q,Q^{-1}\lambda\right)Q^{\widetilde{\rho}}Q ^{\widetilde{\theta}+m-\frac{1}{2}}Q^{-\Delta}\right)\beta_{e}. \tag{47}\]
According to Proposition 3, the leading order term of the \(\widetilde{H}(E)\)-component of (47) is at degree \(0\) and it is precisely \({}^{tw}I_{\beta_{e}}^{(-m)}(1,\lambda)\), that is, the twisted period of \(\mathbb{P}^{n-1}\) at \(Q=1\). The leading order term of the \(H(X)\)-component of (47) is at degree \(n\) and the corresponding coefficient in front of \(Q^{n}\) is given by
\[\sum_{l,d=0}^{\infty}(-\partial_{\lambda})^{l}\langle\psi^{l-1}\widetilde{I }_{\beta_{e}}^{(-m)}(\lambda),1\rangle_{0,2,d\ell}\ (-1)^{n-1}\phi_{N}, \tag{48}\]
where
\[\widetilde{I}_{\beta_{e}}^{(-m)}(\lambda)=e^{-(n-1)e\partial_{\lambda} \partial_{m}}\left(\frac{\lambda^{\widetilde{\theta}+m-1/2}}{\Gamma(\widetilde {\theta}+m+1/2)}\right)\beta_{e}\]
is the calibrated period of \(\operatorname{Bl}(X)\) and for \(l=0\) the correlator should be understood via the string equation as \(\langle\psi^{l}\widetilde{I}_{\beta_{e}}^{(-m)}(\lambda),1,1\rangle_{0,3,d\ell}\).
Let us look at the contribution to (46) corresponding to \(\beta_{b}\), that is, the expression
\[\left(Q^{\Delta}I^{(-m)}\left(t,q,Q,Q^{-1}\lambda\right)Q^{\widetilde{\rho}}Q ^{\widetilde{\theta}+m-\frac{1}{2}}Q^{-\Delta}\right)Q^{\deg}(Q^{-\rho}\beta_{b }). \tag{49}\]
Let us decompose \(\beta_{b}=\sum_{a=1}^{N}\beta_{b,a}\phi_{a}\). The \(H(X)\)-component of (49) is a power series in \(Q\) whose coefficients are polynomials in \(\log Q\) whose coefficients are in \(H(X)\). According to Propositions (5)
and (4) the coefficient in front of \(Q^{M}(\log Q)^{0}\) with \(0\leq M\leq n\) has the form
\[\sum_{a:\deg(\phi_{a})=M}\frac{\lambda^{\theta+m-1/2}}{\Gamma(\theta+m+1/2)}\beta _{b,a}\phi_{a}+\sum_{a^{\prime}:\deg(\phi_{a^{\prime}})<M}\beta_{b,a^{\prime}}f_ {M,a^{\prime}}(\lambda), \tag{50}\]
where \(f_{M,a^{\prime}}(\lambda)\) is the \(H(X)\)-component of the coefficient in front of \(Q^{M-\deg(\phi_{a^{\prime}})}\) in the expansion at \(Q=0\) of
\[\left(Q^{\Delta}I^{(-m)}\left(t,q,Q,Q^{-1}\lambda\right)Q^{\overline{\rho}}Q^{ \overline{\theta}+m-\frac{1}{2}}Q^{-\Delta}\right)\phi_{a^{\prime}}.\]
Let us summarize our analysis.
**Proposition 10**.: _Let \(\alpha=Q^{-(n-1)e}\beta\), where \(\beta\in H(\mathrm{Bl}(X))\) is independent of \(t\) and \(Q\). Then_
_a) The \(H(X)\)-component of (46) expands as a power series in \(Q\) whose coefficients are polynomials in \(\log Q\). The coefficient in front of \((\log Q)^{0}\,Q^{M}\) for \(0\leq M\leq n-1\) is given by (50), while for \(M=n\) it is given by the sum of (50)(with \(M=n\)) and (48)._
_b) If \(\beta_{b,1}=0\), then the \(\widetilde{H}(E)\)-component of (46) expands as a power series in \(Q\). The corresponding leading order term is \(\mbox{}^{tw}I^{(-m)}_{\beta_{e}}(1,\lambda)\)._
Let us discuss now the analytic properties of the series (48). It is convenient to introduce the following series
\[\Phi_{\beta}(Q,\lambda):=\sum_{l,d=0}^{\infty}(-\partial_{\lambda})^{l}\langle \psi^{l-1}\widetilde{I}^{(-m)}(\lambda)Q^{-(n-1)e}\beta,1\rangle_{0,2,d\ell}\, Q^{-d(n-1)},\quad\beta\in\widetilde{H}(E).\]
Note that (48) coincides with \(\Phi_{\beta_{e}}(1,\lambda)(-1)^{n-1}\phi_{N}\). Recalling the definition of the period vector \(I^{(-m)}_{\alpha}(t,q,Q,\lambda)\) we get
\[\Phi_{\beta}(Q,\lambda)=(I^{(-m)}_{Q^{-(n-1)e}\beta}(t,0,Q,\lambda),1),\quad \forall\beta\in\widetilde{H}(E).\]
**Proposition 11**.: _Let \(Q\) be a positive real number and \(\gamma_{k}(q)\) with \(q=-Q^{-(n-1)}\) be the same simple loop as in Corollary 1. If_
\[\beta=\Psi(\mathcal{O}_{E}(-k+1))=(2\pi)^{(1-n)/2}\,\Gamma(1-e)^{n-1}\,Q^{-(n -1)e}\,e^{(-2k+1)\pi\mathbf{i}\,e}\,2\pi\mathbf{i}\,e,\]
_then the analytic continuation of \(\Phi_{\beta}(Q,\lambda)\) along \(\gamma_{k}(q)\) is \(-\Phi_{\beta}(Q,\lambda)\)._
The proof of Proposition 11 is based on a mirror symmetry argument. It will be given in Section 6. Now we are in position to prove the main result of this paper. Let us fix \(t\in\widetilde{H}(X)\) and the Novikov variables \(q=(q_{1},\dots,q_{r})\) of \(X\) to be generic, such that, the quantum cohomology of \(X\) is semi-simple and the conclusions of Proposition 7 hold. Let us pick a real number \(R>0\), such that, \(u_{j}(t,q,0)\in\mathbb{D}_{R}\) for all \(1\leq j\leq N\), where recall that \(\mathbb{D}_{R}\) denotes the circle with center \(0\) and radius \(R\). Let us choose a real number \(\epsilon>0\) so small that the quantum cup product of the blowup \(\mathrm{Bl}(X)\) at \((t,q,Q)\) is convergent for all \(|Q|<\epsilon\), \(u_{j}(t,q,Q)\in\mathbb{D}_{R}\) for all \(|Q|<\epsilon\) and \(1\leq j\leq N\), and \(R<(n-1)\epsilon^{-1}\). We would like to use the results from Section (5.6) in the following settings: the domain \(V:=\{\lambda\ |\ |\lambda|>R\epsilon\}\), \(m:=n-1\), and the \((n-1)\) holomorphic functions (denoted by \(u_{i}\) in Section 5.6) will be given by \(Qu_{N+k}(t,q,Q)\), \(1\leq k\leq n-1\). Here we are using Proposition 7 to conclude that \(Qu_{N+k}(t,q,Q)=-(n-1)v_{k}+O(Q)\) is analytic at \(Q=0\). Let us choose \(\delta>0\), such that, the disks \(D(-(n-1)v_{k},2\delta)\)\((1\leq k\leq n-1)\) are pairwise disjoint. If necessary, we decrease \(\epsilon\) even further so that condition (ii) given in the beginning of Section 5.6 is satisfied. Note that condition (i) is satisfied according to our choice of \(\delta\). Before we continue further let us fix the solutions \(v_{k}\) of \(\lambda^{n-1}=(-1)^{n}\) to be given by \(v_{k}=-\eta^{-2k+1}\), where \(\eta:=e^{\pi\mathbf{i}/(n-1)}\). Then \(-(n-1)v_{k}=(n-1)\eta^{-2k+1}\). Finally, for a reference point \(\lambda^{\circ}\in V\) we pick any positive real \(\lambda^{\circ}>(n-1)>R\epsilon\).
Let us define the loop \(\gamma_{k}\) in \(V\setminus D((n-1)\eta^{-1},\delta)\cup\cdots\cup D((n-1)\eta^{-2n+3},\delta)\) to be the simple loop around \((n-1)\eta^{-2k+1}\) based at \(\lambda^{\circ}\) corresponding to the path from \(\lambda^{\circ}\) to \((n-1)\eta^{-2k+1}\) consisting of the following two pieces: an arc along the circle \(|\lambda|=\lambda^{\circ}\) obtained by rotating from \(\lambda^{\circ}\) clock-wise on angle \((2k-1)\pi/(n-1)\) and the second piece is the straight segment from \(\lambda^{\circ}\eta^{-2k+1}\) to \((n-1)\eta^{-2k+1}\).
Suppose now that \(Q\in\mathbb{D}_{\epsilon}\) is a positive real number. Note that by re-scaling the path \(\gamma_{k}\), we obtain a path \(\gamma_{k}\cdot Q^{-1}\) which is a simple loop around \((n-1)\eta^{-2k+1}Q^{-1}\). The simple loop \(\gamma_{k}\) goes around \((n-1)\eta^{-2k+1}\) along a circle with center \((n-1)\eta^{-2k+1}\) and radius \(r\), where \(\delta<r<2\delta\). We claim that by decreasing \(\epsilon\) if necessary, we can arrange that the circle with center \((n-1)\eta^{-2k+1}Q^{-1}\) and radius \(rQ^{-1}\) contains the canonical coordinate \(u_{N+k}(t,q,Q)\). Indeed, we have
\[|u_{N+k}(t,q,Q)-(n-1)\eta^{-2k+1}Q^{-1}|=|Qu_{N+k}(t,q,Q)-(n-1)\eta^{-2k+1}|Q^ {-1}\]
and since \(|Qu_{N+k}(t,q,Q)-(n-1)\eta^{-2k+1}|\) has order \(O(Q)\), by choosing \(\epsilon\) small enough we can arrange that \(|Qu_{N+k}(t,q,Q)-(n-1)\eta^{-2k+1}|<r\) for all \(|Q|<\epsilon\). In other words, the re-scaled loop \(\gamma_{k}\cdot Q^{-1}\) is a simple loop around the canonical coordinate \(u_{N+k}(t,q,Q)\). Let us denote by \(\alpha\in H\big{(}\mathrm{Bl}(X)\big{)}\) the reflection vector corresponding to the simple loop \(\gamma_{k}\cdot Q^{-1}\). Let us recall Proposition 9 for the series (46), that is,
\[I(Q,\lambda):=Q^{\Delta+m+(n-1)/2}I^{(-m)}(t,q,Q,Q^{-1}\lambda)\alpha. \tag{51}\]
The singularities of \(I(Q,\lambda)\) are precisely at \(Q^{-1}\lambda=u_{j}(t,q,Q)\) for \(1\leq j\leq N+k\), that is, \(\lambda=Qu_{j}(t,q,Q)\). Note that by definition of \(R\), the first \(N\) singularities \(Qu_{j}(t,q,Q)\)\((1\leq j\leq N)\) are in \(\mathbb{D}_{R\epsilon}\). Therefore, \(I(Q,\lambda)\) is a multi-valued analytic function in \((Q,\lambda)\in\mathbb{D}_{\epsilon}\times V\setminus\Sigma\). Although we are not going to give a complete proof, let us outline how to prove that \(I(Q,\lambda)\) has at most logarithmic singularity at \(Q=0\) (see Definition 2). Recall the divisor equation (36) with \(i=r+1\). Note that \(q_{r+1}\partial_{q_{r+1}}=\frac{1}{n-1}Q\partial_{Q}\). Combining the divisor equation and the differential equation of the second structure connection with respect to \(\tau_{r+1}=t_{N+1}\), it is easy to prove that for every \(\lambda\in V\setminus D((n-1)\eta^{-1},\delta)\sqcup\cdots\sqcup D((n-1)\eta^ {-2n+3},\delta)\) the function \(I(Q,\lambda)\) is a solution to a differential equation that has a Fuchsian singularity at \(Q=0\). Now the conclusion follows from the theory of Fuchsian singularities.
The analytic continuation of \(I(Q,\lambda)\) along \(\gamma_{k}\) transforms \(I(Q,\lambda)\) into \(-I(Q,\lambda)\) because when \(\lambda\) changes along \(\gamma_{k}\), \(Q^{-1}\lambda\) changes along \(\gamma_{k}\cdot Q^{-1}\) which is the simple loop used to define the reflection vector \(\alpha\). Let us look at the expansion of \(I(Q,\lambda)\) at \(Q=0\) in the powers of \(Q\) and \(\log Q\). To begin with, we know that \(\alpha=Q^{-(n-1)e}\beta\) where \(\beta\in H(\mathrm{Bl}(X))\) is independent of \(t\) and \(Q\) (it could depend on \(q\)). Let us decompose \(\beta=\beta_{e}+\beta_{b}\), where \(\beta_{e}\in\widehat{H}(E)\) and \(\beta_{b}\in H(X)\). Put \(\beta_{b}=:\sum_{i=1}^{N}\beta_{b,i}\phi_{i}\). We claim that \(\beta_{b}=0\). According to Proposition 10, a), the coefficient in front of \(Q^{0}(\log Q)^{0}\) in the expansion of the \(H(X)\)-component of \(I(Q,\lambda)\) is
\[\frac{\lambda^{\theta+m-1/2}}{\Gamma(\theta+m+1/2)}\beta_{b,1}\phi_{1}=\frac{ \lambda^{m+(n-1)/2}}{\Gamma(m+(n+1)/2)}\beta_{b,1}\phi_{1}.\]
According to Proposition 9, the analytic continuation along \(\gamma_{k}\) of the above expression should change the sign. However, the function \(\lambda^{m+(n-1)/2}\) is invariant under the analytic continuation along \(\gamma_{k}\). Therefore, the only possibility is that \(\beta_{b,1}=0\). Suppose that \(M\) is the smallest number, such that, \(\beta_{b,a}\neq 0\) for some \(\phi_{a}\) of degree \(M\). If \(M\leq n-1\), then since \(\beta_{b,a^{\prime}}=0\) for all \(a^{\prime}\), such that, \(\deg(\phi_{a^{\prime}})<M\), Proposition 10, a) yields that the coefficient in front of \(Q^{M}(\log Q)^{0}\) in the expansion of the \(H(X)\)-component of \(I(Q,\lambda)\) is
\[\sum_{a:\deg(\phi_{a})=M}\frac{\lambda^{\theta+m-1/2}}{\Gamma(\theta+m+1/2)} \beta_{b,a}\phi_{a}.\]
Just like before, the above expression is invariant under the analytic continuation along \(\gamma_{k}\), while Proposition 9 implies that the analytic continuation must change the sign. The conclusion is again that \(\beta_{b,a}=0\) for all \(a\) for which \(\phi_{a}\) has degree \(M\). We get that all \(\beta_{b,a}=0\) except possibly for \(\beta_{b,N}\). Let us postpone the analysis of \(\beta_{b,N}\) and consider \(\beta_{e}\) first. Recalling Proposition 10, b), we get that the coefficient in front of \(Q^{0}\) in the expansion of the \(\widetilde{H}(E)\)-component of \(I(Q,\lambda)\) is the twisted period \({}^{tw}I_{\beta_{e}}^{(-m)}(1,\lambda)\). Therefore, the analytic continuation of \({}^{tw}I_{\beta_{e}}^{(-m)}(1,\lambda)\) along \(\gamma_{k}\) must be \({}^{tw}I_{-\beta_{e}}^{(-m)}(1,\lambda)\). Recalling Corollary 1, we get that \(\beta_{e}\) must be proportional to
\[\Psi(\mathcal{O}_{E}(-k+1))=(2\pi)^{\frac{1-n}{2}}\Gamma(\operatorname{Bl}(X ))(2\pi\mathbf{i})^{\deg}\operatorname{ch}(\mathcal{O}_{E}(-k+1)).\]
Let us prove that \(\beta_{b,N}=0\). According to Proposition 10, the coefficient in front of \(Q^{n}(\log Q)^{0}\) in the expansion of the \(H(X)\)-component of \(I(Q,\lambda)\) is
\[\Big{(}\frac{\lambda^{m-(n+1)/2}}{\Gamma(m+(1-n)/2)}\beta_{b,N}+\Phi_{\beta_{ e}}(1,\lambda)(-1)^{n-1}\Big{)}\phi_{N}.\]
Let us analytically continue the above expression along \(\gamma_{k}\). Just like above, the analytic continuation should change the sign. However, recalling Proposition 11 we get
\[\Big{(}\frac{\lambda^{m-(n+1)/2}}{\Gamma(m+(1-n)/2)}\beta_{b,N}-\Phi_{\beta_{e }}(1,\lambda)(-1)^{n-1}\Big{)}\phi_{N}.\]
Therefore, \(\beta_{b,N}=0\) and this completes the proof of our claim that \(\beta_{b}=0\). Moreover, we proved that \(\beta=\beta_{e}\) is proportional to \(\Psi(\mathcal{O}_{E}(-k+1))\). In order to conclude that the proportionality coefficient is \(\pm 1\), we need only to check that the Euler pairing \((\Psi(\mathcal{O}_{E}(-k+1)),\Psi(\mathcal{O}_{E}(-k+1)))=1\). For simplicity, let us consider only the case when \(k=1\). In fact, the general case follows easily by using analytic continuation with respect to \(q\) around \(q=0\): the clock-wise analytic continuation transforms \(\Psi_{q}(\mathcal{O}_{E}(-k+1))\) to \(\Psi_{q}(\mathcal{O}_{E}(-k+1)\otimes\mathcal{O}(-E))=\Psi_{q}(\mathcal{O}_{ E}(-k))\). We have
\[\Psi_{q}(\mathcal{O}_{E})=(2\pi)^{(1-n)/2}\,\Gamma(1-e)^{n-1}\,q^{e}\,(2\pi \mathbf{i}e),\]
where \(q=-Q^{-(n-1)}\) and the branch of \(\log q\) is fixed in such a way that \(q^{e}=e^{\pi\mathbf{i}e}Q^{-(n-1)e}\). Recalling formula (5), after a straightforward computation, we get \((\Psi_{q}(\mathcal{O}_{E}),\Psi_{q}(\mathcal{O}_{E}))=1\).
## 6. Mirror model for the twisted periods
The goal in this section is to prove Proposition 11. The idea is to prove that the Laplace transform of \(\Phi_{\beta}(Q,\lambda)\) with respect to \(\lambda\) can be identified with an appropriate oscillatory integral whose integration cycle is swept out by a family of vanishing cycles. Once this is done, the statement of the proposition follows easily by an elementary local computation. It is more convenient to construct an oscillatory integral when \(q:=-Q^{-(n-1)}\) is a positive real number. Therefore, let us reformulate the statement of Proposition 11 by analytically continuing \(\Phi_{\beta}(Q,\lambda)\) with respect to \(Q\) along an arc in the counter-clockwise direction connecting the rays \(\mathbb{R}_{>0}\) and \(\eta\mathbb{R}_{>0}\), where \(\eta:=e^{\pi\mathbf{i}/(n-1)}\). Note that the value of \(\log Q\) will change to \(\log|Q|+\frac{\pi\mathbf{i}}{n-1}\). In other words, we will assume that \(Q=\eta q^{-1/(n-1)}\) where \(q\in\mathbb{R}_{>0}\) is a positive real number. Note that \(Q^{-(n-1)e}=e^{-\pi\mathbf{i}e}q^{e}\) and that the formula for \(\Phi_{\beta}(Q,\lambda)\) takes the form
\[\Phi_{\beta}(q,\lambda)=\sum_{l,d=0}^{\infty}(-\partial_{\lambda})^{l}\langle\psi^{l-1}\widetilde{I}^{(-m)}(\lambda)q^{e}e^{-\pi\mathbf{i}e}\beta,1\rangle_{0,2,d\ell}\,(-q)^{d}. \tag{52}\]
Furthermore, it is sufficient to prove Proposition 11 only in the case when \(k=0\), because the general case will follow from that one by taking an appropriate analytic continuation with respect to
\(q\) around \(q=0\). Let us assume \(k=0\), so that
\[e^{-\pi\mathbf{i}e}\beta=(2\pi)^{(1-n)/2}\,\Gamma(1-e)^{n-1}\,(2\pi\mathbf{i}\,e).\]
Note that \(\gamma_{0}(q)\) is a simple loop approaching \(u_{0}(q)=(n-1)q^{1/(n-1)}\in\mathbb{R}_{>0}\) along the positive real axis. From now on we assume the above settings and denote \(\Phi_{\beta}(Q,\lambda)\) and \(u_{0}(q)\) simply by respectively \(\Phi(q,\lambda)\) and \(u(q)\). We have to prove that the analytic continuation of \(\Phi(q,\lambda)\) along \(\gamma_{0}(q)\) is \(-\Phi(q,\lambda)\). Finally, let us point out that in the previous sections we denoted by \(q\) the sequence of Novikov variables \((q_{1},\ldots,q_{r})\) of \(X\), while in this section we denote by \(q\) just a positive real number. We will never have to deal with \(X\), so there will be no confusion in doing so.
### Contour integral
The Gromov-Witten invariants involved in the definition of the series \(\Phi(q,\lambda)\) can be extracted from formula (42). Indeed, we have
\[\Phi(q,\lambda)=\sum_{l=0}^{\infty}(-\partial_{\lambda})^{l}(^{bl}S_{l}(-q,0) \widetilde{I}^{(-m)}(\lambda)q^{e}e^{-\pi\mathbf{i}e}\beta,1)=\sum_{l=0}^{ \infty}(-\partial_{\lambda})^{l}(\widetilde{I}^{(-m)}(\lambda)q^{e}e^{-\pi \mathbf{i}e}\beta,^{bl}S_{l}(-q,0)^{T}1).\]
Recall that \({}^{bl}S(-q,0,-\partial_{\lambda})^{T}={}^{bl}S(-q,0,\partial_{\lambda})^{-1}\). Using formula (43) we get
\[\Phi(q,\lambda)=\int_{\mathrm{Bl}(X)}\sum_{d=0}^{\infty}\frac{(-1)^{dn}(-q)^{ d}e\partial_{\lambda}}{(e\partial_{\lambda}+d)^{n}\prod_{i=1}^{d-1}(e\partial_{ \lambda}+i)^{n-1}}\ \partial_{\lambda}^{d(n-1)}\ \widetilde{I}^{(-m)}(\lambda)q^{e}e^{-\pi \mathbf{i}e}\beta. \tag{53}\]
Using that \(\theta\,e=e\,(\theta-1)\) we get the relation \(\widetilde{I}^{(-m)}(\lambda)\,e=e\partial_{\lambda}\widetilde{I}^{(-m)}(\lambda)\) (see Lemma 2, a), where slightly abusing the notation we denoted by \(e\) the operator of classical cup product multiplication by \(e\). Therefore,
\[\widetilde{I}^{(-m)}(\lambda)q^{e}e^{-\pi\mathbf{i}e}\beta = q^{e\partial_{\lambda}}(2\pi)^{(1-n)/2}\,\Gamma(1-e\partial_{ \lambda})^{n-1}\,(2\pi\mathbf{i}\,e\partial_{\lambda})\,\widetilde{I}^{(-m)}( \lambda)1=\] \[= q^{e\partial_{\lambda}}(2\pi)^{(1-n)/2}\,\Gamma(1-e\partial_{ \lambda})^{n-1}\,(2\pi\mathbf{i}\,e\partial_{\lambda})\,e^{-(n-1)e\partial_{ \lambda}\partial_{m}}\left(\frac{\lambda^{\frac{n}{2}+m-\frac{1}{2}}}{\Gamma( \frac{n}{2}+m+\frac{1}{2})}\right).\]
Let us substitute the above formula for \(\widetilde{I}^{(-m)}(\lambda)q^{e}e^{-\pi\mathbf{i}e}\beta\) in (53). Note that everywhere the operator \(e\) comes together with the differentiation operator \(\partial_{\lambda}\). On the other hand, since in the entire expression only the coefficient in front of \(e^{n}\) contributes, we may remove \(\partial_{\lambda}\) from \(e\partial_{\lambda}\) and apply to the entire expression the differential operator \(\partial_{\lambda}^{n}\), that is, change \(\partial_{\lambda}^{d(n-1)}\) to \(\partial_{\lambda}^{d(n-1)+n}\). We get the following formula for \(\Phi(q,\lambda)\)
\[(2\pi)^{(1-n)/2}\,2\pi\mathbf{i}\sum_{d=0}^{\infty}\,\int_{\mathrm{Bl}(X)}\, \frac{(-1)^{dn+d}q^{d+e}e^{2}}{(e+d)^{n}\prod_{i=1}^{d-1}(e+i)^{n-1}}\ \Gamma(1-e)^{n-1}\,\partial_{\lambda}^{d(n-1)+n}\ e^{-(n-1)e\partial_{m}} \left(\frac{\lambda^{\frac{n}{2}+m-\frac{1}{2}}}{\Gamma(\frac{n}{2}+m+\frac{1 }{2})}\right).\]
Note that
\[\partial_{\lambda}^{d(n-1)+n}\ e^{-(n-1)e\partial_{m}}\left(\frac{\lambda^{ \frac{n}{2}+m-\frac{1}{2}}}{\Gamma(\frac{n}{2}+m+\frac{1}{2})}\right)=\frac{ \lambda^{-\frac{n}{2}-(n-1)(d+e)+m-\frac{1}{2}}}{\Gamma(-\frac{n}{2}-(n-1)(d+ e)+m+\frac{1}{2})}\]
and
\[\Gamma(1-e)=(-e)(-e-1)\cdots(-e-d)\Gamma(-e-d)=(-1)^{d+1}e(e+1)\cdots(e+d) \Gamma(-e-d).\]
Since \(\int_{\mathrm{Bl}(X)}e^{n}=(-1)^{n-1}\) we can replace \(\int_{\mathrm{Bl}(X)}\) with \((-1)^{n-1}\,\mathrm{Res}_{e=0}\,\frac{de}{e^{n+1}}.\) Note that \(dn+d+(d+1)(n-1)+n-1=2dn+2n-2\) is an even number, so that the signs that appear in our formula cancel out exactly. We get
\[\Phi(q,\lambda)=(2\pi)^{(1-n)/2}\,2\pi\mathbf{i}\,\sum_{d=0}^{\infty}\,\mathrm{ Res}_{e=0}\,\frac{de}{d+e}q^{d+e}\Gamma(-d-e)^{n-1}\,\frac{\lambda^{-\frac{n}{2}-(n-1)(d+e) +m-\frac{1}{2}}}{\Gamma(-\frac{n}{2}-(n-1)(d+e)+m+\frac{1}{2})}.\]
Let us substitute \(x\coloneqq-e-d\), then the above formula becomes
\[\Phi(q,\lambda)=(2\pi)^{(1-n)/2}\,2\pi\mathbf{i}\,\sum_{d=0}^{\infty}\,\operatorname {Res}_{x=-d}\frac{dx}{x}q^{-x}\Gamma(x)^{n-1}\,\frac{\lambda^{-\frac{n}{2}+(n-1) x+m-\frac{1}{2}}}{\Gamma(-\frac{n}{2}+(n-1)x+m+\frac{1}{2})}.\]
The sum of infinitely many residues can be replaced with an integral of the form \(\int_{\epsilon-\mathbf{i}\infty}^{\epsilon+\mathbf{i}\infty}dx\), where \(\epsilon>0\) is a positive real number. Let us sketch the proof of this claim. Let us fix a real number \(\delta\in(\frac{1}{2},1)\), such that, \(\mu\coloneqq(n-1)(\delta-1/2)\in(\frac{1}{2},1)\). Suppose that \(K\geq 1\) is an integer. Let us consider the rectangular contour given by the boundary of the rectangle with vertices \(\epsilon-\mathbf{i}K,\delta-K-\mathbf{i}K,\delta-K+\mathbf{i}K\), and \(\epsilon+\mathbf{i}K\). The contour is divided into two parts: the straight line segment \([\epsilon-\mathbf{i}K,\epsilon+\mathbf{i}K]\) and its complement, which we denote by \(C_{K}\); see Figure 1, where these two pieces are colored blue and red, respectively. By the Cauchy residue formula, the integral along this contour coincides with the partial sum \(2\pi\mathbf{i}\sum_{d=0}^{K-1}\operatorname{Res}_{x=-d}\). On the other hand, using the standard asymptotic estimates for the \(\Gamma\)-function (see Appendix A), one can prove that if \(\lambda>(n-1)q^{1/(n-1)}\) is a real number then the integral along \(C_{K}\) tends to \(0\) when \(K\to\infty\). We get
\[\Phi(q,\lambda)=(2\pi)^{(1-n)/2}\,\int_{\epsilon-\mathsf{i}\infty}^{\epsilon+ \mathsf{i}\infty}\,q^{-x}\Gamma(x)^{n-1}\,\frac{\lambda^{-\frac{n}{2}+(n-1)x+m -\frac{1}{2}}}{\Gamma(-\frac{n}{2}+(n-1)x+m+\frac{1}{2})}\,\frac{dx}{x}. \tag{54}\]
Let us denote by \(G(q,\lambda)\) the RHS of (54). Note that \(G(q,\lambda)\), after replacing \(1/x\) with \(\Gamma(x)/\Gamma(x+1)\), becomes a _Mellin-Barnes_ integral. The analytic properties of such integrals were studied by Dixon and Ferrar (see [7]). In their terminology \(G(q,\lambda)\) is a Mellin-Barnes integral of the third type. Using the standard asymptotic estimates for the \(\Gamma\)-function it is easy to prove that the integral is convergent for all positive real \(\lambda\) and that it is divergent for \(\operatorname{Im}(\lambda)\neq 0\). Since the series (52), viewed as a Laurent series in \(\lambda^{-1}\), is convergent for \(|\lambda|>u(q)=(n-1)q^{1/(n-1)}\), we get that \(\Phi(q,\lambda)\) is the analytic continuation of the restriction of \(G(q,\lambda)\) to the interval \([(n-1)q^{1/(n-1)},+\infty)\).
**Lemma 9**.: _The Mellin-Barnes integral \(G(q,\lambda)=0\) for \(\lambda\leq(n-1)q^{1/(n-1)}\)._
Figure 1. Integration contours

Proof.: Let \(R>0\) be a sufficiently big positive number. Let us fix \(\delta\in(0,1)\). We would like to deform the contour \(\epsilon+\mathbf{i}\mathbb{R}\) into the contour consisting of the \(3\) linear pieces \(-\mathbf{i}-s\)\((-\infty<s\leq-\epsilon)\), \(s\mathbf{i}+\epsilon\)\((-1\leq s\leq 1)\), and \(\mathbf{i}+s\)\((\epsilon\leq s<+\infty)\). The integral \(G(q,\lambda)\) is a limit as \(R\to\infty\) of the integral over \(s\mathbf{i}+\epsilon\)\((-T\leq s\leq T)\) where \(T:=\sqrt{R^{2}-\epsilon^{2}}\), while the integral over the deformed contour is a limit as \(R\to\infty\) of the integral over the contour consisting of the 3 linear pieces \(-\mathbf{i}-s\)\((-\sqrt{R^{2}-1}<s\leq-\epsilon)\), \(s\mathbf{i}+\epsilon\)\((-1\leq s\leq 1)\), and \(\mathbf{i}+s\)\((\epsilon\leq s<\sqrt{R^{2}-1})\). The difference between the two integrals is an integral over the two arcs \(C_{R}:Re^{\mathbf{i}\theta}\)\((\arcsin(1/R)\leq\theta\leq\arcsin T/R)\) and \(\overline{C}_{R}:Re^{\mathbf{i}\theta}\)\((-\arcsin(T/R)\leq\theta\leq-\arcsin 1/R)\). One has to prove that
\[\lim_{R\to+\infty}\int_{C_{R}}\,\text{or}\,\,\overline{C}_{R}\,q^{-x}\Gamma(x )^{n-1}\,\frac{\lambda^{-\frac{n}{2}+(n-1)x+m-\frac{1}{2}}}{\Gamma(-\frac{n}{ 2}+(n-1)x+m+\frac{1}{2})}\,\frac{dx}{x}=0.\]
This is proved in the same way as in [7], Section 5. Namely, divide \(C_{R}\) into two pieces \(C_{R}^{\prime}:Re^{\mathbf{i}\theta}\)\((\arcsin(1/R)\leq\theta\leq\delta)\) and \(C_{R}^{\prime\prime}:Re^{\mathbf{i}\theta}\)\((\delta\leq\theta\leq\arcsin T/R)\) and then use the standard asymptotic estimates for the \(\Gamma\)-function and the assumption \(|\lambda|\leq(n-1)q^{1/(n-1)}\).
Finally, to complete the proof, note that the integral is independent of \(\epsilon>0\), because the \(\Gamma\)-functions in \(G(q,\lambda)\) do not have poles on the positive real axis. Letting \(\epsilon\to+\infty\) and using again the standard asymptotic estimates for the \(\Gamma\)-function we get that \(G(q,\lambda)=0\) for \(\lambda\leq(n-1)q^{1/(n-1)}\).
Using the above Lemma we get
\[\int_{u(q)}^{\infty}e^{-\lambda s}G(q,\lambda)d\lambda=\int_{0}^{\infty}e^{- \lambda s}G(q,\lambda)d\lambda.\]
Substituting \(G(q,\lambda)\) with the corresponding Mellin-Barnes integral, exchanging the order of integration and using that
\[\int_{0}^{\infty}e^{-\lambda s}\frac{\lambda^{-\frac{n}{2}+(n-1)x+m-\frac{1}{ 2}}}{\Gamma(-\frac{n}{2}+(n-1)x+m+\frac{1}{2})}\,d\lambda=s^{\frac{n}{2}-(n-1 )x-m-\frac{1}{2}},\]
we get
\[\int_{u(q)}^{\infty}e^{-\lambda s}G(q,\lambda)d\lambda=(2\pi)^{(1-n)/2}\,\int _{\epsilon-\mathbf{i}\infty}^{\epsilon+\mathbf{i}\infty}\,q^{-x}\Gamma(x)^{ n-1}\,s^{\frac{n}{2}-(n-1)x-m-\frac{1}{2}}\,\frac{dx}{x}. \tag{55}\]
### Oscillatory integral
Let us consider the following family of functions
\[f(x,q)=x_{1}+\cdots+x_{n-2}+\frac{q}{x_{1}\cdots x_{n-2}}\,(1+x_{n-1}^{2}+x_{n }^{2}),\]
where \(q\) is a positive real number and
\[x=(x_{1},\ldots,x_{n})\in V:=\mathbb{C}^{n}\setminus\{x_{1}\cdots x_{n-2}(1+x _{n-1}^{2}+x_{n}^{2})=0\}\]
Let \(\Gamma:=\mathbb{R}_{>0}^{n-2}\times\mathbb{R}^{2}\subset V\), that is, \(\Gamma\) is the real \(n\)-dimensional cycle in \(V\) consisting of points \(x=(x_{1},\ldots,x_{n})\), such that, the first \(n-2\) coordinates are positive real numbers and the last two ones are arbitrary real numbers. Note that the cycle \(\Gamma\) belongs to the following group of semi-infinite homology cycles:
\[\varprojlim H_{n}(V,\operatorname{Re}(f(x,q))>M,\mathbb{Z})\cong\mathbb{Z}^{n -1},\]
where the inverse limit is taken over all \(M\in\mathbb{R}\).
**Proposition 12**.: _Under the above notation the following identity holds:_
\[2\mathbf{i}\,\int_{\Gamma}e^{-f(x,q)}\frac{dx_{1}\wedge\cdots\wedge dx_{n}}{x _{1}\cdots x_{n-2}(1+x_{n-1}^{2}+x_{n}^{2})}=\int_{\epsilon-\mathbf{i}\infty}^ {\epsilon+\mathbf{i}\infty}q^{-x}\Gamma(x)^{n-1}\frac{dx}{x},\]
_where the orientation of \(\Gamma\) is induced from the standard orientation on \(\mathbb{R}^{n}\)._
Proof.: Let us integrate out \(x_{n-1}\) and \(x_{n}\). Using polar coordinates \(x_{n-1}=r\cos\theta\) and \(x_{n}=r\sin\theta\), since \(dx_{n-1}\wedge dx_{n}=r\,dr\wedge d\theta\), we get
\[\int_{\mathbb{R}^{2}}e^{-K(1+x_{n-1}^{2}+x_{n}^{2})}\frac{dx_{n-1}\wedge dx_{n} }{1+x_{n-1}^{2}+x_{n}^{2}}=\int_{0}^{\infty}e^{-K(1+r^{2})}\int_{0}^{2\pi} \frac{rdr\wedge d\theta}{1+r^{2}}=\pi\int_{1}^{\infty}e^{-Ku}\frac{du}{u},\]
where \(K\) is a positive real number and for the 2nd equality we used the substitution \(u=1+r^{2}\). Applying the above formula to our oscillatory integral, we get
\[\int_{\Gamma}e^{-f(x,q)}\frac{dx_{1}\wedge\cdots\wedge dx_{n}}{x_{1}\cdots x_{n-2}(1+x_{n-1}^{2}+x_{n}^{2})}=\pi\,\int_{\mathbb{R}_{>0}^{n-2}}\,\int_{1}^{\infty}e^{-\left(x_{1}+\cdots+x_{n-2}+\frac{qu}{x_{1}\cdots x_{n-2}}\right)}\frac{du}{u}\,\,\frac{dx_{1}\cdots dx_{n-2}}{x_{1}\cdots x_{n-2}}, \tag{56}\]
where \(dx_{1}\cdots dx_{n-2}\) is the standard Lebesgue measure on \(\mathbb{R}_{>0}^{n-2}\). On the other hand, let us recall the oscillatory integral
\[J(q)\coloneqq\int_{\mathbb{R}_{>0}^{n-2}}\exp\Big{(}-\Big{(}x_{1}+\cdots+x_{n -2}+\frac{q}{x_{1}\cdots x_{n-2}}\Big{)}\Big{)}\,\frac{dx_{1}\cdots dx_{n-2}} {x_{1}\cdots x_{n-2}}.\]
Note that the Mellin transform of \(J(q)\) is
\[\{\mathcal{M}J\}(x)=\int_{0}^{\infty}q^{x-1}J(q)dq=\Gamma(x)^{n-1}.\]
Recalling the Mellin inversion theorem we get
\[J(q)=\frac{1}{2\pi\mathbf{i}}\int_{\epsilon-\mathfrak{i}\infty}^{\epsilon+ \mathfrak{i}\infty}q^{-x}\Gamma(x)^{n-1}dx\]
where \(\epsilon>0\) is a positive real number. Let us apply the above formula to (56). Namely, on the RHS of (56), after exchanging the order of the integration, we get
\[\pi\int_{1}^{\infty}J(qu)\frac{du}{u}=\frac{1}{2\mathbf{i}}\int_{1}^{\infty} \int_{\epsilon-\mathfrak{i}\infty}^{\epsilon+\mathfrak{i}\infty}(qu)^{-x} \Gamma(x)^{n-1}dx\,\frac{du}{u}.\]
Exchanging again the order of integration and using that
\[\int_{1}^{\infty}u^{-x}\,\frac{du}{u}=\left.\frac{u^{-x}}{-x}\right|_{u=1}^{u=\infty}=\frac{1}{x}\]
we get the formula stated in the proposition.
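The Mellin-transform identity \(\{\mathcal{M}J\}(x)=\Gamma(x)^{n-1}\) used in the proof can also be checked numerically. The following sketch is an illustrative check only, for the smallest nontrivial case \(n=3\), in which \(J(q)=2K_{0}(2\sqrt{q})\) with \(K_{0}\) the modified Bessel function of the second kind; all numerical parameters below are chosen purely for illustration.

```python
import numpy as np
from scipy import integrate, special

# For n = 3 there is a single integration variable and
# J(q) = int_0^inf exp(-(t + q/t)) dt/t = 2*K_0(2*sqrt(q)).
def J(q):
    return 2.0 * special.k0(2.0 * np.sqrt(q))

# Compare the numerical Mellin transform of J with Gamma(x)^2.
for x in (1.0, 1.5, 2.5):
    mellin, _ = integrate.quad(lambda q: q ** (x - 1.0) * J(q), 0.0, np.inf)
    print(f"x = {x}:  Mellin(J)(x) = {mellin:.6f},  Gamma(x)^2 = {special.gamma(x)**2:.6f}")
```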
### Laplace transform
The function \(f(x,q)\) has a minimum over \(x\in\Gamma\) achieved at the critical point \(x_{1}=\cdots=x_{n-2}=q^{1/(n-1)}\), \(x_{n-1}=x_{n}=0\). Note that the corresponding critical value is \(u(q)=(n-1)q^{1/(n-1)}\). Let us consider the map \(\Gamma\to[u(q),+\infty)\), \(x\mapsto f(x,q)\). The fiber over \(\lambda\in(u(q),+\infty)\) is the real algebraic hypersurface \(\Gamma_{\lambda}\subset\Gamma\) defined by
\[x_{1}+\cdots+x_{n-2}+\frac{q}{x_{1}\cdots x_{n-2}}\,(1+x_{n-1}^{2}+x_{n}^{2})=\lambda.\]
It is easy to see that \(\Gamma_{\lambda}\) is compact and it has the homotopy type of a sphere. Indeed, the map
\[\Gamma\setminus f^{-1}(u(q))\to(u(q),+\infty),\quad x\mapsto f(x,q)\]
is proper and regular. Therefore, according to the Ehresmann's fibration theorem, it must be a locally trivial fibration and hence a trivial fibration, because \((u(q),+\infty)\) is a contractible manifold. If \(\lambda\) is sufficiently close to \(u(q)\), then \(\Gamma_{\lambda}\) is contained in a Morse coordinate neighborhood of the critical point \((q^{1/(n-1)},\ldots,q^{1/(n-1)},0,0)\). Switching to Morse coordinates for \(f\) we get that the fiber \(\Gamma_{\lambda}\) is diffeomorphic to the \((n-1)\)-dimensional sphere.
Let us denote by \(\Gamma_{\leq\lambda}\) the subset of \(\Gamma\) defined by the inequality
\[x_{1}+\cdots+x_{n-2}+\frac{q}{x_{1}\cdots x_{n-2}}\,(1+x_{n-1}^{2}+x_{n}^{2}) \leq\lambda.\]
Note that \(\Gamma_{\leq\lambda}\) is a manifold with boundary and its boundary is precisely \(\partial\Gamma_{\leq\lambda}=\Gamma_{\lambda}\). Put
\[\mathcal{I}(q,\lambda):=\int_{\Gamma_{\leq\lambda}}\frac{(\lambda-f(x,q))^{m- \frac{n}{2}-\frac{1}{2}}}{\Gamma(m-\frac{n}{2}+\frac{1}{2})}\,\omega\]
where
\[\omega:=\frac{dx_{1}\wedge\cdots\wedge dx_{n}}{x_{1}\cdots x_{n-2}(1+x_{n-1}^{ 2}+x_{n}^{2})}.\]
**Lemma 10**.: _The following formula holds:_
\[\int_{\Gamma}e^{-f(x,q)s}\omega=s^{m-\frac{n}{2}+\frac{1}{2}}\int_{u(q)}^{ \infty}e^{-\lambda s}\mathcal{I}(q,\lambda)d\lambda.\]
Proof.: Using Fubini's theorem, we transform
\[\mathcal{I}(q,\lambda)=\int_{u(q)}^{\lambda}\frac{(\lambda-\mu)^{m-\frac{n}{2 }-\frac{1}{2}}}{\Gamma(m-\frac{n}{2}+\frac{1}{2})}\,\int_{\Gamma_{\mu}}\frac{ \omega}{df}\,d\mu.\]
Therefore,
\[\int_{u(q)}^{\infty}e^{-\lambda s}\mathcal{I}(q,\lambda)d\lambda=\int_{u(q)}^ {\infty}\int_{u(q)}^{\lambda}\,e^{-\lambda s}\,\frac{(\lambda-\mu)^{m-\frac{n }{2}-\frac{1}{2}}}{\Gamma(m-\frac{n}{2}+\frac{1}{2})}\,\int_{\Gamma_{\mu}} \frac{\omega}{df}\,d\mu\,d\lambda.\]
Exchanging the order of the integration we get
\[\int_{u(q)}^{\infty}\left(\int_{\mu}^{\infty}e^{-\lambda s}\frac{(\lambda-\mu )^{m-\frac{n}{2}-\frac{1}{2}}}{\Gamma(m-\frac{n}{2}+\frac{1}{2})}\,d\lambda \right)\int_{\Gamma_{\mu}}\frac{\omega}{df}\,d\mu=s^{-m+\frac{n}{2}-\frac{1}{ 2}}\int_{u(q)}^{\infty}e^{-\mu s}\,\int_{\Gamma_{\mu}}\frac{\omega}{df}\,d\mu.\]
Recalling again Fubini's theorem we get that the above iterated integral coincides with \(\int_{\Gamma}e^{-f(x,q)s}\omega\). The formula stated in the lemma follows.
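In the proof above, the inner \(\lambda\)-integral is evaluated by the substitution \(t=\lambda-\mu\) and the standard Laplace integral of a power:

\[\int_{\mu}^{\infty}e^{-\lambda s}\,\frac{(\lambda-\mu)^{a-1}}{\Gamma(a)}\,d\lambda=e^{-\mu s}\int_{0}^{\infty}e^{-ts}\,\frac{t^{a-1}}{\Gamma(a)}\,dt=e^{-\mu s}\,s^{-a},\qquad a=m-\tfrac{n}{2}+\tfrac{1}{2}.\]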
According to the above lemma, the Laplace transform of the integral \(\mathcal{I}(q,\lambda)\) is given by the following formula:
\[\int_{u(q)}^{\infty}e^{-\lambda s}\mathcal{I}(q,\lambda)\,d\lambda=s^{-m+\frac{n}{2}-\frac{1}{2}}\,\int_{\Gamma}e^{-f(x,q)s}\omega=:F(s).\]
Let us recall Proposition 12 and note that \(f(x,q)\) has the following rescaling symmetry: \(f(s\cdot x,s^{n-1}q)=f(x,q)s\), where
\[s\cdot(x_{1},\dots,x_{n})=(sx_{1},\dots,sx_{n-2},x_{n-1},x_{n}).\]
Note that if \(s>0\) is a positive real number, then the integration cycle and the holomorphic form \(\omega\) are invariant under the rescaling action by \(s\). Therefore, after the change of variables \(x\mapsto s\cdot x\), the formula from Proposition 12 applied with \(q\) replaced by \(s^{n-1}q\) yields the following formula:
\[\int_{\Gamma}e^{-f(x,q)s}\omega=\frac{1}{2\mathbf{i}}\,\int_{\epsilon- \mathfrak{i}\infty}^{\epsilon+\mathfrak{i}\infty}q^{-x}\Gamma(x)^{n-1}s^{-(n-1 )x}\frac{dx}{x}.\]
Therefore, the function
\[F(s)=\frac{1}{2\mathbf{i}}\,\int_{\epsilon-\mathfrak{i}\infty}^{\epsilon+ \mathfrak{i}\infty}q^{-x}\Gamma(x)^{n-1}s^{-(n-1)x-m+\frac{n}{2}-\frac{1}{2}} \frac{dx}{x}.\]
Comparing the above formula with (55) and using that the Laplace transformation is injective on smooth functions, we get that \(G(q,\lambda)=2\mathbf{i}(2\pi)^{(1-n)/2}\mathcal{I}(q,\lambda)\). Finally, in order to complete the proof of Proposition 11, we need only to check that the analytic continuation of \(\mathcal{I}(q,\lambda)\) around \(\lambda=u(q)=(n-1)q^{1/(n-1)}\) transforms \(\mathcal{I}(q,\lambda)\) into \(-\mathcal{I}(q,\lambda)\). This however is a local computation. Indeed, if \(\lambda\) is sufficiently close to \((n-1)q^{1/(n-1)}\), then the integration cycle defining \(\mathcal{I}(q,\lambda)\) is
sufficiently close to the critical point \((q^{1/(n-1)},\ldots,q^{1/(n-1)},0,0)\). By switching to Morse coordinates we get
\[\int_{\Gamma_{\mu}}\omega/df=(\mu-u(q))^{\frac{n}{2}-1}P(q,\mu),\]
where \(P(q,\mu)\) is holomorphic at \(\mu=u(q)\) (see [1], Section 12.1, Lemma 2). Therefore,
\[\mathcal{I}(q,\lambda)=\int_{u(q)}^{\lambda}\frac{(\lambda-\mu)^{m-\frac{n}{2 }-\frac{1}{2}}}{\Gamma(m-\frac{n}{2}+\frac{1}{2})}\,(\mu-u(q))^{\frac{n}{2}-1} P(q,\mu)\,d\mu. \tag{57}\]
Changing the variables \(\mu-u(q)=t(\lambda-u(q))\Rightarrow\lambda-\mu=(1-t)(\lambda-u(q))\) and \(d\mu=(\lambda-u(q))dt\), we get
\[\int_{u(q)}^{\lambda}\frac{(\lambda-\mu)^{m-\frac{n}{2}-\frac{1}{ 2}}}{\Gamma(m-\frac{n}{2}+\frac{1}{2})}\,(\mu-u(q))^{i+\frac{n}{2}-1}d\mu =\int_{0}^{1}\frac{(1-t)^{m-\frac{n}{2}-\frac{1}{2}}}{\Gamma(m- \frac{n}{2}+\frac{1}{2})}\,t^{i+\frac{n}{2}-1}dt\,\,(\lambda-u(q))^{i+m-1/2}=\] \[=\Gamma(i+n/2)\,\frac{(\lambda-u(q))^{i+m-1/2}}{\Gamma(i+m+1/2)}.\]
Substituting the Taylor series expansion of \(P(q,\mu)=\sum_{i=0}^{\infty}P_{i}(q)(\mu-u(q))^{i}\) at \(\mu=u(q)\) in (57) and using the above formula, we get
\[\mathcal{I}(q,\lambda)=(\lambda-u(q))^{m-1/2}\,\sum_{i=0}^{\infty}\frac{\Gamma (i+n/2)}{\Gamma(i+m+1/2)}\,P_{i}(q)\,(\lambda-u(q))^{i}.\]
The above expansion is clearly anti-invariant under the analytic continuation around \(\lambda=u(q)\).
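To spell out the last claim: the analytic continuation around \(\lambda=u(q)\) replaces \(\lambda-u(q)\) by \(e^{2\pi\mathbf{i}}(\lambda-u(q))\), so the prefactor transforms as

\[(\lambda-u(q))^{m-1/2}\ \longmapsto\ e^{2\pi\mathbf{i}(m-1/2)}\,(\lambda-u(q))^{m-1/2}=-(\lambda-u(q))^{m-1/2},\]

because \(m\) is an integer, while the power series in \(\lambda-u(q)\) is single valued. Hence \(\mathcal{I}(q,\lambda)\) is transformed into \(-\mathcal{I}(q,\lambda)\), which completes the proof of Proposition 11.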
## Appendix A Bending the contour
For the sake of completeness we would like to prove that if \(\lambda\) is a positive real number, such that, \(\lambda>(n-1)q^{1/(n-1)}\), then
\[\lim_{K\to+\infty}\int_{C_{K}}q^{-x}\lambda^{(n-1)x}\,\frac{\Gamma(x)^{n-1}}{ \Gamma(-\frac{n}{2}+(n-1)x+m+\frac{1}{2})}\,\frac{dx}{x}=0,\]
where \(C_{K}\) is the contour defined in Section 6.1 (see Figure 1). The integrand of the above integral differs from the integrand in (54) by the constant factor \(\lambda^{-\frac{n}{2}+m-\frac{1}{2}}\). Therefore, the vanishing result needed in the derivation of (54) follows from the above statement.
Let us consider first the upper horizontal part of \(C_{K}\), that is, \(x=a+\mathbf{i}K\), \(\delta-K\leq a\leq\epsilon\). The estimate in this case is a direct consequence of the Stirling's formula for the gamma function. Namely, recall that if \(x=a+\mathbf{i}b\notin(-\infty,0]\), then
\[|\Gamma(x)|=\sqrt{2\pi}e^{-a-|b|\,|\mathrm{Arg}(x)|}|x|^{a-1/2}(1+o(1)),\]
where \(-\pi<\mathrm{Arg}(x)<\pi\) and \(o(1)\to 0\) uniformly when \(|x|\to\infty\) in any proper subsector \(-\pi<\alpha\leq\mathrm{Arg}(x)\leq\beta<\pi\). Put \(c:=-\frac{n}{2}+m+\frac{1}{2}\). Using Stirling's formula we get
\[|\Gamma((n-1)x+c)|=\sqrt{2\pi}\,e^{-(n-1)a-c-(n-1)|b|\,|\mathrm{Arg}(x+c/(n-1))|}\,|(n-1)x+c|^{(n-1)a+c-1/2}\,(1+o(1)).\]
Note that \(|\,\mathrm{Arg}(x+c/(n-1))|\leq|\,\mathrm{Arg}(x)|\) because we may choose \(m\) so big that \(c>0\) while
\[|(n-1)x+c|^{(n-1)a+c-1/2}=(n-1)^{(n-1)a}\,|x|^{(n-1)a+c-1/2}\,O(1).\]
Moreover, both \((n-1)x+c\) and \(x\) belong to the sector \(-\frac{3\pi}{4}\leq\mathrm{Arg}(x)\leq\frac{3\pi}{4}\) for all \(x\) in the horizontal integration contour. Therefore, we have an estimate of the form
\[|\Gamma((n-1)x+c)|^{-1}\leq\mathrm{const}\,(n-1)^{-(n-1)a}\,|x|^{-(n-1)a-c+1/2} \,e^{(n-1)a+(n-1)|b|\,|\mathrm{Arg}(x+c/(n-1))|},\]
for all \(x\) in the upper horizontal part of \(C_{K}\), where the constant is independent of \(K\). Note that \(|q^{-x}\lambda^{(n-1)x}|=q^{-a}\lambda^{(n-1)a}\). Combining all these estimates together we get that the absolute value of the integrand along the upper horizontal contour can be bounded from above by
\[\operatorname{const}\left((n-1)q^{1/(n-1)}/\lambda\right)^{-(n-1)a}|x|^{-m-1/2}|da|\leq\operatorname{const}\,K^{-m-1/2}|da|,\]
where we used that \(\lambda>(n-1)q^{1/(n-1)}\) and \(|x|^{2}\leq K^{2}+(K+\epsilon-\delta)^{2}\leq(1+|\epsilon-\delta|)^{2}K^{2}\) for all \(x=a+\mathbf{i}K\) (\(\delta-K\leq a\leq\epsilon\)). Therefore, up to a constant independent of \(K\) the integral is bounded by \(K^{-m+1/2}\) which proves that the integral vanishes in the limit \(K\to\infty\).
The estimate for the lower horizontal part of \(C_{K}\), that is, \(x=a-\mathbf{i}K\), \(\delta-K\leq a\leq\epsilon\) is the same as above. Let us consider the vertical part \(x=\delta-K+\mathbf{i}b\), \(-K\leq b\leq K\). In order to apply Stirling's formula, let us first recall the reflection formula for the gamma function
\[\Gamma(x)=\Gamma(1-x)^{-1}\,\frac{2\pi\mathbf{i}}{e^{2\pi\mathbf{i}x}-1}\,e^{ \pi\mathbf{i}x}.\]
If \(x\) is on the vertical part of the integration contour, then \(-x\) belongs to a proper subsector of \(-\pi<\operatorname{Arg}(x)<\pi\) in which the Stirling's formula for \(\Gamma(1-x)=(-x)\Gamma(-x)\) can be applied, that is,
\[|\Gamma(x)|=\sqrt{2\pi}\,\frac{e^{-\pi b}}{|e^{2\pi\mathbf{i}a}e^{-2\pi b}-1|} \,|x|^{a-1/2}\,e^{-a+|b|\,|\operatorname{Arg}(-x)|}\,(1+o(1)),\]
where \(x=a+\mathbf{i}b\). Similarly,
\[|\Gamma((n-1)x+c)|= \sqrt{2\pi}\,\frac{e^{-\pi(n-1)b}}{|e^{2\pi\mathbf{i}((n-1)a+c)} e^{-2\pi(n-1)b}-1|}\,|(n-1)x+c|^{(n-1)a+c-1/2}\times\] \[\times e^{-(n-1)a-c+(n-1)|b|\,|\operatorname{Arg}(-x-c/(n-1))|} \,(1+o(1)).\]
Note that if \(x=a+\mathbf{i}b\) is on the integration contour, then \(a=\delta-K\) and \((n-1)a+c=\mu+m-(n-1)K\), where \(\mu=(n-1)(\delta-1/2)\Rightarrow e^{2\pi\mathbf{i}a}=e^{2\pi\mathbf{i}\delta}\) and \(e^{2\pi\mathbf{i}((n-1)a+c)}=e^{2\pi\mathbf{i}\mu}\) are constants independent of \(K\). Moreover, we chose both \(\mu\) and \(\delta\) to be non-integers, so \(e^{2\pi\mathbf{i}\delta}-1\) and \(e^{2\pi\mathbf{i}\mu}-1\) are non-zero. We get
\[\frac{|\Gamma(x)|^{n-1}}{|\Gamma((n-1)x+c)x|}\leq\operatorname{const}\,\frac{|e^{2\pi\mathbf{i}\mu}e^{-2\pi(n-1)b}-1|}{|e^{2\pi\mathbf{i}\delta}e^{-2\pi b}-1|^{n-1}}\,\frac{|x|^{(n-1)(a-1/2)-1}}{|(n-1)x+c|^{(n-1)a+c-1/2}}\times\]
\[\times e^{(n-1)|b|\big{(}|\operatorname{Arg}(-x)|-|\operatorname{Arg}(-x-c/(n-1))|\big{)}}(1+o(1)).\]
The first fraction is clearly a bounded function in \(b\in\mathbb{R}\). For the second one we have
\[\frac{|x|^{(n-1)(a-1/2)-1}}{|(n-1)x+c|^{(n-1)a+c-1/2}}\leq\operatorname{const }\,\frac{|x|^{-m-1/2}}{(n-1)^{(n-1)a}}.\]
Finally, for the exponential term, let us look at the triangle formed by vectors \(-x\) and \(-x-c/(n-1)\). The area of this triangle is \(\frac{|b|c}{2(n-1)}\). On the other hand, the difference \(\theta\coloneqq|\operatorname{Arg}(-x)|-|\operatorname{Arg}(-x-c/(n-1))|\) as \(K\to\infty\) tends to \(0\) uniformly in \(x=\delta-K+\mathbf{i}b\) for \(|b|\leq K\). Therefore, up to a constant independent of \(K\) we can bound \(\theta\) from above by \(\sin\theta\). Using that the area of the triangle is also \(\frac{1}{2}|x|\,|x+c/(n-1)|\,\sin\theta\) we get
\[(n-1)|b|\big{(}|\operatorname{Arg}(-x)|-|\operatorname{Arg}(-x-c/(n-1))|\big{)}=(n-1)|b|\theta\leq\operatorname{const}\,|b|\sin\theta\leq\operatorname{const}\,\frac{b^{2}c}{|x|\,|(n-1)x+c|}.\]
The above expression is bounded by a constant independent of \(K\). We get the following estimate:
\[\frac{|\Gamma(x)|^{n-1}}{|\Gamma((n-1)x+c)x|}\leq\operatorname{const}\,K^{-m-1 /2}\,(n-1)^{(n-1)K}\]
for all \(x=\delta-K+\mathbf{i}b\), \(-K\leq b\leq K\), where the constant is independent of \(K\). Finally, since \(|q^{-x}\lambda^{(n-1)x}|=q^{-a}\lambda^{(n-1)a}\), we get the following estimate
\[\left|q^{-x}\lambda^{(n-1)x}\,\frac{\Gamma(x)^{n-1}}{\Gamma((n-1)x+c)x}\right| \leq\operatorname{const}\left((n-1)q^{\frac{1}{n-1}}/\lambda\right)^{(n-1)K}K^ {-m-1/2}.\]
Since \(\lambda>(n-1)q^{\frac{1}{n-1}}\) the integral along the vertical segment of \(C_{K}\), up to a constant, is bounded by \(K^{-m+1/2}\). Therefore, the integral vanishes in the limit \(K\to\infty\).
|
2304.13619 | Recompositing of Vast Irregularly-Sampled Seismic Data via Compressed
Sensing Framework: An FPOCS Based on Seislet Transform Approach | Acquiring seismic data from irregular topographic surface is oftently
oppressed by irregular and nonequivalent source-receiver arrays and even more
it yields bad traces after storing the original signal. In the light of
preprocessing seismic data, we have to extract out most of the given signal,
thus further processing and interpretation can obtain extremely accurate
outcomes. We applied Compressed Sensing theorem on Sigmoid vast
irregularly-sampled seismic data based on the fast projection onto convex sets
(FPOCS) algorithm with sparsity constraint in the seislet transform domain,
which gives faster convergence than other conventional methods and is
preserving an optimum signals recovery. The FPOCS seislet transform approach
can achieve accurate and high data recovery results than other methods because
of a much sparser structure in the seislet transform domain as demonstrated.
Moreover, FPOCS algorithm is also efficient in minimizing the number of
required iterations to achieve optimum data refilling. | Hussein Muhammed | 2023-04-26T15:21:00Z | http://arxiv.org/abs/2304.13619v1 | Recompositing of Vast Irregularly - Sampled Seismic Data via Compressed Sensing Framework: An FPOCS Based on Seislet Transform Approach
###### Abstract
Acquiring seismic data from an irregular topographic surface is often hampered by irregular and nonequivalent source-receiver arrays, and it can moreover yield bad traces after the original signal is recorded. When preprocessing seismic data, we have to extract as much of the given signal as possible, so that further processing and interpretation can obtain highly accurate outcomes. We applied the Compressed Sensing theorem to the vastly irregularly-sampled Sigmoid seismic data using the fast projection onto convex sets (FPOCS) algorithm with a sparsity constraint in the seislet transform domain, which gives faster convergence than other conventional methods while preserving an optimal signal recovery. The FPOCS seislet-transform approach can achieve more accurate data recovery than other methods because of a much sparser structure in the seislet transform domain, as demonstrated. Moreover, the FPOCS algorithm is also efficient in minimizing the number of iterations required to achieve optimal data refilling.
## Introduction
Compressed Sensing aims to reconstruct a full high-resolution image, or any other target form of signal, from a dramatic subsampling of the given data, by assuming there is a universal picture or signal matrix that contains our input data. Even if we only have a small number of random pixels from that image, we are able to infer the active Fourier coefficients of that image. Its principal framework consists of solving a least-squares minimization problem with an \(L_{1}\)-norm condition on the recomposed model, which requires balancing a least-squares data-misfit constraint against a sparsity constraint on the reconstructed model. There are two conditions for achieving Compressive Sensing: first, we have to measure the seismic data and eliminate about half of it so that we obtain fewer signals in the Fourier basis; second, these seismic measurements must be collected randomly to allow signal recovery.
Several methods can be employed to solve this problem, such as iterative shrinkage thresholding (IST) and projection onto convex sets (POCS), which are common approaches for solving the minimization problem in the seismic data processing field. The key step in finding the sparsest solution to the optimization problem lies in transforming to the Fourier domain to find the sparsest matrix, or in solving the combinatorially hard problem of satisfying the system of equations over an essentially unlimited number of candidate choices. The actual development of this technology started in the mid 2000s (after Candes et al. 2006; Donoho, 2001; Donoho, 2006; Ying & Hao, 2010; Donoho & Tanner, 2010; Herrmann, 2010). The FPOCS algorithm (Gan et al. 2016) is equivalent to the fast iterative shrinkage-thresholding algorithm (FISTA) of Beck and Teboulle (2009). The seislet transform is sparser than other auxiliary sparse transforms (after Chen et al., 2014; Fomel and Liu, 2010).
Baraniuk & Steeghs (2017) addressed various topics and discussed several applications to seismic data acquisition and processing, while Elzanaty et al. (2018) gave an analysis of the restricted isometry constant (RIC) of finite-dimensional Gaussian measurement matrices to provide a tight lower bound on the maximum sparsity order of the given signal, which in turn allows signal recovery with a preset target probability. A compressive-sensing-based method to regularize non-stationary VSP data, which obtains improved data reconstructions, was proposed by Yu et al. (2020). This paper aims at introducing the Compressed Sensing method and a fast calculation method for restoring seismic data. The relevant concepts and theories are presented, and a synthetic example is given to demonstrate the FPOCS algorithm in comparison with well-known algorithms.
### Compressed Sensing Framework
The following equations give a generalized formulation of our framework.
Any compressible image/signal \(\mathbf{x}\in\mathbf{R}^{n}\) may be written as a sparse vector \(\mathbf{s}\in\mathbf{R}^{n}\) in a Fourier transform basis \(\mathbf{\Psi}\) as:
\[\mathbf{x}=\Psi.\mathbf{s} \tag{1}\]
Assume that we have some seismic measurements and that \(C\) is a measurement matrix representing a set of linear measurements \(\mathbf{y}\), which are a function of the compressible image/signal \(\mathbf{x}\); then:
\[\begin{split}&\mathbf{y}=C\ \mathbf{x}\\ &\mathbf{y}=C\Psi.\mathbf{s}\end{split} \tag{2}\]
Equation (2) defines an optimization or inverse problem, owing to the invertibility of the term \(\mathbf{\Psi}\) (the Fourier transform and its inverse); thus we solve for the sparsest solution that satisfies the whole system of equations by finding \(\mathbf{s}\).
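To illustrate equations (1) and (2), the following is a minimal numerical sketch, not part of the presented workflow: the random Gaussian measurement matrix, the identity sparsifying basis, and all parameter values below are assumptions made purely for illustration of recovering the sparse vector \(\mathbf{s}\) by iterative soft thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 128, 48, 5                      # signal length, measurements, nonzeros
Psi = np.eye(n)                           # stand-in for the sparsifying basis
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
C = rng.standard_normal((p, n)) / np.sqrt(p)   # random measurement matrix
A = C @ Psi
y = A @ s_true                            # y = C Psi s, eq. (2)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 1/L
s = np.zeros(n)
for _ in range(500):                      # ISTA: gradient step + soft threshold
    s = s + step * A.T @ (y - A @ s)
    s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)

print("relative recovery error:", np.linalg.norm(s - s_true) / np.linalg.norm(s_true))
```

With the seislet transform in place of \(\mathbf{\Psi}\) and the trace-sampling operator in place of \(C\), a similar thresholded iteration underlies the POCS-type methods discussed next.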
### FPOCS Algorithm
The FPOCS approach (Gan et al., 2016) solves the following minimization problem:
\[\min_{\mathbf{d}}\left\|\mathbf{d}_{obs}.-\mathbf{S}\mathbf{d}\right\|_{2}^{2 }+\lambda\left\|\mathbf{A}\mathbf{d}\right\|_{1} \tag{3}\]
where \(\mathbf{d}_{obs}\) is the collected seismic data, \(\mathbf{S}\) is the sampling operator, \(\mathbf{d}\) is the unknown estimated seismic data and \(\mathbf{A}\) is the sparsity-promoting transform.
The conventional projection onto convex sets algorithm is a widely used method for recompositing incomplete seismic traces, especially in the case of vast irregularly-sampled seismic data cast onto conventional grids. Liang et al. (2014) stated that the analysis-based approach emphasizes the sparsity of the canonically transformed coefficients, and is therefore likely to restore seismic data with smooth regions, while the synthesis-based method realizes the sparsest approximation of the given seismic data in the transformed domain.
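A minimal sketch of an FPOCS-style iteration for equation (3) is given below. It is not the authors' implementation: the seislet transform is replaced here by a 2-D FFT with hard thresholding purely as an assumed stand-in, and the function name, threshold schedule, and parameter values are illustrative only.

```python
import numpy as np

def fpocs_interpolate(d_obs, mask, n_iter=50, tau=0.02):
    """FISTA-like accelerated POCS for filling in missing traces.

    d_obs : 2-D array (time x trace) with zeros at the missing traces.
    mask  : boolean array, True where traces were actually recorded.
    tau   : relative threshold applied to the transform coefficients.
    """
    d, d_prev, t = d_obs.copy(), d_obs.copy(), 1.0
    for _ in range(n_iter):
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = d + ((t - 1.0) / t_new) * (d - d_prev)      # momentum step
        coeffs = np.fft.fft2(z)                         # stand-in for the seislet transform
        thresh = tau * np.max(np.abs(coeffs))
        coeffs[np.abs(coeffs) < thresh] = 0.0           # hard thresholding
        d_new = np.real(np.fft.ifft2(coeffs))
        d_new[mask] = d_obs[mask]                       # re-insert the observed traces
        d_prev, d, t = d, d_new, t_new
    return d
```

For the Sigmoid example, `mask` would mark the surviving traces of Figure 1, and the conventional POCS iteration is recovered by dropping the momentum step (setting `z = d`).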
### Seismic Data Examples
In order to demonstrate the idea of Compressed Sensing (CS), we use the Sigmoid reflectivity model (Figure 1) and decimate its traces to illustrate how efficient CS can be in reconstructing vast irregularly-sampled data. Figure 2(a) is the reconstructed reflectivity image after applying the projection onto convex sets (POCS) algorithm with \(f-k\) thresholding, while Figure 2(b) shows the image restored by Fast-POCS with \(f-k\) thresholding. Both algorithms recover valuable parts of the initial reflectivity model. On the other hand, applying the seislet transform increases the convergence rate of the whole framework and decreases the number of iterations needed to match the data, as seen in Figure 3(a), where POCS is applied with seislet thresholding. In addition, the algorithm that runs fastest is Fast-POCS (FPOCS) with seislet thresholding, shown in Figure 3(b). In general, all the algorithms work well without any issues; however, the last one is the fastest and is preferred in industry.
Figure 1: Sigmoid synthetic reflectivity model (after Claerbout, 2014) with about 45% missing traces.
## Conclusion
The FPOCS algorithm for applying Compressed Sensing to seismic data via a sparsity constraint in the seislet transform domain has great potential when implemented on field data. FPOCS obtains much faster convergence than conventional POCS, which can make the seislet-based POCS approach applicable in practice thanks to this gain in efficiency. This conclusion can guide us in choosing different iterative approaches according to the noise level in the data. CS based on the seislet transform obtains better data recovery results than \(f-k\)-transform-based algorithms because of a much sparser structure in the seislet transform domain. We have used synthetic data examples to demonstrate the advantages of the CS seislet-based FPOCS approach.
## Keywords
Compressed Sensing, Vast Irregularly-sampled seismic data, Fourier Transform, Seislet transform.
## Acknowledgment
The research is funded by: National Natural Science Foundation of China (41574098 & 41630964). We thank the colleagues and students within the SWPI laboratory for their weekly discussions. Special thanks to the creators of the Madagascar software and to Prof. Dr. Jeffery Shragge (Center for Wave Phenomena, CSM, CO, USA) for his valuable discussion and technical assistance.
|
2306.13060 | Impact of recent MINERvA measurement of the antineutrino-proton
scattering cross-section on the generalized parton distributions | We investigate the impact of the new measurement of the antineutrino-proton
scattering cross-section from the MINERvA Collaboration on generalized parton
distributions (GPDs), particularly the polarized GPDs denoted as
$\widetilde{H}^q$. To achieve this, we perform some QCD analyses of the MINERvA
data, in addition to all available data of the proton's axial form factors. We
demonstrate that MINERvA data lead to consistent results with other related
experimental data, confirming the universality of GPDs. Our results indicate
that MINERvA data can impose new constraints on GPDs, particularly on
$\widetilde{H}^q$. Our predictions for the proton's axial charge radius, WACS
cross-section, and axial form factor show good consistency with those of other
studies and measurements. This leads us to conclude that the result of a more
comprehensive analysis, considering all related experimental data, is not only
reasonable but also more reliable, even in light of existing tensions among the
data. The present study can be considered as a guideline for performing a new
and comprehensive QCD global analysis of GPDs including the MINERvA
measurements like that presented in Phys. Rev. D \textbf{107}, 096005 (2023). | Fatemeh Irani, Muhammad Goharipour, Hadi Hashamipour, K. Azizi | 2023-06-22T17:27:04Z | http://arxiv.org/abs/2306.13060v3 | # New insight on the nucleon structure from recent MINERvA measurement
###### Abstract
We investigate the impact of the new measurement of the antineutrino-proton scattering cross-section from the MINERvA Collaboration on the generalized parton distributions (GPDs), especially the polarized GPDs \(\widetilde{H}^{q}\). To this aim, we perform several QCD analyses of the MINERvA data in addition to all available data on the proton axial form factors (FFs) \(F_{A}\). We show that the MINERvA data are in good agreement with the other related experimental data, which in turn confirms the universality of GPDs. Our results indicate that the MINERvA data can place new constraints on GPDs, especially \(\widetilde{H}^{q}\). The present study can be considered a guideline for performing a new and comprehensive QCD global analysis of GPDs including the MINERvA measurements, like that presented in Phys. Rev. D **107**, 096005 (2023).
## I Introduction
One of the most practical and informative tools for probing the internal structure of hadrons is scattering, in which high-energy particles are scattered from composite objects such as nucleons. Depending on the energy scale of the process and on the incoming and outgoing particles, various kinds of information can be accessed concerning both the momentum and spin distributions of the partons, the constituents of the nucleons. For example, the vector form factors (FFs) of the nucleon, which can be interpreted as the Fourier transforms of its charge and magnetization distributions, are measured by analyzing the world electron scattering data [1]. The scattering of neutrinos from nucleons serves as a complementary tool, providing measurements of both the vector and axial-vector FFs of the nucleon [2; 3]. The axial-vector FF, in particular, characterizes the distribution of weak charge within the nucleon, highlighting nuanced differences from other scattering processes.
It is now well established that the various nucleon FFs can be defined as Mellin moments of nonperturbative objects, namely GPDs [4; 5; 6; 10], arising from light-cone correlators of quark and gluon fields [11; 12; 13; 14]. From another point of view, GPDs can be considered generalizations of the usual parton distribution functions (PDFs) [15], which are crucial at very high energies where the nucleon breaks up during the scattering or collision. To be more precise, GPDs contain more degrees of freedom: in addition to the longitudinal momentum fraction \(x\), they depend on the longitudinal momentum transfer \(\xi\), called the skewness, and on the momentum transfer squared \(t=-Q^{2}\). They reduce to PDFs in the so-called forward limit, where both \(\xi\) and \(t\) vanish. From the theoretical point of view, GPDs can be accessed in a wide range of hard exclusive processes, though some of these processes only provide information at zero skewness (for a brief review see Ref. [16] and references therein).
Several models have been used to extract information on GPDs from the related experimental data, such as the Reggeized spectator model [7] and conformal-moment-based models [8; 9] (see Ref. [10] for a review), as well as light-front approaches [17; 18; 19] and GPD models based on the double-distribution (DD) representation [20]. Although lattice QCD [21; 22; 23; 24] and its extension, large-
momentum effective theory [25; 26], provide a framework for determining GPDs or their moments, phenomenological approaches in which GPDs are determined through QCD analyses of the experimental data are of special importance [5; 10; 27; 28; 29; 30; 31]. For example, in Ref. [16], the authors have recently presented the most comprehensive determination of GPDs at \(\xi=0\) by performing a simultaneous analysis of all available experimental data on the nucleon electromagnetic FFs, the nucleon charge and magnetic radii, the proton axial FF, and the wide-angle Compton scattering (WACS) cross section. However, the significant tension observed between the WACS and the proton magnetic form factor (\(G_{M}^{p}\)) data at high \(-t\) values has yet to be explained. As a result, the authors proposed the need for either reassessing the experimental measurements of both WACS and \(G_{M}^{p}\) or refining the theoretical calculations of the WACS cross section. This clearly highlights the necessity for significant progress in both theoretical and experimental domains, along with ongoing efforts in phenomenological research.
Very recently, the MINERvA Collaboration has presented the first high-statistics measurement of the cross-section for muon antineutrino scattering from free protons, \(\bar{\nu}_{\mu}p\to\mu^{+}n\), as a function of \(Q^{2}\) on hydrogen [32], using the plastic scintillator target of the MINERvA experiment [33]. This process turns the muon antineutrino into the more massive positively charged muon \(\mu^{+}\) and the proton into a neutron, and therefore provides direct access to the nucleon transition axial form factor \(F_{A}\), which is also important for neutrino oscillation experiments. The special importance of this measurement is that it is free from nuclear theory corrections. In contrast, the previous measurements were performed using neutrino scattering off deuterium, \(\nu_{\mu}D\to\mu^{-}pp\), which requires theoretical assumptions about the Fermi motion of the bound nucleons as well as the nuclear wave function in order to extract \(F_{A}\). An intriguing question arises regarding the potential impact of the new MINERvA data on GPDs if they are incorporated into the analysis alongside existing data, and whether these data can offer fresh perspectives and enhance our understanding of the nucleon structure. The aim of the present study is to answer these questions by performing several QCD analyses of the MINERvA data in addition to the other available \(F_{A}\) data.
This paper is organized as follows: Sec. II reviews the theoretical formulas used to calculate the MINERvA cross-section and the phenomenological framework that we use to study the impact of the MINERvA data on GPDs. We also introduce the datasets considered in the present study in this section. In Sec. III, by performing several analyses and comparing the results obtained with each other as well as with the corresponding ones from Ref. [16], we study in detail the goodness of the fits and the impact of the MINERvA data on the extracted GPDs. We summarize our results and conclusions in Sec. IV.
## II Theoretical, phenomenological, and experimental requirements
In this section, we briefly introduce the theoretical, phenomenological, and experimental requirements of the present study. The main questions to be answered are: how is the antineutrino-proton scattering cross-section calculated theoretically? What phenomenological framework do we use to perform the QCD analyses and determine GPDs? And which experimental datasets are included in our analyses?
The free nucleon cross-section for the process \(\bar{\nu}_{\mu}p\to\mu^{+}n\) can be written as [2; 32]
\[\frac{d\sigma}{dQ^{2}}=\frac{M^{2}G_{F}^{2}\cos^{2}\theta_{c}}{8\pi E_{V}^{2} }\left[A(Q^{2})+B(Q^{2})\frac{(s-u)}{M^{2}}+C(Q^{2})\frac{(s-u)^{2}}{M^{4}} \right], \tag{1}\]
where the three parameters \(A\), \(B\), and \(C\) are defined as
\[A(Q^{2}) = \frac{m^{2}+Q^{2}}{4M^{2}}\left[\left(4+\frac{Q^{2}}{M^{2}}\right)\left|F_{A}\right|^{2}-\left(4-\frac{Q^{2}}{M^{2}}\right)\left|F_{V}^{1}\right|^{2}+\frac{Q^{2}}{M^{2}}\left(1-\frac{Q^{2}}{4M^{2}}\right)\left|\xi F_{V}^{2}\right|^{2}+\frac{4Q^{2}}{M^{2}}F_{V}^{1}\,\xi F_{V}^{2}\right],\] \[B(Q^{2}) = \frac{Q^{2}}{M^{2}}F_{A}\left(F_{V}^{1}+\xi F_{V}^{2}\right),\] \[C(Q^{2}) = \frac{1}{4}\left[\left|F_{A}\right|^{2}+\left|F_{V}^{1}\right|^{2}+\frac{Q^{2}}{4M^{2}}\left|\xi F_{V}^{2}\right|^{2}\right]. \tag{2}\]
In the above equations, \(G_{F}\), \(\theta_{c}\), and \(m\) are the Fermi coupling constant, the Cabibbo angle, and the charged lepton mass, respectively. The average nucleon mass \(M\) is calculated from the proton and
neutron masses as \(M=(M_{p}+M_{n})/2\). For the difference of the Mandelstam variables we have \((s-u)=4ME_{\nu}-m^{2}-Q^{2}\), where \(E_{\nu}\) is the neutrino energy, equal to 5.4 GeV according to the MINERvA paper [32]. The values of all the aforementioned constants are taken from the Particle Data Group [34] in the present study. As can be seen, the above cross-section involves three kinds of FFs: \(F_{A}\) and the two vector FFs \(F_{V}^{1}\) and \(\xi F_{V}^{2}\), which are in turn related to the Dirac and Pauli FFs of the proton and neutron,
\[F_{V}^{1}(Q^{2})=F_{1}^{p}(Q^{2})+F_{1}^{n}(Q^{2}),\] \[\xi F_{V}^{2}(Q^{2})=\mu_{p}F_{2}^{p}(Q^{2})-\mu_{n}F_{2}^{n}(Q^{2 }), \tag{3}\]
where \(\xi=\mu_{p}-\mu_{n}\) is the difference between the magnetic moments of the proton and neutron. On the other hand, \(F_{A}\), \(F_{1}^{p/n}\), and \(F_{2}^{p/n}\) can be calculated theoretically from three kinds of GPDs at zero skewness, namely \(\widetilde{H}(x,Q^{2})\), \(H(x,Q^{2})\), and \(E(x,Q^{2})\), respectively, by integration over \(x\) [5; 31]. The last two are unpolarized, while the \(\widetilde{H}\) GPDs are polarized. Another point to be noted is that only the valence GPDs \(H_{v}^{q}\) and \(E_{v}^{q}\), where \(q=u,d\) refers to the up and down quarks, contribute to the Dirac and Pauli FFs of the nucleon, \(F_{1}\) and \(F_{2}\) (neglecting the strange quark contribution because of its small magnitude), while \(F_{A}\) also contains sea quark (\(\bar{q}\)) contributions [28].
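For orientation, the short sketch below assembles Eqs. (1)-(2) numerically for given form-factor values; the dipole parameterizations, normalizations, and sign conventions used for \(F_{A}\), \(F_{V}^{1}\), and \(\xi F_{V}^{2}\) are placeholder assumptions for illustration only, not the GPD-based form factors of Ref. [16].

```python
import numpy as np

# approximate PDG-like constants, used here only for illustration
GF, cosc = 1.1663787e-5, 0.974          # Fermi constant (GeV^-2), cos(theta_c)
M, m_mu, Enu = 0.9389, 0.1057, 5.4      # nucleon mass, muon mass, E_nu (GeV)
HBARC2 = 3.894e-28                      # conversion GeV^-2 -> cm^2

def dsigma_dQ2(Q2, FA, FV1, xiFV2):
    """dsigma/dQ^2 of Eq. (1) with A, B, C of Eq. (2), for given form-factor values."""
    A = (m_mu**2 + Q2) / (4 * M**2) * (
        (4 + Q2 / M**2) * FA**2
        - (4 - Q2 / M**2) * FV1**2
        + Q2 / M**2 * (1 - Q2 / (4 * M**2)) * xiFV2**2
        + 4 * Q2 / M**2 * FV1 * xiFV2)
    B = Q2 / M**2 * FA * (FV1 + xiFV2)
    C = 0.25 * (FA**2 + FV1**2 + Q2 / (4 * M**2) * xiFV2**2)
    smu = 4 * M * Enu - m_mu**2 - Q2                      # (s - u)
    pref = M**2 * GF**2 * cosc**2 / (8 * np.pi * Enu**2)
    return pref * (A + B * smu / M**2 + C * (smu / M**2)**2) * HBARC2  # cm^2/GeV^2

# placeholder dipole form factors (normalizations and signs are assumptions)
FA0, FV10, xiFV20, MA, MV = 1.27, 1.0, 3.7, 1.0, 0.84
for Q2 in (0.05, 0.5, 1.0, 3.0):
    ffs = (FA0 / (1 + Q2 / MA**2)**2,
           FV10 / (1 + Q2 / MV**2)**2,
           xiFV20 / (1 + Q2 / MV**2)**2)
    print(Q2, dsigma_dQ2(Q2, *ffs))
```

In the actual analysis these inputs are replaced by the form factors computed from the GPDs, as described below.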
According to the above explanations, in order to calculate the MINERvA cross-section of Eq. (1) theoretically, one needs to have all three kinds of GPDs \(\widetilde{H}\), \(H\), and \(E\) at desired values of \(x\) and \(Q^{2}\). This is possible thanks to the recent analysis performed in Ref. [16] where the authors have determined simultaneously \(\widetilde{H}\), \(H\), and \(E\) at \(\xi=0\), through a QCD analysis of a wide range of the related experimental data. Hence, it is currently intriguing to theoretically compute Eq.(1) and compare the outcomes with the MINERvA measurements[32]. But different sets of GPDs have been presented in Ref. [16] depending on what experimental datasets are included in the analysis or under what conditions. Therefore, we calculate Eq. (1) using four sets of GPDs which have been called Set 9, Set 10, Set 11, and Set 12. Firstly, let us briefly introduce these sets of GPDs:
* Set 9: this set has been obtained by analyzing the AMT07 [35] and Mainz [36] data for the proton magnetic FF \(G_{M}^{p}\), the YAHL18 data [1] for the ratio of the proton electric and magnetic FFs \(R^{p}=\mu_{p}G_{E}^{p}/G_{M}^{p}\) as well as the neutron electric and magnetic FFs \(G_{E}^{n}\) and \(G_{M}^{n}/\mu_{n}G_{D}\), the data on the charge and magnetic radii of the nucleons [37], a reduced set of the world proton axial FF \(F_{A}\) data (see Ref. [16] for the experimental data references and the methodology employed for selecting the data points), and finally the WACS cross-section data.
* Set 10: this set has been obtained by incorporating the CLAS Collaboration measurements of \(F_{A}\) at higher values of \(-t\)[38] to the existing data utilized in Set 9.
* Set 11: this set has been obtained by analyzing all data that were used for Set 10 and considering a normalization factor \(\mathcal{N}_{\rm CL}=1.67\) for the CLAS data.
* Set 12: this set has been obtained by analyzing all data that were used for Set 10 except the AMT07 and Mainz data of \(G_{M}^{p}\), and considering a normalization factor \(\mathcal{N}_{\rm CL}=2.16\) for the CLAS data.
Figure 1 shows a comparison between the theoretical calculation of Eq. (1) using the aforementioned sets of GPDs and the corresponding experimental data from the MINERvA measurements [32]. Additionally, the ratios of these predictions to the data have been plotted in the lower panel to examine the differences more closely across various \(Q^{2}\) values. As can be seen, Set 10, which has been obtained by including all data in the analysis and considering the original CLAS data, leads to the best description of the data, while Set 9, which does not contain the CLAS data, gives the worst result. This indicates the good consistency between the \(F_{A}\) CLAS data and the MINERvA measurements, as expected. Set 11, which addresses the tension between the CLAS data, the world \(F_{A}\) data, and the WACS measurements by introducing a normalization factor for the CLAS data, offers a compelling description that ranks second in performance after Set 10. It is worth noting that these results provide compelling evidence for the remarkable universality property of GPDs.
Now the question is how the new MINERvA data would affect GPDs if they were also included in the analysis. Although the straightforward way to answer this is to perform a new comprehensive
analysis like that of Ref. [16], including all related data in addition to the MINERvA data, there is also an easier way that brings us close to the answer. The idea is to conduct a concise QCD analysis of the MINERvA data [32], the reduced set of world \(F_{A}\) data introduced in Ref. [16], and the CLAS \(F_{A}\) data [38]. Such an analysis can be performed by taking GPDs \(H_{v}^{q}\) and \(E_{v}^{q}\) from [16] and parameterizing only GPDs \(\widetilde{H}^{q}\) (both valence and sea quark contributions). This approach is viable because of the significant contribution of \(\widetilde{H}\) to the cross-section of Eq. (1), as well as its exclusive involvement in \(F_{A}\). To be more explicit, a simple calculation demonstrates that the parameter \(C(Q^{2})\) is absolutely dominant compared with \(A(Q^{2})\) and \(B(Q^{2})\). Moreover, within \(C(Q^{2})\), the contribution of \(F_{A}\), which is related to \(\widetilde{H}\), is remarkably larger than those coming from \(F_{V}^{1}\) and \(\xi F_{V}^{2}\), which are related to GPDs \(H\) and \(E\), respectively. Hence, the MINERvA data impose the strongest constraints on the GPDs \(\widetilde{H}^{q}\).
To conduct the aforementioned analysis, we adopt the phenomenological framework employed in Ref. [16], which facilitates logical comparisons between different results. In this way, we parameterize GPDs \(\widetilde{H}\) at \(\xi=0\) using the following ansatz
\[\widetilde{H}_{v}^{q}(x,t,\mu^{2})=\Delta q_{v}(x,\mu^{2})\exp \Bigl{[}t\widetilde{f}_{v}^{q}(x)\Bigr{]},\] \[\widetilde{H}^{q}(x,t,\mu^{2})=\Delta\bar{q}(x,\mu^{2})\exp \Bigl{[}t\widetilde{f}^{q}(x)\Bigr{]}, \tag{4}\]
as proposed in Refs. [31; 39]. In the above equations, \(\Delta q_{v}(x,\mu^{2})\) and \(\Delta\bar{q}(x,\mu^{2})\) are the polarized PDFs for the valence and sea quarks, respectively, which are taken from the NNPDF analysis [40] at next-to-leading order (NLO) and scale \(\mu=2\) GeV, using the LHAPDF package [41]. Note that GPDs reduce to PDFs in the forward limit (\(t=0\) and \(\xi=0\)). \(\widetilde{f}_{v}^{q}(x)\) and \(\widetilde{f}^{q}(x)\) are the corresponding profile functions and can have the general form
\[\mathcal{F}(x)=\alpha^{\prime}(1-x)^{3}\log\frac{1}{x}+B(1-x)^{3 }+Ax(1-x)^{2}. \tag{5}\]
Here, the parameters \(\alpha^{\prime}\), \(B\), and \(A\) represent unknown free parameters associated with each quark flavor. They need to be determined through a standard \(\chi^{2}\) analysis of the experimental data. The minimization procedure is performed using the CERN program library MINUIT [42]. In order to find the best values of the parameters, we utilize the parametrization scan procedure as described in Refs. [16; 30]. In this way, for the \(\alpha^{\prime}\) parameters of the profile functions \(\widetilde{f}_{v}^{u}(x)\) and \(\widetilde{f}_{v}^{d}(x)\), we use the values obtained in [16] which are equal to the corresponding ones of the unpolarized profile functions \(f_{v}^{u}(x)\) and \(f_{v}^{d}(x)\).
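As a schematic illustration of how Eqs. (4) and (5) are evaluated in practice, the sketch below builds the valence polarized GPD from a forward distribution and a profile function; the toy forward PDF and the specific parameter values are illustrative assumptions standing in for the NNPDF input and for fitted numbers of the order of those in Table 1.

```python
import numpy as np

def profile(x, alpha_p, B, A):
    """Profile function of Eq. (5)."""
    return alpha_p * (1 - x)**3 * np.log(1.0 / x) + B * (1 - x)**3 + A * x * (1 - x)**2

def Htilde_v(x, t, delta_qv, alpha_p, B, A):
    """Valence ansatz of Eq. (4): forward polarized PDF times exp[t * profile(x)]."""
    return delta_qv(x) * np.exp(t * profile(x, alpha_p, B, A))

# toy forward polarized valence distribution (assumption, stands in for NNPDF input)
toy_delta_uv = lambda x: 1.3 * x**0.7 * (1 - x)**3

x = np.linspace(1e-3, 0.999, 400)
for t in (0.0, -1.0, -3.0, -6.0):        # GeV^2, the values shown in Figs. 2-5
    moment = np.trapz(Htilde_v(x, t, toy_delta_uv, alpha_p=1.0, B=-1.3, A=9.2), x)
    print(t, moment)
```

The lowest moment computed this way illustrates how the \(t\)-dependence generated by the profile function suppresses the distribution as \(-t\) grows, which is the behavior discussed for Figs. 2 and 3 below.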
Figure 1: A comparison between the theoretical calculations of Eq. (1) using Set 9, Set 10, Set 11, and Set 12 of GPDs taken from Ref. [16] and the corresponding experimental data from the MINERvA measurements [32].
A standard Hessian approach [43] is also used to calculate the uncertainties of the GPDs as well as of the observables.
As mentioned before, in the present study we include not only the MINERvA data [32] but also a comprehensive set of \(F_{A}\) data that directly relates to the polarized GPDs \(\widetilde{H}\). This set encompasses a reduced collection of older measurements from various sources as described in [16], as well as the MiniBooNE data obtained from neutrino and antineutrino charged-current quasielastic scattering [44], and the CLAS measurements at large values of \(Q^{2}\) [38]. The total number of data points included in the analysis is 54 (see Table 2 for the list of datasets and the number of data points each set comprises). As explained before, we take GPDs \(H_{v}^{q}\) and \(E_{v}^{q}\) from [16] to calculate the MINERvA cross-section of Eq. (1) theoretically. To explore the consistency between different sets of GPDs and the MINERvA data, we conduct different analyses by systematically varying the GPDs set. Specifically, we perform four distinct analyses utilizing the GPDs of Set 9, Set 10, Set 11, and Set 12, as previously introduced. For each analysis, we obtain a corresponding modified set of polarized GPDs, denoted as Set 9p, Set 10p, Set 11p, and Set 12p, respectively. By comparing the resulting \(\chi^{2}\) values, we aim to identify the GPDs set that exhibits greater agreement with the MINERvA data and yields a smaller \(\chi^{2}\).
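The fits themselves are carried out with MINUIT using a parametrization scan and Hessian uncertainty propagation; purely as an illustration of the underlying \(\chi^{2}\) minimization over the four free profile parameters, the sketch below fits a placeholder theory function to synthetic data with scipy. The data, theory function, and parameter values here are invented for the example and carry no physical meaning.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
Q2 = np.linspace(0.05, 5.0, 54)                     # 54 points, as in the real fit

def theory(p, Q2):
    # placeholder standing in for F_A / the MINERvA cross-section built from the GPDs
    A_u, B_u, A_d, B_d = p
    return (A_u + B_u * Q2) * np.exp(-Q2) + (A_d + B_d * Q2) * np.exp(-2.0 * Q2)

truth = np.array([9.2, -1.3, 11.2, -1.6])           # illustrative parameter values
err = np.full_like(Q2, 0.05)
data = theory(truth, Q2) + rng.normal(0.0, err)

def chi2(p):
    return np.sum(((data - theory(p, Q2)) / err) ** 2)

res = minimize(chi2, x0=np.zeros(4), method="Nelder-Mead")
print(res.x)                                         # recovered parameters
print(chi2(res.x) / (Q2.size - 4))                   # chi^2 per degree of freedom
```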
## III Results
In this section, we present the results obtained from the \(\chi^{2}\) analyses of the MINERvA data in the framework described in the previous section. In particular, we investigate the quality of the fits, the possible tension between different datasets, and the impact of the MINERvA data on the extracted GPDs. As mentioned before, we perform four analyses, namely Set 9p, Set 10p, Set 11p, and Set 12p, using different sets of \(H_{v}^{q}\) and \(E_{v}^{q}\) GPDs from [16].
Following the parametrization scan procedure described in Ref. [30], one finds a set of GPDs with \(\widetilde{f}^{q}(x)=\widetilde{f}_{v}^{q}(x)\). In fact, releasing the parameters of the sea quark profile functions does not lead to any improvement in the fit quality. Note again that we take the parameters \(\alpha_{u_{v}}^{\prime}\) and \(\alpha_{d_{v}}^{\prime}\) in Eq. (5) from Ref. [16], where they are equal to the corresponding ones of the unpolarized profile functions for each set. By treating these two parameters as free, we did not observe a significant decrease in the value of \(\chi^{2}\). In this way, the only parameters that contribute to the parametrization scan are the \(A\) and \(B\) parameters of the valence profile functions \(\widetilde{f}_{v}^{u}(x)\) and \(\widetilde{f}_{v}^{d}(x)\) (four free parameters). Table 1 contains the values of the optimum parameters obtained from the four analyses described above. According to this table, for the up quark distribution the biggest differences are seen in the parameter \(A\), which controls the behavior at large \(Q^{2}\), while for the down quark distribution the differences are seen in both the \(A\) and \(B\) parameters.
Table 2 contains the results obtained for the \(\chi^{2}\) values. The datasets used in the analysis, with their references, are presented in the first column of the table. The second column contains the ranges of \(-t\) covered by the data. Note that the MINERvA data cover a wide range of \(-t\) compared with the other datasets, which indicates their importance for constraining GPDs, especially \(\widetilde{H}\). For each dataset, we give the value of \(\chi^{2}\) per number of data points, \(\chi^{2}/N_{\text{pts}}\), which can be used to check the quality of the fit. The last row of the table comprises the values of the total \(\chi^{2}\) divided by the number of degrees of freedom, \(\chi^{2}/\text{d.o.f.}\), for each analysis separately.
According to Table 2, Set 9p has the largest value of the \(\chi^{2}\) which is in agreement with the results of Fig. 1, where Set 9 has the worst prediction for the MINERvA cross-section. This indicates the importance of the CLAS data for constraining GPDs \(\widetilde{H}^{q}\) at larger values of \(-t\). Actually, by removing the CLAS data from the analysis, the WACS data lead to an invalid estimate of \(\widetilde{H}\) at large \(-t\) which in
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Distribution & Parameter & Set 9p & Set 10p & Set 11p & Set 12p \\ \hline \hline \(\widetilde{f}_{v}^{u}(x)\) & \(A\) & \(9.201\pm 0.870\) & \(9.652\pm 1.036\) & \(4.098\pm 2.714\) & \(4.200\pm 1.138\) \\ & \(B\) & \(-1.328\pm 0.159\) & \(-1.249\pm 0.188\) & \(0.052\pm 0.474\) & \(-1.361\pm 0.269\) \\ \hline \(\widetilde{f}_{v}^{d}(x)\) & \(A\) & \(11.167\pm 1.439\) & \(-0.145\pm 0.610\) & \(6.339\pm 5.323\) & \(14.601\pm 10.418\) \\ & \(B\) & \(-1.602\pm 0.075\) & \(0.546\pm 0.320\) & \(-1.335\pm 0.491\) & \(0.106\pm 1.472\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: A comparison between the values of the optimum parameters obtained from the analyses performed in this section, namely Sets 9p, Set 10p, Set 11p, and Set 12p. See Sec. III for more details.
turn affects the results for GPDs \(H\) and \(E\) (see Ref. [16]). This leads to a bad description of MINERvA data for Set 9 in Fig. 1 and also a bad \(\chi^{2}\) for Set 9p in Table 2 (especially due to the large \(\chi^{2}\) of the MiniBooNE data) even after releasing \(\widetilde{H}\) and performing the analysis of the related data again. As can be seen, the best result belongs to the analyses of Set 11p where we use GPDs \(H\) and \(E\) of Set 11 from [16] that have been obtained by including the CLAS data in the analysis and considering a normalization factor for them. Comparing to Set 10 in Fig. 1, it is evident that the inclusion of the MINERvA data in the analysis and the subsequent determination of the \(\widetilde{H}\) GPDs (Set 10p) no longer yields the best result. The reason is that Set 10p has been obtained without considering a normalization factor for the CLAS data [16], leading to unresolved tension between these data and other \(F_{A}\) measurements, although it leads to a smaller \(\chi^{2}\) for the MINERvA data compared with Set 11p. It is worth noting that Set 12p demonstrates the highest \(\chi^{2}\) value for the MINERvA data compared to other sets, although its overall \(\chi^{2}\) is relatively close to that of Set 11p. This observation suggests that the inclusion of the MINERvA data does not strongly favor the exclusion of the \(G_{M}^{p}\) data from the analysis, unlike the WACS data, as demonstrated in Ref. [16].
The results obtained for the polarized GPD \(x\widetilde{H}_{v}^{u}(x)\), along with their corresponding uncertainties, are compared at four different values of \(t\), namely \(t=0,-1,-3,-6\) GeV\({}^{2}\), as illustrated in Fig. 2. Based on the findings depicted in the figure, Set 9p and Set 10p exhibit similar behavior across all values of \(-t\). Moreover, these sets display a higher degree of suppression as \(-t\) increases, indicating more pronounced contributions of GPDs \(H_{v}^{u}\) and \(E_{v}^{u}\) to the MINERvA cross-section compared with the other sets. On the other hand, Set 12p has the largest distribution compared with the other sets. This shows that the MINERvA data compensate for the smallness of GPDs \(H_{v}^{u}\) and \(E_{v}^{u}\) of Set 12 (see Figs. 20 and 22 of [16]) by enlarging \(\widetilde{H}_{v}^{u}\). Set 11p, which has been obtained by analyzing all data and considering a normalization factor for the CLAS data, shows more moderate behavior in comparison with Set 9p and Set 10p.
Figure 3 shows the same results as Fig. 2, but for polarized GPDs \(x\widetilde{H}_{v}^{d}(x)\). In this case, Set 12p exhibits the smallest distribution among all sets, indicating a strong suppression as \(-t\) increases. This behavior can be attributed to the same reason mentioned earlier, which explains the enhancement of Set 12p in Fig. 2. It is worth noting that \(x\widetilde{H}_{v}^{d}(x)\) has negative values in \(x\), contributing to its different behavior here. Although the other sets have considerable magnitudes in comparison with Set 12p (and also in comparison with the corresponding up quark distribution in Fig. 2), they display notably different behaviors. This observation suggests that the constraints from data on \(x\widetilde{H}_{v}^{d}(x)\) are relatively weaker in comparison to those on \(x\widetilde{H}_{v}^{u}(x)\) overall.
In order to investigate the impact of the MINERvA data on the extracted GPDs, we compare the results obtained in the present study with the corresponding ones from Ref. [16]. To this aim, we consider Set 11p and Set 12p and compare them with Set 11 and Set 12. The results are shown in Figs. 4 and 5 for \(x\widetilde{H}_{v}^{u}(x)\) and \(x\widetilde{H}_{v}^{d}(x)\), respectively. In the case of the up valence quark distribution (Fig. 4), the MINERvA data considerably affect Set 12, making it smaller and pulling it toward smaller values of \(x\). However, Set 11 is not much affected, so that Set 11p and Set 11 are in good agreement at all values of \(-t\). The situation is the same for the down valence quark distribution in Fig. 5, except that Set 12p is shifted toward larger values of \(x\). Since GPDs \(H\) and \(E\) of Set 11 have been obtained by considering all the related data in the analysis (see Ref. [16]), the good consistency between Set 11 and Set 11p indicates that the MINERvA data are compatible with the bulk of the experimental data. It also confirms the universality of GPDs. On the other hand, since GPDs \(H\) and \(E\) of Set 12 have been obtained by removing the \(G_{M}^{p}\) data from the analysis, the considerable difference between Set 12
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Experiment & \(-t\) (GeV\({}^{2}\)) & \multicolumn{4}{c}{\(\chi^{2}/N_{\rm pts}\)} \\ & & Set 9p & Set 10p & Set 11p & Set 12p \\ \hline \hline World \(F_{A}\) [16] & \(0.07-1.84\) & 74.18/20 & 84.66/20 & 82.21/20 & 73.96/20 \\ MiniBooNE [44] & \(0.025-0.9\) & 115.46/14 & 91.14/14 & 56.19/14 & 58.40/14 \\ CLAS [38] & \(2.12-4.16\) & 25.27/5 & 16.86/5 & 4.35/5 & 13.99/5 \\ MINERvA [32] & \(0.0188-5\) & 28.59/15 & 29.49/15 & 61.42/15 & 72.47/15 \\ \hline Total \(\chi^{2}\)/d.o.f. & & 243.50/50 & 222.15/50 & 206.87/50 & 218.82/50 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The results of four analyses performed using different sets of GPDs taken from Ref. [16]. See Sec. II for more details.
and Set 12p in Figs. 4 and 5 indicates that the MINERvA and WACS data may put somewhat different constraints on GPDs \(H\) and \(E\). As previously mentioned, a significant tension exists between the WACS and \(G_{M}^{p}\) data, making it challenging to obtain a satisfactory description of the WACS data without excluding the \(G_{M}^{p}\) data from the analysis. However, the MINERvA data exhibit better compatibility with the WACS data. Overall, in order to investigate the impact of the MINERvA data on GPDs more
Figure 3: Same as Fig. 2, but for the polarized GPDs \(x\widetilde{H}_{v}^{d}(x)\).
precisely, one must perform a comprehensive QCD analysis like that of Ref. [16], i.e., by considering all related data and releasing all three kinds of GPDs. It may also be important to readjust the normalization factor considered for the CLAS data. This aspect will be a priority for our future endeavors.
Figure 4: A comparison between the results of Set 11p and Set 12p from the present study and Set 11 and Set 12 from Ref. [16] for the polarized GPDs \(x\widetilde{H}_{v}^{u}(x)\) at four \(t\) values shown in panels (a) \(t=0\), (b) \(t=-1\), (c) \(t=-3\), and (d) \(t=-6\) GeV\({}^{2}\).
Now it is also of interest to compare the theoretical predictions of the MINERvA cross-section, calculated using the different sets of GPDs obtained in this study, with the corresponding data. Figure 6 shows a comparison between the theoretical predictions of Set 9p, Set 10p, Set 11p, and Set 12p of GPDs and the MINERvA measurements. As can be seen, the differences between the sets become important at \(Q^{2}\gtrsim 0.7\,\mathrm{GeV}^{2}\). Although Set 9p and Set 10p exhibit a better description of the MINERvA data, Set 11p has the best total \(\chi^{2}\) according to Table 2. It is possible that readjusting the normalization factor of the CLAS data, in addition to releasing all three kinds of GPDs simultaneously in a comprehensive global analysis that also includes the electromagnetic FFs, the nucleon radii, and the WACS cross-section data, would lead to a better description of the MINERvA data by Set 11p. According to the results obtained, it is evident that the MINERvA data can significantly constrain the GPDs, particularly the polarized GPDs \(\widetilde{H}^{q}\). However, to fully address the challenging tension between the WACS and \(G_{M}^{p}\) data, as discussed in Ref. [16], a comprehensive global analysis incorporating all relevant data is necessary.
## IV Summary and conclusion
In this study, we investigated the impact of the new measurement of the antineutrino-proton scattering cross-section [32] conducted by the MINERvA Collaboration on GPDs, especially the polarized GPDs \(\widetilde{H}^{q}\). The special importance of this measurement is that it is free from nuclear theory corrections. In pursuit of this objective, we adopted the phenomenological framework introduced in Ref. [16]. Utilizing the unpolarized GPDs \(H_{v}^{q}\) and \(E_{v}^{q}\) from the different sets presented in [16], we obtained different sets of the polarized GPDs \(\widetilde{H}^{q}\) with their uncertainties through QCD analyses of the MINERvA data alongside all available proton axial FF (\(F_{A}\)) data. We found that the best result belongs to the analysis called Set 11p, in which we used Set 11 from [16], which has been obtained by including the \(F_{A}\) CLAS data in the analysis and considering a normalization factor for them. Our results indicate that the MINERvA data are compatible with the bulk of the experimental data, which in turn confirms the universality of GPDs. Although the MINERvA data put new constraints on GPDs, especially the polarized GPDs \(\widetilde{H}^{q}\), they cannot firmly resolve the strong tension between the WACS and \(G_{M}^{p}\) data reported in Ref. [16]. We emphasize that in order to investigate the impact of MINERvA data
Figure 6: A comparison between the theoretical calculations of Eq. (1) using Set 9p, Set 10p, Set 11p, and Set 12p of GPDs obtained in this study and the corresponding experimental data from the MINERvA measurements [32].
on GPDs more precisely, one must perform a comprehensive QCD analysis like that done in Ref. [16], i.e., by considering all related data, releasing all three kinds of GPDs simultaneously, and readjusting the normalization factor considered for the CLAS data. This aspect will be further explored in our future research.
## Acknowledgements
H. Hashamipour thanks the School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), for financial support provided for this research. K. Azizi is thankful to Iran Science Elites Federation (Saramadan) for the partial financial support provided under the grant number ISEF/M/401385.
## Note added
The GPDs extracted in this study with their uncertainties in any desired values of \(x\) and \(t\) are available upon request.
|
2307.13349 | In situ electron paramagnetic resonance spectroscopy using single
nanodiamond sensors | An ultimate goal of electron paramagnetic resonance (EPR) spectroscopy is to
analyze molecular dynamics in place where it occurs, such as in a living cell.
The nanodiamond (ND) hosting nitrogen-vacancy (NV) centers will be a promising
EPR sensor to achieve this goal. However, ND-based EPR spectroscopy remains
elusive, due to the challenge of controlling NV centers without well-defined
orientations inside a flexible ND. Here, we show a generalized zero-field EPR
technique with spectra robust to the sensor's orientation. The key is applying
an amplitude modulation on the control field, which generates a series of
equidistant Floquet states with energy splitting being the
orientation-independent modulation frequency. We acquire the zero-field EPR
spectrum of vanadyl ions in aqueous glycerol solution with embedded single NDs,
paving the way towards \emph{in vivo} EPR. | Zhuoyang Qin, Zhecheng Wang, Fei Kong, Jia Su, Zhehua Huang, Pengju Zhao, Sanyou Chen, Qi Zhang, Fazhan Shi, Jiangfeng Du | 2023-07-25T09:07:56Z | http://arxiv.org/abs/2307.13349v1 | # In situ electron paramagnetic resonance spectroscopy using single nanodiamond sensors
###### Abstract
An ultimate goal of electron paramagnetic resonance (EPR) spectroscopy is to analyze molecular dynamics in place where it occurs, such as in a living cell. The nanodiamond (ND) hosting nitrogen-vacancy (NV) centers will be a promising EPR sensor to achieve this goal. However, ND-based EPR spectroscopy remains elusive, due to the challenge of controlling NV centers without well-defined orientations inside a flexible ND. Here, we show a generalized zero-field EPR technique with spectra robust to the sensor's orientation. The key is applying an amplitude modulation on the control field, which generates a series of equidistant Floquet states with energy splitting being the orientation-independent modulation frequency. We acquire the zero-field EPR spectrum of vanadyl ions in aqueous glycerol solution with embedded single NDs, paving the way towards _in vivo_ EPR.
Electron paramagnetic resonance (EPR) spectroscopy is a well-established technique for analyzing molecules containing unpaired electrons. Among its widespread applications in diverse scientific fields [1, 2, 3], a notable one is the study of dynamic processes, such as monitoring redox reactions [4] and unraveling molecular motions [5]. Performing such studies in living cells is an active research topic [6, 7, 8], and an ultimate goal is pushing EPR detection to the single-cell level. Towards this goal, an essential but unmet precondition is the development of suitable EPR sensors with both high spin sensitivity and good biocompatibility. The conventional EPR sensor is a macroscopic resonant microwave cavity with limited spin sensitivity. In the past decades, numerous microscopic EPR sensors have been developed to improve the spin sensitivity, including magnetic resonance force microscopy [9], scanning tunneling microscopy [10], and superconducting microresonators [11, 12], but they require cryogenic temperatures and high-vacuum environments. Alternatively, nitrogen-vacancy (NV) centers in diamond can also serve as EPR sensors with single-spin sensitivity [13, 14], even at ambient conditions [15, 16, 17]. Furthermore, the diamond hosting NV centers can be shrunk to nanometer size, making it a flexible _in situ_ sensor, enabling, for example, magnetometry inside polymers [18], relaxometry in lipid bilayers [19], and thermometry in living cells [20]. However, using this flexible nanodiamond (ND) as an EPR sensor remains challenging.
The flexibility of NDs, on the other hand, also brings uncertainty in their orientations, which prevents the hosted NV centers from measuring EPR spectra. This is because the NV center has an anisotropic response to magnetic fields, with a principal axis along the N-V axis [21]. In the presence of an external static or oscillating magnetic field, random tumbling of the ND will lead to variations in the transition frequency or strength of the hosted NV center, and thus defeats the current EPR detection schemes. For instance, both the double electron-electron resonance (DEER) [15; 16; 17; 22; 23] and the cross-polarization schemes [24; 25] require precise quantum control of the NV spin states. To overcome this challenge, an active approach is either to manipulate the ND orientation, for example by using optical tweezers [26], to track the ND orientation [27] and then adjust the control field, or to optimize the control pulses [28]. Besides, another passive but technically simpler way is to develop detection schemes that are naturally insensitive to the orientation. An illuminating example is zero-field EPR [29; 30], where the resonance frequency does not depend on the spin target's orientation, although it still depends on the NV sensor's orientation.
Here, we generalize the robustness of the zero-field EPR technique to not only the target's but also the sensor's orientation. By applying an extra amplitude modulation on the control field, the modulation frequency, rather than the field strength, determines the resonance condition. We experimentally demonstrate this by performing EPR measurements on P1 centers with different driving field strengths, showing that the peak position is indeed field-strength independent. To further show the robustness of our method, we immerse the ND in an aqueous glycerol solution of vanadyl sulfate, and then use the hosted NV centers to detect the vanadyl ions. Although both the ND and the ions are tumbling, we can still acquire a clear EPR spectrum. Our results show the possibility of using flexible NDs as EPR sensors to enable _in
situ_ and even _in vivo_ EPR measurements.
## Results
### Scheme of zero-field ND-EPR
Consider a ND placed inside a sample containing paramagnetic molecules, all of which are tumbling randomly (Fig. 1a); the task is to use the NV center inside the ND to measure the EPR spectrum of the molecules. In the absence of an external static magnetic field, the energy levels of both the sensor and the target are independent of their orientations [29]. The Hamiltonian of this sensor-target system can be written as
\[H_{0}=DS_{z}^{2}+d_{ij}S_{i}T_{j}+\omega T_{z}, \tag{1}\]
where \(\mathbf{S}\) and \(\mathbf{T}\) are the spin operators of the NV sensor and the spin target, respectively, \(d_{ij}\) (\(i,j=x,y,z\)) is the dipole-dipole coupling between them, \(D=2.87\) GHz is the zero-field splitting of the NV sensor, and \(\omega\) is the energy splitting of the target spin induced by its intrinsic interactions.
There usually exists a large energy mismatch between the sensor and the target (\(|D-\omega|\gg d\)), and thus the dipole-dipole coupling between them has a negligible influence on the sensor. A driving field can eliminate this energy mismatch by bringing the NV center from the lab frame to a dressed frame (Fig. 1b). A direct way is to apply a resonant microwave (MW) field \(B_{1}\cos Dt\) (Fig. 1c) [30], which gives
\[H_{\mathrm{I}}=\frac{\Omega}{2}S_{x}+d_{zj}S_{z}T_{j}+\omega T_{z} \tag{2}\]
in the interaction picture, where \(\Omega=\gamma_{\mathrm{NV}}B_{1}\sin\theta\) is the Rabi frequency, \(\gamma_{\mathrm{NV}}=-28.03\) GHz/T is the gyromagnetic ratio of the NV electron spin, and \(\theta\) is the angle between the MW magnetic field \(\mathbf{B_{1}}\) and the N-V axis. The energy gap of the sensor becomes \(\Omega/2\). By sweeping \(\Omega\), a resonant cross-relaxation process will happen when \(\Omega/2=\omega\), resulting in a reduction of the photoluminescence (PL) rate of the NV center. Experimentally, the sweep is performed on \(B_{1}\) rather than \(\Omega\), so the actual resonance condition is
\[\gamma_{\mathrm{NV}}B_{1}\sin\theta=\omega. \tag{3}\]
Figure 1d gives a simulation of the dependence of the spectra on \(\theta\). One can see that the peak position varies dramatically when \(\theta\) deviates from \(\pi/2\). For a randomly tumbling ND, the spectrum will therefore show line broadening and become asymmetric. Note that the two \(\pi/2\) pulses in Fig. 1c will also deteriorate, resulting in a weaker signal.
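The orientation-induced broadening of the direct-drive scheme can be reproduced with a few lines of numerics: for each angle \(\theta\) the resonance sits where the effective drive matches the target splitting, and averaging over random ND orientations smears and skews the line. The Lorentzian lineshape, the linewidth value, and the neglect of the \(\theta\)-dependent signal amplitude are simplifying assumptions of this sketch.

```python
import numpy as np

omega, gamma2 = 130.0, 5.0           # target splitting and assumed linewidth (MHz)
drive = np.linspace(80, 500, 1000)   # swept drive amplitude, in frequency units

def line(drive, theta):
    # Lorentzian centred where the effective drive (drive * sin(theta)) matches omega;
    # the theta-dependence of the contrast is ignored here for simplicity
    x0 = omega / np.sin(theta)
    return gamma2**2 / (gamma2**2 + (drive - x0)**2)

# random ND orientation: cos(theta) uniform in [-1, 1]
rng = np.random.default_rng(0)
thetas = np.arccos(1.0 - 2.0 * rng.random(2000))
averaged = np.mean([line(drive, th) for th in thetas], axis=0)
print(drive[np.argmax(averaged)], averaged.max())   # broadened, asymmetric peak (cf. Fig. 1d)
```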
To address this issue, we apply a periodic amplitude modulation to the continuous MW drive, of the form \(B_{1}\cos ft\cos Dt\) (Fig. 1e); the Hamiltonian of Eq. 2 then turns into (Supplementary Note 1)
\[H_{\mathrm{II}}=\sum_{m=\mathrm{odd}}2J_{m}(\frac{\kappa}{2})d_{zj}\sin mftS _{y}T_{j}+\omega T_{z}, \tag{4}\]
where \(J_{m}\) is the \(m\)-th order Bessel function of the first kind, and \(\kappa=\Omega/f\) is the relative driving index. This periodic modulation creates a series of Floquet sidebands with a splitting determined by the modulation frequency \(f\). In the regime \(\Omega\ll f\), only the first-order term matters. The dipolar coupling then induces an additional longitudinal relaxation of the NV sensor at a rate of (Supplementary Note 1)
\[\Gamma_{1}^{{}^{\prime}}=\frac{3\kappa^{2}}{64}\frac{(d_{zx}^{2}+d_{zy}^{2}) \Gamma_{2}}{\Gamma_{2}^{2}+(f-\omega)^{2}}, \tag{5}\]
where \(\Gamma_{2}=\Gamma_{2,\mathrm{NV}}+\Gamma_{2,\mathrm{tar}}\) is the total decoherence rate. After an evolution time \(t\), the signal contrast will be
\[S=\frac{2}{3}e^{-\Gamma_{1}t}(1-e^{-\Gamma_{1}^{{}^{\prime}}t}), \tag{6}\]
where \(\Gamma_{1}=1/T_{1}\) is the intrinsic relaxation rate of the NV sensor. So the resonance condition becomes
\[f=\omega. \tag{7}\]
The energy mismatch can thus be removed, at the price of a coupling reduced by a factor of \(\kappa/4\). Now the resonance condition does not depend on \(\theta\), but the signal strength does (Fig. 1f). The tumbling-induced line broadening is completely removed.
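A direct numerical reading of Eqs. (5) and (6) is sketched below: sweeping the modulation frequency \(f\) across the target splitting produces a resonant peak in the signal contrast centred at \(f=\omega\), with a width set by the total decoherence rate. The coupling strengths, decoherence rates, evolution time, and driving index used here are illustrative assumptions, not the experimental values.

```python
import numpy as np

# illustrative parameters (assumptions); rates in MHz, time in microseconds
d_zx, d_zy = 1.0, 1.0        # NV-target dipolar couplings
Gamma2 = 20.0                # total decoherence rate Gamma_2
Gamma1 = 0.01                # intrinsic NV relaxation 1/T1
kappa = 0.5                  # relative driving index Omega/f
omega = 950.0                # target splitting
t_evol = 20.0                # evolution time

f = np.linspace(700.0, 1200.0, 501)   # swept modulation frequency
Gamma1p = 3 * kappa**2 / 64 * (d_zx**2 + d_zy**2) * Gamma2 / (Gamma2**2 + (f - omega)**2)
S = (2.0 / 3.0) * np.exp(-Gamma1 * t_evol) * (1.0 - np.exp(-Gamma1p * t_evol))   # Eq. (6)
print(f[np.argmax(S)], S.max())       # the contrast peaks at f = omega with FWHM ~ 2*Gamma_2
```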
### EPR measurements with fixed NDs
To better see the dependence of the EPR spectra on the Rabi frequency \(\Omega\), we first perform the experimental demonstrations on fixed NDs, where \(\Omega\) is well defined and adjustable. We place the NDs on a coverslip by spin coating, and observe them with a home-built confocal microscope (Fig. 2a). The 532 nm green laser and the red fluorescence are used to polarize and read out the spin state of the NV centers, respectively. Figure 2b shows the Rabi oscillation of the NV centers in one ND. As each ND contains an average of 12-14 NV centers, the Fourier transform of the Rabi oscillation shows different peaks, corresponding to differently oriented NV centers. Here \(\Omega\) is defined as the dominant Rabi frequency.
By applying the pulse sequence given in Fig. 1e, we can directly get the zero-field EPR spectrum of P1 centers (Fig. 2c), which are another kind of defects in diamond. Since the signal strength depends on the relative driving index \(\kappa\) (Eq. 5 and Eq. 6), which is proportional to \(B_{1}/f\), we sweep the driving amplitude \(B_{1}\) accordingly
during the sweep of \(f\), keeping \(\kappa\) constant. According to previous measurements on bulk diamonds [30], the zero-field EPR spectrum of \({}^{14}\)N P1 centers should have three peaks at 18 MHz, 130 MHz, and 148 MHz. The first peak is difficult to observe for NDs, because it merges with the broad background peak around \(f=0\) MHz (Fig. 2c). Here we focus on the last two. As shown in Fig. 2c, two clear peaks appear at the expected positions, even though the ND contains several differently oriented NV centers with different effective driving amplitudes. In addition, we repeat the measurement with different \(\kappa\) to simulate the rotation of NDs. The peak positions are indeed independent of \(\kappa\) (Fig. 2d), while the peak contrasts increase with \(\kappa\) (Fig. 2e), consistent with the theoretical prediction. Therefore, it is promising that the ND-EPR spectrum will be robust to the tumbling of NDs.
### EPR measurements with tumbling NDs
We then perform the measurement on tumbling NDs to directly show the robustness of our scheme. A free ND in aqueous solution undergoes both rotational and translational diffusion. To keep the ND at the focus of the laser, extra techniques such as wide-field excitation and charge-coupled device (CCD) detection [31] or real-time tracking [32] are required, which are beyond the scope of this work. In order to simplify the measurement, we use a soft 'string', which is a PEG molecule, to tether the ND to the surface of a coverslip (see Methods). Its length is roughly 120 nm, much larger than the ND diameter (\(\sim\) 40 nm) and smaller than the laser spot (\(\sim\) 800 nm). The rotational motion is therefore nearly unperturbed, while the translational motion is restricted. We use a short mPEG molecule to control the density of NDs, so that single NDs can be observed (Supplementary Note 2). We put a glycerol aqueous solution (glycerol:water = 9:1) of vanadyl sulfate with a concentration of 25 mM on the ND-tethered coverslip (see Methods), and then use the embedded ND to sense the vanadyl ions (Fig. 3a). Here we use the glycerol-water mixture rather than pure water in order to reduce the rotational diffusion rate of the vanadyl ions, as we discuss below. As shown in Fig. 3b, the Rabi oscillation decays quickly, corresponding to a wide distribution of Rabi frequencies. The oscillation also changes slowly in time, confirming the tumbling of the ND in the aqueous solution. The irregular oscillation shows the difficulty of precisely controlling the spin in a tumbling ND. Nevertheless, we can still clearly acquire the zero-field EPR spectrum of vanadyl ions.
Figure 1: **Methods for EPR measurements based on tumbling NDs.****a** Simplified model of the ND sensor and the target spin. A microwave field is applied to control the spin state of the NV center, where only the component perpendicular to the N-V axis matters. **b** Generalized Hartmann-Hahn scheme. The driving field transfers the NV center from the lab frame to a dressed frame in order to eliminate the huge energy mismatch between the NV center and the target spins. **c** Pulse sequence and corresponding energy-match condition for direct drive. The black arrows mark the scanning variables. **d** Simulated EPR spectra for direct drive. The left side is a simulation of the spectral dependence on \(\theta\), while the right side is the expected spectrum after averaging over random \(\theta\). We omit the gyromagnetic ratio \(\gamma_{\rm NV}\) for simplicity. **e** Pulse sequence and corresponding energy-match condition for amplitude-modulated drive. **f** Simulated EPR spectra for amplitude-modulated drive.
The ion exists as \([\mathrm{VO(H_{2}O})_{5}]^{2+}\) in aqueous solution at moderately low pH [33]. It consists of an electron spin \(S=1/2\) and a nuclear spin \(I=7/2\) with a hyperfine interaction between them. At zero magnetic field, the spin Hamiltonian is
\[H_{\mathrm{VO}}=A_{\perp}(S_{x}I_{x}+S_{y}I_{y})+A_{\parallel}S_{z}I_{z}, \tag{8}\]
where \(A_{\perp}=208.5\) MHz, \(A_{\parallel}=547\) MHz [33], and we neglect the small nuclear quadrupole coupling term. The eigenstates can be written as \(|T,m_{T}\rangle\) \((T=4,3,\;m_{T}=\pm T,\pm(T-1),\ldots,0)\), where \(\mathbf{T}=\mathbf{S}+\mathbf{I}\) is the total angular momentum. The eigenenergies are \(\left(-A_{\parallel}\pm 2\sqrt{m_{T}^{2}A_{\parallel}^{2}+(16-m_{T}^{2})A_{\perp}^{2}}\right)/4\), where plus and minus correspond to \(T=4\) and \(3\), respectively. Due to the axial symmetry (\(A_{x}=A_{y}=A_{\perp}\)), all the \(m_{T}\neq 0\) states are doubly degenerate, and thus the \(16\) states occupy \(9\) energy levels (Fig. 3c). In general, all the transitions that satisfy the selection rule \(\Delta m_{T}=0,\pm 1\) are observable. For each transition, the vanadyl ion can be simplified to a two-level system \(\omega T_{z}\) as in Eq. 1, with a modified dipolar coupling to the NV sensor (Supplementary Note 3).
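The zero-field level structure of Eq. (8) can be checked by brute force: diagonalizing the \(16\times 16\) hyperfine Hamiltonian reproduces the nine distinct levels and the \(m_{T}=0\) and \(|m_{T}|=1\) splittings, \(4A_{\perp}\) and \(\sqrt{A_{\parallel}^{2}+15A_{\perp}^{2}}\), quoted below. The sketch is only a numerical cross-check and uses no information beyond Eq. (8).

```python
import numpy as np

def spin_ops(s):
    """Spin-s operators (Sx, Sy, Sz) in the |s, m> basis, with m descending."""
    m = np.arange(s, -s - 1, -1)
    sz = np.diag(m).astype(complex)
    sp = np.zeros((len(m), len(m)), dtype=complex)
    for i in range(1, len(m)):
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))   # <m+1|S+|m>
    return (sp + sp.T.conj()) / 2, (sp - sp.T.conj()) / 2j, sz

A_perp, A_par = 208.5, 547.0                     # MHz
Sx, Sy, Sz = spin_ops(0.5)                       # electron spin S = 1/2
Ix, Iy, Iz = spin_ops(3.5)                       # 51V nuclear spin I = 7/2
H = A_perp * (np.kron(Sx, Ix) + np.kron(Sy, Iy)) + A_par * np.kron(Sz, Iz)

E = np.linalg.eigvalsh(H)
print(np.unique(np.round(E, 3)))                 # 9 distinct levels from 16 states
print(4 * A_perp, np.sqrt(A_par**2 + 15 * A_perp**2))   # m_T = 0 and |m_T| = 1 splittings
```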
Figure 3d gives a simulated EPR spectrum of vanadyl ions, in which up to \(12\) peaks are observable. However, peaks 5-8 overlap with the strong signal of the P1 centers and are thus hard to resolve. There also exists a strong background peak at \(D/2=1435\) MHz. This is because the amplitude-modulated driving field can be decomposed into two microwaves with frequencies \(D-f\) and \(D+f\), each of which alone can also be used for the EPR measurement. Such an off-resonant driving field is technically simpler, but induces a second-order shift of the resonance frequency depending on the relative driving index \(\kappa\) (Supplementary Note 4). In the case of poor spectral resolution, the off-resonant drive is similar to the amplitude-modulated drive. When \(f\approx D/2\), the driving field itself produces a very strong artificial signal (Supplementary Note 4). Besides, observation of the higher-frequency peaks requires stronger driving power. Therefore, our measurement focuses on the middle range, which is enough to extract the hyperfine constants. As shown in Fig. 3d, three clear peaks appear at \(780\) MHz, \(950\) MHz, and \(1150\) MHz, possibly corresponding to peaks \(1\), \(2\), and \(10\), respectively. Peak \(9\) should also appear midway between peaks \(1\) and \(2\) with a height \(\sim 67\%\) of that of peak \(10\) (Supplementary Note 3). However, if the axial microwave component also contributes to the EPR signal, the relative peak height may be reduced to \(\sim 32\%\) (Supplementary Note 5). Such a weak signal is hard to observe at the current signal-to-noise ratio.
The theoretical frequencies of peaks \(1\), \(2\), and \(10\) are \(4A_{\perp}\), \(\sqrt{A_{\parallel}^{2}+15A_{\perp}^{2}}\), and \((\sqrt{A_{\parallel}^{2}+15A_{\perp}^{2}}+\sqrt{4A_{\parallel}^{2}+12A_{\perp}^{2}})/2\), respectively. We then use these values as the peak positions to fit the spectrum (Fig. 3d). The fitted result gives \(A_{\perp}^{\mathrm{fit}}=195\pm 2\) MHz and \(A_{\parallel}^{\mathrm{fit}}=579\pm 8\) MHz, which is slightly different from previous measurements with conventional EPR [33]. Quantitative calculation shows that the signal contrast in Fig. 3d can hardly be explained by freely diffusing ions (Supplementary Note 3). There may exist an absorption layer of vanadyl ions on the ND surface [34], which contributes the main signal. Since the hyperfine constants of vanadyl ions strongly depend on the local ligand environment [35], we think the diamond surface might change this environment and thus lead to different hyperfine inter
Figure 2: **Experimental demonstrations on fixed NDs.****a** Sketch of the experimental setup. The NDs are dispersed and fixed on a coverslip, which is placed in a confocal microscope. The yellow wire indicates the coplanar waveguide used to radiate the microwave. **b** Rabi oscillation. The top is the time-domain data, while the bottom is an FFT spectrum. The highest peak indicates \(\Omega=65\) MHz. The input microwave power is \(0.6\) W. **c** Zero-field EPR spectra of P1 centers with different relative driving index \(\kappa\). The three vertical dashed lines marked \(1\), \(2\), and \(3\) indicate the theoretical peak positions of \({}^{14}\)N P1 centers. The inset shows a two-peak Lorentzian fit to a representative spectrum, used to extract the peak contrasts and positions. The fit function has an extra linear skew baseline. **d** Dependence of the peak position on \(\kappa\). The points are fitted results, where the error bars are fitting errors. The horizontal lines indicate the mean values of \(129.9\) MHz (peak \(2\)) and \(148.9\) MHz (peak \(3\)). **e** Dependence of the signal contrast on \(\kappa\). The points are fitted results, where the error bars are fitting errors. The lines are fits according to Eq. 6.
action. Repeated measurements on different NDs show different results (Supplementary Note 6), suggesting that the hyperfine constant is indeed ND dependent. Here we cannot perform a blank control because the NV centers in the NDs slowly lose spin contrast. To speed up the measurement, we repeat the experiment on ND ensembles (Supplementary Note 6), which confirms that the signal indeed comes from the vanadyl ions. The additional line broadening in the ensemble ND-EPR spectrum is consistent with the assumption of an ND-dependent hyperfine constant. Moreover, ensemble measurements on conventional EPR spectrometers rule out a dependence of the hyperfine constant on glycerol ligands (Supplementary Note 7). As the ND contains multiple NV centers, each of them may measure a different spectrum because of its different position in the ND. However, considering that the signal strongly depends on the NV depth (defined by the minimum distance of the NV center from the ND surface), it is most likely that the shallowest NV center dominates the signal.
According to Eq. 5, the spectral linewidth, defined by the full width at the half maximum (FWHM), is determined by \(2\Gamma_{2}=2\Gamma_{2,\mathrm{NV}}+2\Gamma_{2,\mathrm{VO}}\). For the ND we used, \(\Gamma_{2,\mathrm{NV}}\sim 12\) MHz is estimated from the resonance spectrum of the NV center itself (Supplementary Note 2). The relaxation of the vanadyl ion \(\Gamma_{2,\mathrm{VO}}\) is contributed by the intrinsic relaxation \(R_{\mathrm{VO}}^{\mathrm{int}}\), the dipole-dipole interaction between ions \(R_{\mathrm{VO}}^{\mathrm{dip}}\), and the rotational diffusion of ions \(R_{\mathrm{VO}}^{\mathrm{rot}}\)[36]. The intrinsic relaxation \(R_{\mathrm{VO}}^{\mathrm{int}}\) is estimated to be \(<12\) MHz according to the X band EPR spectrum [37]. The dipole-dipole mediated relaxation rate \(R_{\mathrm{VO}}^{\mathrm{dip}}=c\times 272\) MHz-M\({}^{-1}\)[36]. For \(c=25\) mM, this rate is 7 MHz. The rotational diffusion rate is calculated by [38]
\[R_{\mathrm{rot}}=\frac{k_{B}T}{8\pi r_{0}^{3}\eta}\approx 2\ \mathrm{MHz}, \tag{9}\]
where \(k_{B}\) is the Boltzmann constant, \(T=293\) K is the temperature, \(r_{0}\sim 0.37\) nm is the radius of the aqueous vanadyl ion [VO(H\({}_{2}\)O)\({}_{5}\)]\({}^{2+}\) [37], and \(\eta=0.3\) Pa\(\cdot\)s is the viscosity of the 9:1 glycerol/water mixture. \(R_{\mathrm{rot}}\) would increase to \(\sim 600\) MHz in pure water, making the measurement impossible. Note that the translational diffusion of the vanadyl ion also contributes to the line broadening [36], but it is negligible here (\(\sim\) kHz) because the viscosity of the glycerol/water mixture is much higher than that of pure water. Therefore, the estimated linewidth is \(\lesssim 66\) MHz, roughly consistent with the measured spectrum. Here \(R_{\mathrm{VO}}^{\mathrm{dip}}\) is underestimated and \(R_{\mathrm{rot}}\) is overestimated for ions in the absorption layer. As we perform the measurement at ambient conditions, the Zeeman splitting in
Figure 3: **Detection of vanadyl ions with tumbling NDs.****a** Feature of the sensor and the target. The ND is tethered by a long PEG molecule and immersed in a solution of vanadyl sulfate. The short blank PEG is used to control the density of NDs. **b** Rabi oscillation. The two lines are measurements at different times, confirming the existence of a rotational motion of the ND. **c** Energy levels of vanadyl ions. The arrows mark all the allowed transitions. The dashed and solid arrows correspond to \(\Delta m_{T}=0\) and \(\pm 1\), respectively; the gray and black solid arrows correspond to \(\Delta T=0\) and \(1\), respectively. **d** Zero-field EPR spectra of 25 mM vanadyl ions. The blue line is a simulated spectrum with \(A_{\perp}=208.5\) MHz and \(A_{\parallel}=547\) MHz. The vertical dashed lines mark the peak positions. The points are the experimental result, while the red line is a three-peak Lorentzian fit with peak positions determined by \(A_{\perp}\) and \(A_{\parallel}\). The fitted linewidths of the three peaks are 25 MHz, 58 MHz, and 42 MHz. The measurement sequence is repeated 16 million times with a duty cycle of 1:19 and a total time consumption of 7 days.
duced by the geomagnetic field (\(\sim 50\) \(\mu\)T) will also contribute to the line broadening, but it is negligible (\(<2.8\) MHz) here.
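The linewidth budget above can be tallied explicitly, as in the sketch below, which evaluates Eq. (9) and sums the quoted contributions. Reading the Stokes-Einstein-Debye rate as an angular rate and converting it to an ordinary frequency (the division by \(2\pi\)) is our assumption about the convention behind the \(\approx 2\) MHz figure.

```python
import numpy as np

kB, T = 1.380649e-23, 293.0            # J/K, K
r0, eta = 0.37e-9, 0.3                 # m, Pa*s (9:1 glycerol/water, value quoted above)

D_rot = kB * T / (8 * np.pi * r0**3 * eta)     # Eq. (9), ~1e7 s^-1
R_rot = D_rot / (2 * np.pi) / 1e6              # ~1.7 MHz (unit convention assumed)

Gamma2_NV = 12.0                               # MHz, from the NV resonance spectrum
R_int = 12.0                                   # MHz, upper estimate of intrinsic relaxation
R_dip = 25e-3 * 272.0                          # MHz, c * 272 MHz/M at c = 25 mM
fwhm = 2 * (Gamma2_NV + R_int + R_dip + R_rot)
print(R_rot, R_dip, fwhm)                      # roughly 1.7, 6.8 and ~65 MHz (<~ 66 MHz)
```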
## Discussion
In conclusion, we have presented a robust method for EPR spectroscopy based on a nanometer-sized sensor, even when the sensor itself is randomly tumbling. By applying an amplitude-modulated driving field to the NV center, the resonance condition converts from the NV orientation-dependent driving amplitude to the NV orientation-independent modulation frequency, and is thus robust to the tumbling of the host ND. As a demonstration, we show that the zero-field EPR spectrum of P1 centers is indeed robust to variations of the driving amplitude. Moreover, we measure a clear EPR spectrum of vanadyl ions with the ND sensor immersed in a solution of vanadyl sulfate. The extracted hyperfine constants may be used to study different local environments in the future. This measurement is also robust to the presence of other ions because the peak positions at zero field are determined solely by the characteristic intrinsic interaction [29; 30]. Our method opens the way to nanoscale EPR measurements in complex biological environments, such as _in vivo_ EPR inside a single cell.
Vanadium has been discovered in many biological systems and participates in various biochemical reactions [39], for example, mimicking the effect of insulin on glucose oxidation [40], although the mechanism is still unclear. Nanoscale EPR studies of the vanadyl ion may benefit the understanding of its interaction with biological molecules, if the spectral resolution can be improved. Apart from the intrinsic relaxation \(R_{\mathrm{VO}}^{\mathrm{int}}\), all other components contributing to the line broadening can be removed by technical improvements. For example, \(\Gamma_{2,\mathrm{NV}}\) can be reduced to submegahertz by using high-purity NDs [41]. \(R_{\mathrm{VO}}^{\mathrm{dip}}\) can be directly reduced by lowering the ion concentration. \(R_{\mathrm{rot}}\) can be removed by measuring the solid-state spectrum. Moreover, even \(R_{\mathrm{VO}}^{\mathrm{int}}\) itself can be reduced to \(\sim 2\) MHz by utilizing the noise-insensitive transitions [42]. We note the fundamental limit may be even better than this value if single ions can be detected [43]. By then, magnetic shielding or compensation will be required.
Although we have solved the problem of random tumbling of NDs, other challenges remain for biological applications of ND-EPR, such as cellular uptake of NDs, microwave heating, and measurement efficiency. The ND we used may be too large (\(\sim 40\) nm) to be compatible with single-cell studies. Fortunately, some progress has been made in reducing the ND size [41; 44; 45]. It is reported that even 5-nm NDs can contain NV\({}^{-}\) centers [46]. Reducing the ND size will inevitably lead to poorer charge-state stability and coherence time of NV centers inside the ND. However, for near-surface NV centers with the same depth, the ND size plays a marginal role. Since our scheme is highly surface-sensitive (Supplementary Note 3), smaller NDs will not deteriorate the performance, but will increase the probability of finding near-surface NV centers. Besides, surface engineering of NDs is another effective way to improve cellular uptake [47]. It is also helpful for increasing the charge-state stability and the coherence time of near-surface NV centers [48; 49].
The heating effect is the main form of damage that microwave radiation inflicts on living cells [50]. It is indeed a problem for the detection of vanadyl ions and other paramagnetic targets with strong intrinsic interactions, because the resonance frequency, and accordingly the required microwave power, are high. A direct but inefficient way to control the average microwave power is to prolong the idle time. During our measurement of vanadyl ions, the duty cycle is 1:19, corresponding to an average power of \(<1\) W. A better way is to optimize the microwave antenna to improve the radiation efficiency. In the future, our demonstration can be generalized to the detection of radicals with relatively lower resonance frequencies [17; 30], and then the microwave heating issue can be avoided altogether.
The current demonstration is still time consuming due to the poor measurement efficiency and signal contrast. For instance, the data in Fig. 3d cost nearly one week, of which 95% of the time is spent idle to control the average microwave power. The spin-to-charge conversion technique [51] may be an excellent solution, because it can not only achieve better readout fidelity [52; 53], but also use the idle time to perform the slow charge-state readout. As described above, surface engineering is a possible way to improve the properties of near-surface NV centers, which can thus increase the signal contrast. For example, the current Rabi contrast of only 10% (Fig. 3b) can be improved to \(\sim 30\)% via charge-state improvement, corresponding to a signal enhancement of 3 and a time saving of nearly an order of magnitude. Another benefit is that shallower NV centers may become usable, which would significantly increase the signal contrast due to its strong dependence on the NV depth. Considering that the spectrum is robust to the orientation of both the sensor and the target, we could use an ensemble of NDs simultaneously to achieve high parallel efficiency, although at the cost of spatial resolution.
## Methods
### Experimental setup
The optical part of our setup is a home-built confocal microscope, where a diode laser (CNI MGL-III-532) generates the excitation light, and the photoluminescence is detected by an avalanche photodiode (Perkin Elmer
SPCM-AQRH-14). The microwave part consists of an arbitrary waveform generator (Keysight M8190a), a microwave amplifier (Mini-circuits ZHL-16W-43+), and a coplanar waveguide.
### Chemical preparations of tumbling NDs
The surface of the ND we use (Adamas, Carboxylated 40 nm Red Fluorescent ND in DI water, \(\leqslant\) 1.5 ppm NV, 12-14 NV centers per particle, 1 mg/mL) is originally terminated with carboxyl groups. To realize the biotinylation, we cover the ND surface with amine-PEG3-biotin. The detailed procedure is as follows: we freshly prepare 100 \(\upmu\)L of solution containing 1 mM amine-PEG3-biotin (EZ-Link\({}^{\mathrm{TM}}\)) and 5 mM EDC (1-[3-(Dimethylamino)propyl]-3-ethylcarbodiimide methiodide, Sigma-Aldrich) in 100 mM MES (4-morpholineethanesulfonic acid sodium salt, pH 5.0), and mix it with 10 \(\upmu\)L of ND suspension. The reaction is allowed to proceed at room temperature for 30 min. We then add 20 \(\upmu\)L of 5 mM EDC to the ND mixture and wait 30 min, repeating this step twice to maximize the amount of amine-PEG3-biotin bound to the ND surface.
A coverslip is used as the substrate for bonding the NDs. Before use, the coverslip is thoroughly cleaned by the following procedure. First, we sonicate the coverslip in MilliQ water for 15 min to remove dirt. After that, we replace the MilliQ water with acetone, sonicate the coverslip for a further 15 min, and rinse it 3 times with MilliQ water to remove any acetone residue. The coverslip is then sonicated in 1 M KOH for 20 min and rinsed with MilliQ water. Finally, we immerse the coverslip in Piranha solution (3:1 mixture of concentrated sulphuric acid and 30% hydrogen peroxide) for 30 min at 90\({}^{\circ}\)C and rinse it with MilliQ water. After cleaning, we modify the surface of the coverslip with amino groups. We prepare the aminosilylation solution by adding 10 mL of methanol, 0.5 mL of acetic acid, and 0.3 mL of APTES (3-aminopropyltrimethoxysilane, Sigma-Aldrich) to a clean beaker. We then rinse the cleaned coverslip with methanol and immerse it in the aminosilylation solution. The reaction is allowed to proceed at room temperature for 20-30 min, during which time the coverslip is sonicated in the aminosilylation solution once for 1 min. The coverslip is then rinsed 3 times with methanol. To tether the NDs to the surface of the coverslip while maintaining their rotational movement, we bind long-chain biotinylated PEG to the surface of the coverslip and use short-chain mPEG to control the density of the biotin termination. We prepare a PEG mixture of 0.8 mg biotinylated NHS-ester PEG (20,000 Da, Aladdin) and 8 mg of NHS-ester mPEG (5,000 Da, Aladdin) in a 100 \(\upmu\)L tube, add 64 \(\upmu\)L of 0.1 M NaHCO\({}_{3}\) solution, and pipette to dissolve them completely. We then drop the PEGylation solution onto the amino-silanated coverslip and keep the environment moist to prevent the solution from drying out. We incubate the coverslip in a dark and humid environment for 5 hours, then rinse the coverslip with MilliQ water.
The final step is to attach the biotinylated ND to the biotinylated coverslip using streptavidin. We drop 1 mg/mL streptavidin solution (Sangong Biotech) onto the biotinylated coverslip, wait for 30 min and then rinse the coverslip with MilliQ water. Then we add the biotinylated ND suspension obtained in the previous step to the coverslip, wait for 30 min again and rinse the coverslip with MilliQ water.
### Chemical preparations of vanadyl ions
We dissolve vanadyl sulfate pentahydrate powder in deoxygenated MilliQ water to make 100 \(\upmu\)L of a 250 mM VO\({}^{2+}\) solution, then mix it with 900 \(\upmu\)L of deoxygenated glycerol to obtain a glycerol aqueous solution (glycerol:water = 9:1) of 25 mM VO\({}^{2+}\). The solvents are deoxygenated to prevent oxidation of the vanadyl ions. In detail, the deoxygenation is performed by purging the MilliQ water with N\({}_{2}\) under reduced pressure, and by placing the glycerol container in liquid nitrogen and then purging the glycerol with N\({}_{2}\) under reduced pressure. To keep the solution acidic, we add 10 \(\upmu\)L of 1 M sulfuric acid prepared with deoxygenated MilliQ water to the solution and mix it thoroughly. We then seal a tiny drop of the solution between the ND-bonded coverslip and the coplanar waveguide. All the above operations are done under a nitrogen atmosphere in a glove box.
## Data availability
All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Information.
|
2304.02714 | Learning Stage-wise GANs for Whistle Extraction in Time-Frequency
Spectrograms | Whistle contour extraction aims to derive animal whistles from time-frequency
spectrograms as polylines. For toothed whales, whistle extraction results can
serve as the basis for analyzing animal abundance, species identity, and social
activities. During the last few decades, as long-term recording systems have
become affordable, automated whistle extraction algorithms were proposed to
process large volumes of recording data. Recently, a deep learning-based method
demonstrated superior performance in extracting whistles under varying noise
conditions. However, training such networks requires a large amount of
labor-intensive annotation, which is not available for many species. To
overcome this limitation, we present a framework of stage-wise generative
adversarial networks (GANs), which compile new whistle data suitable for deep
model training via three stages: generation of background noise in the
spectrogram, generation of whistle contours, and generation of whistle signals.
By separating the generation of different components in the samples, our
framework composes visually promising whistle data and labels even when few
expert annotated data are available. Regardless of the amount of
human-annotated data, the proposed data augmentation framework leads to a
consistent improvement in performance of the whistle extraction model, with a
maximum increase of 1.69 in the whistle extraction mean F1-score. Our
stage-wise GAN also surpasses one single GAN in improving whistle extraction
models with augmented data. The data and code will be available at
https://github.com/Paul-LiPu/CompositeGAN\_WhistleAugment. | Pu Li, Marie Roch, Holger Klinck, Erica Fleishman, Douglas Gillespie, Eva-Marie Nosal, Yu Shiu, Xiaobai Liu | 2023-04-05T19:21:46Z | http://arxiv.org/abs/2304.02714v1 | # Learning Stage-wise GANs for Whistle Extraction in Time-Frequency Spectrograms
###### Abstract
Whistle contour extraction aims to derive animal whistles from time-frequency spectrograms as polylines. For toothed whales, whistle extraction results can serve as the basis for analyzing animal abundance, species identity, and social activities. During the last few decades, as long-term recording systems have become affordable, automated whistle extraction algorithms were proposed to process large volumes of recording data. Recently, a deep learning-based method demonstrated superior performance in extracting whistles under varying noise conditions. However, training such networks requires a large amount of labor-intensive annotation, which is not available for many species. To overcome this limitation, we present a framework of stage-wise generative adversarial networks (GANs), which compile new whistle data suitable for deep model training via three stages: generation of background noise in the spectrogram, generation of whistle contours, and generation of whistle signals. By separating the generation of different components in the samples, our framework composes visually promising whistle data and labels even when few expert annotated data are available. Regardless of the amount of human-annotated data, the proposed data augmentation framework leads to a consistent improvement in performance of the whistle extraction model, with a maximum increase of 1.69 in the whistle extraction mean F1-score. Our stage-wise GAN also surpasses one single GAN in improving whistle extraction models with augmented data. The data and code will be available at [https://github.com/Paul-LiPu/CompositeGAN_WhistleAugment](https://github.com/Paul-LiPu/CompositeGAN_WhistleAugment).
Data Augmentation, Generative Adversarial Networks.
## I Introduction
### _Background_
Spectrograms in the time \(\times\) frequency domain can show signal structure and are frequently used in audio analysis [1]. Patterns in spectrograms are used for sound event classification [2], bird song recognition [3], music genre classification [4], automatic music transcription [5], speech emotion recognition [6], and other tasks. Many acoustic signals have frequency-modulated (FM) components that are visible in spectrograms. Examples include human speech [7], human singing [8], cries of newborns [9], vocal melodies [10], and whale calls [11]. In this paper, we concentrate on whistles, the characteristic FM tonal calls of toothed whales.
However, manually annotating whistle contours for training such models is labor-intensive and time-consuming. This motivates us to explore ways to synthesize whistle data cheaply from existing data by applying learning-based data generation methods.
### _Objectives_
The primary focus of this work was to develop methods that improve whistle extraction models when data are limited, thereby reducing the amount of data annotation required to recognize whistles. Therefore, our experiments mainly addressed situations with few data, and we sought to mitigate the effect of overfitting and improve the model's transferability for recognizing tonal signals. Although there are many ways to reduce overfitting, e.g., semi-supervised learning [24] and regularization [25], we focused on data augmentation methods for two reasons. First, we seek a method that can be applied to all datasets of frequency-modulated signals, including those containing no unannotated data. Semi-supervised learning may not be applicable in this scenario. Second, we are interested in characterizing the distribution of whistle data and exploring the effect of novel data on extraction of tonal signals. Regularization terms may not provide insight in this context. We note that our data augmentation method may be combined with a semi-supervised framework or loss function regularization to further improve the system performance. Though combining these techniques would be interesting, it is beyond the scope of this work.
Common audio or image data augmentation methods usually transform existing data to acquire new data, e.g., by adding Gaussian noise [26], and the augmented samples may implicitly act as a regularizer for the training of deep models [27]. But the distribution of the augmented samples may not be similar to that of the original data; e.g., generated whistle data may have abnormal contour shapes or unrealistic background noise. Previous work [19] generated novel samples by adding whistle contours to negative samples (background noise that contains no whistle signals), which simulated the situation where the same whistles occur in different ocean environments. However, the generated data did not include novel whistle shapes or background noise patterns, which restricted the variance in the data.
In this paper, our goal is to generate novel pairs of whistle data and labels. Although changes in noise affect vocalizations of many taxa [28], including toothed whales [29], we make the simplifying assumption that background noise is independent of whistle contours (contour-shape segmentation of whistles, which indicates the location of whistles on spectrogram and the whistles' frequency modulation). On the basis of this assumption, we decouple the synthesis of background noise and whistle contours. The generated whistle contours are used as labels for the model in [19]. Next, we add generated whistle signals with the desired contour shape to the spectrogram of background noise; i.e., we generate corresponding whistle data for the whistle contour.
We design our whistle generation algorithm as a series of three generative adversarial network (GAN) modules. The first GAN learns the ocean noise environment; it maps random numbers that have a Gaussian distribution to spectrograms representing background noise. The next GAN learns to map random inputs to spectrograms with whistle-like FM sweeps. The third GAN combines the outputs of the first two GAN modules, synthetic background noises and whistles, to obtain a synthetic whistle spectrogram. The generated whistle should follow the whistle contour's shape in the input. We employ an unpaired domain transfer framework, CycleGAN [30], to learn how synthetic noise and whistles can be merged into a synthetic spectrogram. While the original CycleGAN can generate slightly misaligned whistle signals from the desired contours, we exploit the whistle extraction network learned from annotated data to enforce the bin-wise consistency between generated whistles and input contours.
Another challenge is that GANs may not learn well with limited data. This may lead to corrupted synthesis, especially of the whistle contours. We observe that corrupted data have less confident predictions: the predicted probability is neither close to 0 nor close to 1, and thus the entropy is high. Accordingly, we introduce a method to prune such low-quality generated samples. Furthermore, because imperfect learning by GANs with few data may lead to discrepancies between the distributions of real data and generated data, we employ auxiliary batch normalization (ABN) layers [31] which separate the statistics of real and generated data to reduce the possible harmful effect of training with generated data.
### _Contributions_
We made three contributions. First, we proposed the stage-wise composite GANs to generate novel whistle extraction data, including spectrograms and corresponding whistle contour labels. Our experiments showed that the proposed stage-wise GAN surpassed the vanilla GANs with respect to the visual quality of the generated data (Fig. 1 middle and right). Second, we designed a comprehensive strategy to use GAN-generated samples to improve whistle-extraction models. We set criteria to remove corrupted data and we redesigned the whistle extraction network by adding ABN layers to optimize the training with generated data. Third, we applied our proposed data augmentation methods to varied amounts of whistle extraction data and observed consistent and significant improvements. Although GAN frameworks have been used for spectrogram generation and data augmentation in audio recognition tasks [32], to our knowledge, this is the first work to apply GAN-based augmentation to audio spectrogram segmentation data.
## II Related Works
### _Whistle Contour Extraction_
There are three main classes of methods for extracting whale frequency-modulated whistles. The first is models that predict the probability of whistle peaks conditioned on past observations. Examples of this class include tests of hypothesized spectrogram region distributions [33], Bayesian inference [34], Kalman filters [20], and Monte-Carlo density filters [11][35][36][22]. The second class, trajectory-search methods, seeks energy peaks along the frequency dimension and connects those peaks along the time dimension on the basis of trajectory
estimation [12][11][37]. Improved trajectory-search methods reduce excessive numbers of false positives by applying ridge regression to local contexts [38] or energy minimization algorithms to ridge regression maps [39].
In recent years, the third class, deep learning methods, has been applied to process tonal information. Early works included extraction of information from human speech [40] and music [41]. Deep neural networks were also applied to toothed whale whistles [13][42], but the goal of these works was to classify a time segment by species or call type rather than to extract detailed time \(\times\) frequency information. In [19], we proposed a deep neural network to extract time \(\times\) frequency contours of individual whistles. We apply our proposed data augmentation system to the training of the whistle extraction model developed in [19].
### _Generative adversarial networks_
Generative adversarial networks (GANs) are a category of generative models. GANs are widely used for artificial image generation, e.g., face manipulation [43], compression noise removal [44], and generating images of people [45]. We adapted the methods from these computer vision tasks to generate realistic spectrograms that served as our novel training data. The landmark work on GANs [46] proposed one generator network that synthesizes samples (\(G:X\to Y\), where \(X\) is a random vector and \(Y\) is a generated sample) and one discriminator network that learns to distinguish between generated samples and real samples. These networks are coupled in a zero-sum game with each network trying to outperform the other. Following [46], researchers have improved the network architecture of GANs [47] and objective functions [48] to stabilize the training of GANs. Those GANs implicitly learn the distribution of real samples, and novel data can be sampled from the distribution. We employ this type of GAN to generate novel spectrogram noises and whistle contours.
Another type of GAN tackles the image-to-image translation problem, aiming to learn a mapping (\(F:x\to y\), where \(x\in X\), \(y\in Y\)) between a source domain \(X\) and a target domain \(Y\), e.g., transfer a horse in the image to a zebra. CycleGAN [30] extends this idea by learning two mappings (\(F:x\to y\), \(G:y\to x\), where \(x\in X\), \(y\in Y\)) without the need for pairwise correspondence between the elements of \(X\) and \(Y\). This idea can be adapted to our task to generate spectrograms containing whistles, where \(X\) is the domain containing pairs of desired whistle contours and spectrograms with background noise, and \(Y\) consists of spectrograms with whistles and noise. Recent work improved the idea of [30] by adding a spatial attention mechanism [49] and image quality assessment term [50].
### _GAN-based augmentation_
GANs provide an option to generate novel data by learning the distribution of existing data and sampling data from the distribution, which is a valuable addition to the common augmentation techniques that are based on data transformation. Vanilla GAN models, which map random numbers to generated samples, have been used for data augmentation. [51] trained a GAN model to augment computed tomography (CT) images of livers for the classification of lesions. [52] applied a conditional GAN to augment samples from given categories and restore the balance of imbalanced image classification data. [53] applied progressively grown GANs (PGGANs) to a brain segmentation task, and the generator learned to synthesize the generated sample and corresponding segmentation labels. Domain transfer GANs have also been used for data augmentation. [54] applied CycleGAN to day-to-night image translation, which helped to improve the object detection model.
Despite the success of GANs in synthesizing visually appealing samples and augmenting existing data, there are still limitations of GANs for synthesizing high-quality augmented data, especially for pixel-wise regression tasks such as semantic segmentation. First, GANs usually suffer from mode collapses [55]: the generated samples may have lower variance than the real samples. Second, GANs may generate samples with artifacts or failure regions [56], which may especially hamper the training of pixel-wise regression tasks. A sample selection method may be required to choose high-quality samples from the GAN-generated samples [57]. Third, the training of GANs can be unstable, which results in different distributions of generated samples and real samples [58].
Therefore, GAN-based data augmentation usually requires improving the quality of generated samples. A common solution is to use real samples or computer graphics models in the generator network. In [59], the GAN learned to generate
Fig. 2: Sketch of the proposed stage-wise GAN frameworks. The first two generators produce a spectrogram patch of background noise and a spectrogram patch of foreground whistle contour, respectively. These patches serve as inputs for the third generator.
samples conditioned on real samples and random numbers. Similarly, [60] transferred synthetic images built by computer graphics models to realistic images, and the augmented samples improved models in estimation of gazes, hand poses, and animal poses. Another way to improve training of the GAN is to use supervision from target tasks. [61] added an auxiliary classification head on the discriminator of GAN and used the classification loss to guide discriminator and generator learning.
Recently, stage-wise GANs were proposed to augment data for pixel-wise regression tasks. [62] employed a two-stage GAN augmentation of cell nuclei segmentation data. Their framework generates a cell nuclei segmentation mask in the first stage and images of nuclei in the second stage. Our proposed method is closely related to [62], and we further separate the learning of object appearance and the segmentation mask. This separation can be extended to other semantic segmentation scenarios. For example, when generating a scene containing road and cars, our framework may first generate the appearance of the road and car independent of the segmentation mask, then generate an image of the scene according to the segmentation mask and the appearance of the objects (road and cars). In this way, our framework explores the distribution of object appearance and provides variance in the appearance of objects in the generated image of the scene. Another improvement is that we employ the knowledge from segmentation networks to regularize bin-wise correspondence between generated samples and labels.
## III Methods
The objective of this work is to develop a data augmentation approach to generate novel data for whistle extraction. We treat the cropped patches from the time-frequency spectrograms as data samples, and we employ stage-wise GANs, which we call WAS-GANs (Whistle **A**ugmentation **S**tage-wise **G**enerative **A**dversarial **N**etworks), to generate both negative samples (noise only) and positive samples (whistles in the presence of noise). Our techniques can be extended to other acoustic tasks or computer vision tasks, e.g., sound classification and semantic segmentation.
Fig. 2 illustrates the three stages of our sample generation approach. In Stage 1, a Wasserstein GAN with gradient penalty (WGAN-gp) [48] learns to produce the negative samples containing background noises. In Stage 2, we train another WGAN-gp model with the real whistle contour annotations to generate whistle contour segmentation masks. In Stage 3, we use a CycleGAN [30] to generate positive samples. The whistle signals are added to the negative samples obtained in Stage 1 according to contour shapes defined in Stage 2. The positive samples and segmentation masks are used as the whistle extraction data and labels, respectively. Both generated negative samples and positive samples are used to train the whistle extraction model, and the resulting whistle extraction performance is used to assess our GAN-based augmentation.
### _GAN-based negative sample synthesis_
We assume that the underwater background noise (negative samples) follows an implicit distribution. The generator learns the mapping between a multivariate Gaussian distribution and the distribution of negative samples. While many GAN models can learn this mapping, we chose WGAN-gp because its training is relatively stable [48]. The model includes a generator network, \(G\), and a discriminator network, \(D\). Network \(G\) maps a multivariate Gaussian random variable to generate negative samples. Network \(D\) estimates the Wasserstein distance between real samples and generated background noise (negative) samples. We denote \(P_{r}\) as the distribution of real data \(x\); \(P_{g}\) as the distribution of generated data implicitly defined by \(\widetilde{x}=G(z)\), where \(z\) is a random vector following the standard multivariate Gaussian distribution; and \(\hat{x}\) as a randomly weighted sum of x and \(\widetilde{x}\). The loss function for the discriminator network is defined as
\[L=\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\ \left[D\left(\tilde{x}\right)\right]-\mathbb{E}_{x\sim\mathbb{P}_{r}}\ \left[D\left(x\right)\right]+\\ \lambda\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\ \left[\left(\left\|\nabla_{\hat{x}}D\left(\hat{x}\right)\right\|_{2}-1\right)^{2}\right] \tag{1}\]
where \(\nabla_{\hat{x}}D\left(\hat{x}\right)\) is the gradient of discriminator \(D\)'s output on \(\hat{x}\). This loss function encourages the discriminator to maximize the estimated Wasserstein distance between real and generated samples. The gradient penalty term \(\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\ \left[\left(\left\|\nabla_{\hat{x}}D \left(\hat{x}\right)\right\|_{2}-1\right)^{2}\right]\) enforces a soft version of Lipschitz constraint on the discriminator network. The loss function for the generator network is
\[L_{G}=\mathbb{E}_{z}\ \left[-D\left(G(z)\right)\right] \tag{2}\]
which encourages the generator to generate samples that have a small estimated Wasserstein distance from the real samples, i.e., to follow a distribution similar to that of the real data.
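For concreteness, a minimal PyTorch sketch of these two objectives is shown below. The generator \(G\), critic \(D\), and the latent batch \(z\) are assumed to be defined elsewhere (this is an illustrative sketch, not the released implementation), and the gradient-penalty weight follows the common choice \(\lambda=10\).

```python
import torch

def wgan_gp_step(D, G, x_real, z, lam=10.0):
    """Sketch of the WGAN-gp objectives in Eqs. (1)-(2).  D maps a (B, 1, 64, 64)
    patch to a scalar score per sample; G maps a latent vector to a patch."""
    x_fake = G(z)
    x_fake_d = x_fake.detach()

    # Gradient penalty on random interpolates x_hat between real and generated patches.
    eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (eps * x_real + (1.0 - eps) * x_fake_d).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    loss_D = D(x_fake_d).mean() - D(x_real).mean() + lam * gp   # Eq. (1)
    loss_G = -D(x_fake).mean()                                  # Eq. (2)
    return loss_D, loss_G
```

In practice the critic is updated for several steps per generator step (five in our setup, Section IV-B2).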
### _GAN-based positive sample synthesis_
We split synthesis of positive samples (spectrograms containing whistles) into two stages: generation of whistle contours and injection of the whistle into synthetic background noise. In the first stage, we employ the same networks and loss functions as in Section III-A, given the assumption that the shape of whistle contours is independent of the underwater environments.
Fig. 3: Illustration of whistle contour selection. Low-quality generated patches are highlighted by red bounding boxes. Multiple 64\(\times\)64 patches are concatenated.
In the second stage, we aim to generate positive samples according to the synthetic background noise and whistle contours. We treat this as an unpaired domain transfer task, which can be solved effectively by CycleGAN [30]. Our source domain, A, consists of pairs of negative samples and whistle contours, and the target domain B includes positive samples. We adopt the CycleGAN from [30] for our experiments, but any improved model readily can be used in our framework.
There are two sets of generator and discriminator networks in CycleGAN. \(G_{A}\) denotes the generator network that transfers samples from domain \(A\) to domain \(B\), i.e., generates whistle with the desired shape on the background noise spectrogram. \(D_{A}\) denotes the discriminator network that distinguishes between real and generated spectrograms in domain \(B\). \(G_{B}\) is the network that transfers samples from domain \(B\) to domain \(A\), effectively separating the whistle contour from the background noise. Because we assume that the whistle contour and background noise are independent, we do not use a single \(D_{B}\) network for the joint distribution of whistle contours and background noise. Instead, we use two \(D_{B}\) networks for the marginal distributions, one to discriminate negative samples and one to discriminate whistle contours.
Instead of directly generating positive samples by \(G_{A}\), we let \(G_{A}\) predict a residual term (whistle signals without background noises) to be added to the negative samples. By denoting a negative sample as \(I_{N}\), a whistle contour as \(I_{W}\), and the generated positive sample as \(I^{\prime}_{P}\), the process can be described as
\[I^{\prime}_{P}=I_{N}+\gamma G_{A}(I_{N},\ I_{W}) \tag{3}\]
where \(\gamma\) is a factor that controls the signal strength and accounts for variability in the received signal level. This parameter can simulate the variation in signal strength caused by variation in signal source strength or the distance between the animal and recording devices.
To enforce the bin-wise correspondence between generated positive samples and whistle contours, i.e., to avoid misalignment between generated whistle extraction data and labels, we use the whistle extraction models, which are trained on the same set of real samples as CycleGAN, to design a regularization term for \(G_{A}\) training. We call this term a loss function for the pixel-wise consistency, and represent it as
\[L_{consistence}=||f(I^{\prime}_{P})-I_{W}||_{1} \tag{4}\]
where \(f\) denotes the whistle extraction model and \(f(x)\) is the model's output, a confidence map indicating the presence of whistle energy in each bin of the spectrogram, with an input \(x\). This loss encourages the whistle signals to appear at the same position as the desired whistle contour.
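A sketch of how Eqs. (3)-(4) fit together is given below, assuming \(G_{A}\) receives the channel-concatenated (noise, contour) pair and that the whistle extraction model trained on real data is available as a frozen PyTorch module; the function and argument names are illustrative only.

```python
import torch
import torch.nn.functional as F

def compose_positive(G_A, whistle_net, I_N, I_W, gamma_range=(0.5, 1.5)):
    """Sketch of Eqs. (3)-(4).  G_A returns a whistle residual; whistle_net is the
    extraction model with frozen parameters.  The 2-channel input convention is
    an assumption of this sketch."""
    gamma = torch.empty(I_N.size(0), 1, 1, 1, device=I_N.device).uniform_(*gamma_range)
    residual = G_A(torch.cat([I_N, I_W], dim=1))      # whistle signal without noise
    I_P_fake = I_N + gamma * residual                 # Eq. (3)

    # Eq. (4): bin-wise L1 consistency between the extraction model's confidence
    # map on the generated patch and the target contour.  Gradients flow through
    # whistle_net's activations back into G_A, but its weights are not updated.
    L_consistency = F.l1_loss(whistle_net(I_P_fake), I_W)
    return I_P_fake, L_consistency
```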
To guarantee that the generated positive samples have the same background magnitude as the input negative samples, we also include the identity loss,
\[L_{identity}=||G_{A}(I_{N},0)||_{1}+||G_{B}(I_{N})-(I_{N},0)||_{1} \tag{5}\]
where \(0\) indicates an empty whistle contour input, i.e., we do not want the CycleGAN to generate any whistles. We denote \((I_{N},0)\) as the concatenated \(I_{N}\) and empty whistle segmentation map. \(G_{A}\) should produce residuals of zero when there are no input whistle contours. We also use adversarial loss, \(L_{D_{A}}\), \(L_{D_{B}}\), \(L_{G_{A}}\), \(L_{G_{B}}\), and cycle consistence loss (\(L_{cyc}\)) from CycleGAN
\[L_{D_{A}}=(D_{A}(I_{P})-1)^{2}+(D_{A}(I^{\prime}_{P}))^{2} \tag{6}\]
\[L_{D_{B}}=(D_{B}(I_{N},\ I_{W})-1)^{2}+(D_{B}(G_{B}(I_{P})))^{2} \tag{7}\]
\[L_{G_{A}}=(D_{A}(I^{\prime}_{P})\ -\ 1)^{2} \tag{8}\]
\[L_{G_{B}}=(D_{B}(G_{B}(I_{P}))-\ 1)^{2} \tag{9}\]
\[L_{cyc}=||G_{B}(I^{\prime}_{P})-(I_{N},\ I_{W})||_{1}+\\ ||G_{A}(G_{B}(I_{P}))-I_{P}||_{1} \tag{10}\]
where \(I_{P}\) refers to real positive samples. We simplify the notation of two \(D_{B}\) networks in one \(D_{B}\) function in the above equation. The full objective for generators is
\[L_{G}=L_{G_{A}}+L_{G_{B}}\ +\lambda_{0}L_{cyc}+\lambda_{1}L_{consistence}+ \lambda_{2}L_{identity} \tag{11}\]
where \(\lambda_{0}\), \(\lambda_{1}\), and \(\lambda_{2}\) control the relative importance of the corresponding loss items. The full objective of the discriminator is
\[L_{D}=L_{D_{A}}+L_{D_{B}} \tag{12}\]
Ideally, \(D_{A}\), \(D_{B}\) will assign 1 to real samples and assign 0 to generated samples with this training objective. \(G_{A}\), \(G_{B}\) will try to fool the discriminators and generate realistic samples.
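The generator objective of Eq. (11) can then be assembled as in the following sketch, which uses least-squares adversarial terms as in Eqs. (8)-(9) and mean-reduced \(L_{1}\) norms. The random factor \(\gamma\) of Eq. (3) is omitted for brevity, and the module names and the two-channel output convention of \(G_{B}\) are assumptions of this sketch rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def generator_objective(G_A, G_B, D_A, D_B_noise, D_B_contour, whistle_net,
                        I_N, I_W, I_P, lam0=10.0, lam1=0.5, lam2=0.5):
    """Sketch of Eq. (11).  G_A maps a 2-channel (noise, contour) pair to a whistle
    residual; G_B maps a positive sample to a 2-channel (noise, contour) pair; the
    two D_B networks discriminate the noise and contour channels separately."""
    pair = torch.cat([I_N, I_W], dim=1)
    zeros = torch.zeros_like(I_W)

    I_P_fake = I_N + G_A(pair)                                   # Eq. (3), gamma = 1 here
    rec = G_B(I_P)                                               # B -> A reconstruction
    rec_N, rec_W = rec.chunk(2, dim=1)

    # Adversarial terms, Eqs. (8)-(9), in least-squares form.
    L_GA = (D_A(I_P_fake) - 1.0).pow(2).mean()
    L_GB = ((D_B_noise(rec_N) - 1.0).pow(2).mean()
            + (D_B_contour(rec_W) - 1.0).pow(2).mean())

    # Cycle consistency, Eq. (10); the B -> A -> B direction keeps the residual convention.
    L_cyc = (F.l1_loss(G_B(I_P_fake), pair)
             + F.l1_loss(rec_N + G_A(rec), I_P))

    # Pixel-wise consistency with the frozen extraction model, Eq. (4).
    L_cons = F.l1_loss(whistle_net(I_P_fake), I_W)

    # Identity terms, Eq. (5): no whistle in, no residual out.
    L_id = (G_A(torch.cat([I_N, zeros], dim=1)).abs().mean()
            + F.l1_loss(G_B(I_N), torch.cat([I_N, zeros], dim=1)))

    return L_GA + L_GB + lam0 * L_cyc + lam1 * L_cons + lam2 * L_id
```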
### _Whistle extraction model_
We use the whistle extraction model from [19] as our baseline. This model, which is similar to a selective edge detection model, produces a confidence map of the whistle signals. Although the generated samples are visually similar to real samples (Fig. 3), the distributions of the real and generated whistle contour may differ due to the imperfect training of GAN when data are limited. This discrepancy decreases the accuracy of our whistle extraction model when we use the generated samples for data augmentation. Therefore, we use ABN layers [31]; i.e., we use auxiliary BatchNorm (BN) layers for forwarding generated samples and normal BN layers for real samples. We share the same convolutional layers for real and generated samples. By denoting the input sample as \(x\), the whistle signal label as \(y\), and the whistle extraction model as \(f\), the loss without ABN can be described as
\[L=||y-f(x))||_{2} \tag{13}\]
The loss with ABN is
\[L=\frac{1}{1+\lambda}\left(||y_{real}-f(x_{real})||_{2}+\\ \lambda||y_{fake}-f_{abn}(x_{fake})||_{2}\right) \tag{14}\]
where \(x_{real}\), \(y_{real}\) are the real samples and labels, respectively, and \(x_{fake},y_{fake}\) are the generated samples and labels, respectively. \(\lambda\) is a factor to adjust the weights of real data and generated data in loss calculation. \(f_{abn}(x)\) denotes the output of the whistle extraction model for input \(x\) when the auxiliary BN layer is used in forwarding. We empirically find
that ABN layers improve the whistle extraction performance when the distributions of the generated and real samples may be different.
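A minimal sketch of an ABN layer and the weighted objective of Eq. (14) is shown below; routing the `fake` flag from the model's forward pass down to its normalization layers is an implementation detail we assume here, and the mean-squared form of the norm is likewise an assumption of this sketch.

```python
import torch.nn as nn

class ABN2d(nn.Module):
    """Auxiliary batch normalization: one BN branch for real inputs and a separate
    BN branch for GAN-generated inputs, while the surrounding convolutions are shared."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_real = nn.BatchNorm2d(num_features)
        self.bn_fake = nn.BatchNorm2d(num_features)

    def forward(self, x, fake=False):
        return self.bn_fake(x) if fake else self.bn_real(x)

def abn_loss(model, x_real, y_real, x_fake, y_fake, lam=1.0):
    # Eq. (14): real and generated batches are normalized with separate BN statistics.
    loss_real = (y_real - model(x_real, fake=False)).pow(2).mean()
    loss_fake = (y_fake - model(x_fake, fake=True)).pow(2).mean()
    return (loss_real + lam * loss_fake) / (1.0 + lam)
```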
The quality of GAN-synthesized samples is affected by the number of real samples available for training. The generator may synthesize poor-quality samples when the number of real examples used in GAN training is low. Fig. 3 provides examples of synthetic whistle contours when 2500 real positive samples are used for GAN training, including whistle contours that are of poor quality. Therefore, we designed two heuristic conditions for selecting high-quality generated samples. Denoting the value of an individual bin in the whistle contour patch as p, we select the sample for training the whistle extraction model when
\[\sum-p\log p\ <T_{e} \tag{15}\]
and
\[\sum\delta(p-T_{c})>T_{p} \tag{16}\]
where
\[\delta(x)=\begin{cases}0&x\leq 0\\ 1&x>0\end{cases} \tag{17}\]
\(T_{e}\) is a threshold for the sum of the pixel entropy, so the first condition removes generated whistles with diffuse medium-intensity signals (high entropy). The second condition chooses samples in which more than \(T_{p}\) bins have intensity above \(T_{c}\), allowing samples with short whistle fragments to be removed.
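These two conditions translate into a short filter over generated contour patches, sketched below with the threshold values quoted later in Section IV; a natural logarithm is assumed for the entropy term.

```python
import numpy as np

def keep_generated_contour(patch, T_e=70.0, T_c=0.5, T_p=64, eps=1e-12):
    """Heuristic filter of Eqs. (15)-(17) for a 64 x 64 generated contour patch
    with bin values p in [0, 1]."""
    p = np.clip(patch, eps, 1.0)                 # avoid log(0)
    entropy = np.sum(-p * np.log(p))             # Eq. (15): reject diffuse, high-entropy patches
    strong_bins = np.sum(patch > T_c)            # Eq. (16): require enough confident whistle bins
    return (entropy < T_e) and (strong_bins > T_p)
```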
## IV Data and Implementation
### _Datasets_
We used the whistle extraction data from the 2011 workshop on detection, classification, localization, and density estimation of marine mammals (DCLDE 2011, available on the MobySound Archive [63]). These data contain recordings of calls made by five toothed whale species: long-beaked common dolphins (_Delphinus capensis_), short-beaked common dolphins (_Delphinus delphis_), bottlenose dolphins (_Tursiops truncatus_), melon-headed whales (_Peponocephala electra_), and spinner dolphins (_Stenella longirostris_). Whistle contours were annotated by trained analysts across the 5-50 kHz bandwidth as described in [11]. We use 30 recordings from the 5 species to train and 12 recordings from 4 species to test. Short-beaked common dolphins are removed from evaluation because some of the files had annotation errors. The training data consisted of approximately 127 min of recordings with 12,539 annotated whistles. The test data (\(\sim\)43 min of acoustic data) contained 6,011 annotated whistles.
We computed log-magnitude spectrograms for the whistle extraction model and the GAN-based data synthesis. We employed a series of discrete Fourier transforms in spectrogram computation. 8 ms Hamming-windowed frames (125 Hz bandwidth) were computed every 2 ms, and we empirically restricted the dynamic range of the \(\log_{10}\) magnitude spectrogram to the range [0, 6] (an intensity range of 0 to 120 dB rel.), i.e., we transformed the values \(<\)0 to 0, and those \(>\)6 to 6. We divided the spectrogram values by 6 to bring them within [0, 1], and discarded the spectrogram values outside of the annotation frequency range of 5-50 kHz (361 frequency bins), which covers the frequency range of most delphinid whistles and their harmonics.
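A sketch of this preprocessing is given below using `scipy.signal.stft`; the sample rate `fs` is left as a parameter, and the absolute magnitude scaling (and hence the clipping constants) depends on the STFT normalization convention, so the constants should be read as illustrative rather than as the exact released pipeline.

```python
import numpy as np
from scipy.signal import stft

def whistle_spectrogram(x, fs, f_lo=5_000, f_hi=50_000):
    """Log-magnitude spectrogram: 8 ms Hamming frames every 2 ms (125 Hz bins),
    log10 magnitude clipped to [0, 6], scaled to [0, 1], restricted to 5-50 kHz."""
    nperseg = int(round(0.008 * fs))             # 8 ms frame -> 125 Hz bin spacing
    hop = int(round(0.002 * fs))                 # 2 ms frame advance
    f, t, Z = stft(x, fs=fs, window="hamming", nperseg=nperseg,
                   noverlap=nperseg - hop, boundary=None, padded=False)
    S = np.log10(np.abs(Z) + 1e-12)
    S = np.clip(S, 0.0, 6.0) / 6.0               # dynamic range 0-120 dB, scaled to [0, 1]
    band = (f >= f_lo) & (f <= f_hi)             # 361 bins for 125 Hz spacing
    return f[band], t, S[band]
```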
For network training, we partitioned the spectrogram into 64 \(\times\) 64 patches, where each patch covered a time interval of
Fig. 4: Illustration of whistle extraction. (Top) spectrogram visualized by _Silbido_[11]; (Bottom) extracted whistles, where each whistle is highlighted with a different color.
Fig. 5: Mean spectral peak detection F1-score (upper) and mean whistle extraction F1-score (lower) against the number of real positive samples in the training set. Optimal Dataset Scale (ODS) is an edge detection metric that assesses peak detection. "w/o GAN" and "w GAN" indicate the performance without and with GAN augmentation, respectively.
128 ms and frequency interval of 8 kHz. For the training data, we selected the positive patches with a sliding window with a 25 pixel step size across portions of spectrograms containing whistles, which led to 115,968 positive patches available for training. We randomly selected the same number of negative patches, which only contain noise, and combined them with positive patches as our training data (referred to as the full dataset). Most of our experiments used a subset of the full data (referred to as a reduced dataset). We describe the details of generating the reduced dataset in Section V-A.
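Patch extraction then amounts to a sliding window over the spectrogram and its contour mask, as sketched below; whether the window slides along both axes or only along time over whistle-bearing regions is an implementation detail not fully specified here, and this sketch slides along both.

```python
import numpy as np

def extract_patches(spec, mask, patch=64, step=25):
    """Cut a spectrogram (freq x time) and its aligned whistle-contour mask into
    64 x 64 patches with a 25-bin step.  Patches whose mask contains any whistle
    bins are treated as positive, the rest as negative."""
    positives, negatives = [], []
    n_f, n_t = spec.shape
    for i in range(0, n_f - patch + 1, step):
        for j in range(0, n_t - patch + 1, step):
            s = spec[i:i + patch, j:j + patch]
            m = mask[i:i + patch, j:j + patch]
            (positives if m.any() else negatives).append((s, m))
    return positives, negatives
```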
### _Networks and Algorithms_
#### Iv-B1 Whistle extraction network
We used the same network architecture as [19]. The model has 10 convolutional layers, including 1 input layer, 4 residual blocks (each block contains two convolution layers), and 1 output layer. The input layer and output layer use kernel size 5 and padding size of 2, and other layers use a kernel size of 3 and a padding size of 1. All hidden layers have 32 channels. The model input is a one-channel spectrogram and the output is a confidence map of whistle occurrence. The size of the output confidence map is the same as that of the input spectrogram.
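The layer sizes above translate into the following PyTorch sketch. The activation choices (ReLU inside the blocks, sigmoid on the output) and the omission of the batch-normalization/ABN layers discussed in Section III-C are simplifications of this sketch, not a description of the released model.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

class WhistleNet(nn.Module):
    """10 convolutional layers: a 5x5 input convolution, four residual blocks of
    two 3x3 convolutions each, and a 5x5 output convolution, all 32 channels wide."""
    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, ch, 5, padding=2), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(4)])
        self.tail = nn.Conv2d(ch, 1, 5, padding=2)

    def forward(self, x):                      # x: (B, 1, 64, 64) spectrogram patch
        return torch.sigmoid(self.tail(self.blocks(self.head(x))))
```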
We trained the whistle extraction model with an Adam optimizer (initial learning rate=\(1\times 10^{-3}\), betas = [0.9, 0.999], weight decay=\(5\times 10^{-4}\)) for \(1\times 10^{6}\) and \(3\times 10^{5}\) iterations on the full dataset and reduced datasets, respectively. The learning rate was multiplied by 0.1 every \(4\times 10^{5}\) and \(1\times 10^{5}\) iterations for the full and reduced datasets, respectively. We set the batch size to 64, and we used 64 real samples and 64 generated samples in each iteration for data augmentation experiments. We used \(\lambda\)=1 in the loss function of Eq. 14 for our experiments with generated data, which makes the generated samples contribute to the loss on an equal footing with the real examples.
#### Iv-B2 WGAN
We used the same WGAN architecture for the generation of whistle contours and negative samples. The generator network uses a fully-connected layer to output feature maps of size (512,4,4) from a 128-dimensional standard Gaussian distribution. Four groups of convolutional layers and pixel shuffle layers are used to gradually enlarge the feature map to \(64\times 64\). A Tanh layer is used to output the \(64\times 64\) patch. The discriminator network takes the generated samples and real samples as input, and outputs the Wasserstein distance estimation. It contains 4 convolutional layers with a stride of 2 and a fully connected layer. The networks are optimized by Adam optimizers (initial learning rate = \(1\times 10^{-4}\), betas = [0.5, 0.9], batch size = 64) for \(3\times 10^{4}\) and \(5\times 10^{4}\) iterations on the reduced and full datasets, respectively. In each WGAN training iteration, the discriminator is optimized for 5 steps while the generator is optimized for 1 step, where the network parameters are updated by applying the optimizer to one mini-batch of data in each step. For sample selection, we used \(T_{e}\)=70, \(T_{c}\)=0.5, \(T_{p}\)=64.
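The generator and discriminator described above can be sketched as follows; the intermediate channel widths, the LeakyReLU activations, and the final convolution before the Tanh output are assumptions made for illustration.

```python
import torch.nn as nn

def up_block(c_in, c_out):
    # conv followed by pixel shuffle: doubles the spatial size, leaves c_out channels
    return nn.Sequential(nn.Conv2d(c_in, 4 * c_out, 3, padding=1),
                         nn.PixelShuffle(2), nn.ReLU(inplace=True))

class PatchGenerator(nn.Module):
    """Fully connected layer reshaped to (512, 4, 4), four conv + pixel-shuffle
    groups up to 64 x 64, and a Tanh output layer."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.fc = nn.Linear(z_dim, 512 * 4 * 4)
        self.ups = nn.Sequential(up_block(512, 256), up_block(256, 128),
                                 up_block(128, 64), up_block(64, 32))
        self.out = nn.Sequential(nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, z):
        x = self.fc(z).view(-1, 512, 4, 4)
        return self.out(self.ups(x))           # (B, 1, 64, 64) patch

class Critic(nn.Module):
    """Four stride-2 convolutions followed by a fully connected layer producing
    the Wasserstein-distance estimate (no BatchNorm, as usual for WGAN-gp critics)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True))
        self.fc = nn.Linear(512 * 4 * 4, 1)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))
```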
#### Iv-B3 CycleGAN
The GAN model that we used to add whistles to synthetic noise employs the CycleGAN architecture of [30]. The generators follow the U-Net [64] architecture, which has 6 U-Net blocks with a basic width of 64. InstanceNorm layers are used in the U-Net blocks. The discriminator is a fully convolutional network with 3 convolutional layers. We trained the generators and discriminators with Adam optimizers (learning rate = \(2\times 10^{-4}\), betas = [0.5, 0.999], batch size = 64) for 25,120 iterations (160 epochs for 10,000 real positive samples) for the reduced dataset and 50 epochs for the full dataset. We set \(\lambda_{0}\)=10, \(\lambda_{1}\)=0.5, and \(\lambda_{2}\)=0.5 for Eq. 11. We apply a random \(\gamma\) following a uniform distribution on (0.5, 1.5) in Eq. 3.
#### Iv-B4 Graph Search
We adapted the graph search [11] algorithm to the outputs of the whistle extraction network to predict individual whistles. This algorithm maintains sets of graphs, the nodes of which indicate the trace of predicted whistle contours. Multiple crossing whistles can be represented by a single graph. At each time step, local maximum points (peaks) on the confidence map are selected along the frequency dimension, and peaks with confidence greater than 0.5 are retained as candidate points. For each candidate point, the algorithm either initiates a new graph or extends terminating nodes of existing graphs. Extensions are made when the new node is along a reasonable trajectory predicted by a low-order polynomial fit of the graph path near a terminating node. Graphs that have not been extended within a specified time are considered closed. Closed graphs are removed from the current graph set. When a graph is of a shorter duration than a settable minimum whistle duration, it is discarded. Otherwise individual whistles are extracted from the graph on the basis of an analysis of graph vertices.
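The first step of this procedure, selecting candidate peaks from the confidence map, is illustrated below; graph construction, polynomial trajectory fitting, and vertex analysis are not shown.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_peaks(confidence, threshold=0.5):
    """For each time frame of the confidence map (freq x time), keep local maxima
    along the frequency axis whose confidence exceeds the threshold."""
    candidates = []                              # list of (time_idx, freq_idx, confidence)
    for t in range(confidence.shape[1]):
        peaks, _ = find_peaks(confidence[:, t], height=threshold)
        candidates.extend((t, int(f), float(confidence[f, t])) for f in peaks)
    return candidates
```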
### _Metrics_
#### Iv-C1 Evaluation of confidence maps
We first assessed the quality of the whistle-energy confidence maps predicted by the whistle-extraction model. To do this, we utilized the BSDS 500 benchmark tools and protocol [65] to calculate the highest dataset-scale F1-score across various thresholds (referred to as the "Optimal Dataset Scale," or ODS). We thinned each ground-truth whistle to a width of one pixel and compared them to predicted confidence maps that were binarized using 50 evenly distributed thresholds between 0 and 1. All default parameters within the benchmark tool were used in our evaluation.
#### Iv-C2 Evaluation of whistle extraction
We used _Silbido_[11] to evaluate the quality of whistle extraction after the graph search was applied to the confidence map. This library calculates recall, the percentage of validated whistle contours that were detected; and precision, the percentage of detections that were correct. Then we calculated the precision, recall, and F1-score on testing files of each species and averaged them among all species. We determined the success or failure of whistle extraction results by examining the set of expected analyst annotations as described in [11]. We checked whether any of the detections overlapped with the analyst-annotated whistle contour in time. If so, we examined whether each overlapping detection matched the analyst's annotation. When the average deviation in frequency between the detected contour and annotation was \(<\) 350 Hz and the analyst detections had lengths \(\geq\) 150 ms, with a signal-to-noise ratio \(\geq\) 10 dB over at least 30% of the whistle, we classified the overlapping detections as matched detections. When an annotated whistle
did not meet the above criteria (too short or low intensity), we discarded any matching detections, and they did not contribute to the metrics. We classified unmatched detections as false positives.
## V Experiments and Results
### _Varied number of annotated samples_
We first studied the effect of varying the amount of training data for our whistle extraction network. Because annotation is expensive, a key motivation for data augmentation is to reduce the number of annotations required. Training effective deep-learning models requires a considerable amount of high-quality annotated data [66]. For the whistle extraction task in this paper, it remains unclear how the whistle models perform when the amount of annotated data varies. To address this issue, we conducted 6 experiments that selected n positive patches and n negative patches, where n = 500, 1000, 2500, 5000, 7500, or 10000. Random selection of patches was structured to ensure that smaller datasets were subsets of larger ones. We repeated this process five times to obtain 5 datasets for each n. For each dataset, we trained whistle extraction models 5 times, and report average performance.
The experimental results are shown in Fig. 5. The black curves show the performance of the confidence map (ODS) and whistle extraction (F1-score) (upper and lower plots, respectively) with respect to the quantity of training data. The ODS quantifies the performance of the whistle extraction model in detecting the presence and shape of the whistles; the results show that the average ODS increases with more training data. The increase in whistle extraction F1-score follows the same trend as ODS. Our results show that increasing the amount of annotated data substantially improves the performance of whistle detection. At the same time, as the amount of data increases, the rate of performance improvement decreases, which means that exponentially more data may be needed to increase performance by 1 unit when the initial dataset is larger.
### _Data augmentation_
We also studied the effect of varying dataset size on GAN training and data augmentation. In this set of experiments, we applied the proposed augmentation method to augment n = 1000, 2500, and 10000 positive samples and negative samples. In each experiment, we generated 10 \(\times\) n samples with our
Fig. 6: Real background noise samples (upper left); GAN-generated background noise samples (upper right); real whistle samples (bottom left); GAN-generated whistle samples (bottom right). Multiple 64 \(\times\) 64 patches are concatenated in each category for better visualization of the data variance.
WAS-GAN. All GAN networks were randomly initialized and trained once per dataset. For each augmented dataset, we trained the whistle extraction model with ABN five times.
Fig. 6 shows examples of samples generated by our WAS-GAN (n = 2500). By visually comparing the real samples and generated samples, we see that the noise patterns and whistle signal patterns are well simulated by our GAN networks, e.g., the clicks (wide vertical band of high energy across the frequency domain) are simulated well, as are the width and strength of whistle signals.
Table I reports the experiment's ability to correctly predict time-frequency peaks associated with whistles (mean ODS) and to correctly extract whistles from these predictions (mean F1-score). Consistent performance improvements were obtained for both measures. Our methods obtained gains of 1.53, 2.41, and 0.74 in mean ODS, and 0.99, 1.69, and 0.51 in mean F1-score for the three augmentation experiments when n=1000, 2500, and 10000 training patches, respectively. We also obtained improvements of 0.38 and 0.46 in the mean ODS and mean F1-score, respectively, when we used WAS-GAN on the full dataset. In comparison to experiments using n=10000, we utilized over 100,000 additional annotated samples in our full dataset experiment. These samples were manually labeled as opposed to our GAN augmented samples, and this led to an increase of 2.72 in the whistle extraction F1-score. Without our GAN-generated samples, in order to achieve a 0.46 increase in the F1-score by adding more human-annotated samples to our current dataset, we would have to annotate tens of thousands more samples. The training stability was notably improved (with a reduction in the variance of the F1 metrics) with the addition of the generated data. These improvements highlight the effectiveness of our proposed stage-wise, GAN-based data augmentation method: the use of augmented data improves spectral peak detection results, which in turn also improves whistle contour extraction results.
### _Ablation study_
We conducted a set of ablation experiments to examine the contributions of different components of the proposed method. We chose datasets with n = 2500 samples for these experiments. The quantitative results are shown in Table III.
#### Iv-C1 Residual learning
In this ablation experiment, we trained the CycleGAN in stage 3 to directly generate positive samples rather than adding the residual to the negative samples (Eq. 3). While we can change the whistle signal magnitude by altering the weight in Eq. 3 when the generator outputs a residual, in this setting the whistle signal's magnitude is determined entirely by the generator model. In contrast to the proposed WAS-GAN, we observed a decrease of 1.43 in mean ODS and a decrease of 1.44 in mean whistle extraction F1-score when we removed residual learning. This performance drop might be caused by the fact that the GAN needed to output background noise, which might increase the difficulty and instability of learning. Moreover, the variance of generated data decreases when the magnitude of whistle signals cannot be adjusted by the multiplier of the residual.
#### Iv-C2 Patch selection
This ablation experiment removed the quality assurance filter (Eq. 15 and Eq. 16) for whistles generated by the GAN. As a result, generated whistles similar to those surrounded by the red bounding boxes (Fig. 3) were included in the training data. The mean ODS dropped by 1.21 and the mean F1-score decreased by 1.44 after this change. This indicates that low-quality samples may reduce the performance of the whistle extraction network training, and our simple heuristic selection method effectively selects samples for the whistle extraction task.
Fig. 7: Positive samples (left) and corresponding whistle contour (right) generated by vanilla GAN. Multiple 64 \(\times\) 64 patches are concatenated.
#### Iv-C3 ABN
Because ABN stores statistics of real samples and generated samples separately, it may better stabilize the training when the generated samples and real samples have different distributions [31]. We evaluated the functionality of applying ABN with and without patch selection to our whistle extraction task; patch selection affects the generated sample distribution. After removal of ABN, the whistle extraction F1-score dropped by 0.98 with patch selection and by 1.97 without patch selection. This suggests that our patch selection method contributes to generating samples that are closer to the actual distribution of whistles. The performance change is consistent with our hypothesis that generated samples and real samples have a different distribution when few data are included in GAN training.
We also observed decreases in ODS of 0.68 with patch selection and 0.86 without patch selection after removal of ABN, which is a smaller decrease than that of the whistle extraction F1-score. While ODS demonstrates the whistle extraction model's performance at the spectrogram bin level, this metric does not always correlate linearly with the whistle extraction performance, because it ignores the signal continuity among bins. We observed that removing ABN frequently resulted in poorer continuity of predicted patches (e.g., Fig. 8d and 8e, first and third examples) and a greater number of false positives (e.g., Fig. 8d and 8e, second example). The whistle extraction F1-score also indicates the model's ability to recognize whistle signals under varying noise conditions or suppress false positives in high-energy regions of the spectrogram according to the context information (signals in the neighborhood). The generated whistle contour and signals may be less continuous than the real samples, which will train the whistle extraction model to ignore context information and make discontinuous predictions when ABN is removed. The comparison among the rightmost columns of Fig. 8(c), (d), and (e) also shows that use of our generated data reduces false positives.
#### Iv-C4 Stage-wise GAN
Instead of decomposing the sample generation into multiple stages, we used a single WGAN-gp with two output channels to generate whistle data, the spectrogram samples and their labels, similar to [53]. To deal with the increased learning difficulty of one WGAN, we increased the WGAN-gp capacity of the generator by using twice the number of hidden layers for each convolutional layer output as that in Section IV-B2. Examples of samples generated by this model are shown in Fig. 7. We saw clear artifacts and unnatural, sudden changes in the magnitude in adjacent bins on the spectrogram. The visual quality of generated samples was substantially worse than those generated by our stage-wise GAN in Fig. 6. We also observed a decrease of 1.04 in the whistle extraction F1-score compared to our proposed framework. Data augmentation with the low-quality samples still permitted the performance of the model to surpass that without augmentation for the time-frequency detection task. The negative effect of using corrupted data might be mitigated by the ABN layer.
#### Iv-C5 The third GAN
In this ablation study, we remove the third GAN and instead generate the positive sample \(I^{\prime}_{P}\) by simply adding the generated whistle contour \(I_{W}\) to the generated background noise \(I_{N}\). Following the work of Li et al. [19], we apply a Gaussian blur \(G\) with random standard deviation \(\sigma\) to the whistle contour and add the result to the background noise:
\[I^{\prime}_{P}=I_{N}+\lambda CLIP(I_{W}+G(Y,\ \sigma)) \tag{18}\]
where the clipping function \(CLIP(x)\) is
\[CLIP(x)=\begin{cases}0,&x\in(-\infty,0)\\ x,&x\in[0,1]\\ 1,&x\in(1,+\infty)\end{cases} \tag{19}\]
We also try a simple version which does not contain Gaussian blur:
\[I^{\prime}_{P}=I_{N}+\lambda I_{W} \tag{20}\]
where \(\lambda\) is a random weighting parameter. We use the same parameter setting as Li et al. [19], where \(\lambda\) and \(\sigma\) are uniform random numbers within the ranges of \([0.03,\ 0.23]\) and \([0.3,\ 1.3]\), respectively. As shown in Table III, both methods, Equation 18 and Equation 20, lead to inferior performance compared to the proposed stage-wise GAN method ("2500+GAN") that uses the third GAN. Considering that we use the same set of background noise and whistle contour shapes, this ablation study indicates that our proposed stage-wise GAN method generates whistle signals with a more realistic appearance, which contributes to the improved training of the whistle extraction model.
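For concreteness, the blending baselines in Equations 18 and 20 can be sketched in a few lines. This is a minimal illustration assuming the patches are float arrays in \([0,1]\); the function name and the use of scipy's Gaussian filter are our choices, not part of the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_positive_patch(noise_patch, whistle_contour, label=None, use_blur=True, rng=None):
    """Fuse a whistle contour I_W into a background-noise patch I_N (Eqs. 18-20)."""
    rng = rng or np.random.default_rng()
    lam = rng.uniform(0.03, 0.23)            # random weighting parameter lambda
    if use_blur and label is not None:
        sigma = rng.uniform(0.3, 1.3)        # random deviation of the Gaussian blur
        # Eq. 18: add the blurred contour mask Y to the whistle signal, then clip to [0, 1]
        signal = np.clip(whistle_contour + gaussian_filter(label, sigma), 0.0, 1.0)
    else:
        signal = whistle_contour             # simple version without blur (Eq. 20)
    return noise_patch + lam * signal
```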
### _Comparison with other whistle extraction methods_
In addition to our previous work on network-based whistle extraction [19], we have selected two representative and competitive whistle extraction methods for comparison. Both methods identify whistle candidate points by determining if the Signal-to-Noise Ratio (SNR) values are above a threshold on the denoised spectrogram. The Graph-Search method developed by Roch et al. [11] employs graphs consisting of candidate points, which are extended with new points based on how well these new candidate points align with the existing graph through polynomial fitting results. As a point of comparison, Gruden et al. [22] uses a probabilistic approach based on the sequential Monte-Carlo probability hypothesis density (SMCPHD). In addition to the result of all SMCPHD predictions, we also present the results of predictions that are longer than 150ms, as both Graph-Search and our method apply this length criterion for detection.

Fig. 8: Outputs of whistle extraction models. Models with the best whistle extraction F1-score among all parallel experiments in each training setting are visualized. (a) Spectrograms that are used as model input. (b) Ground truth. (c) Output of model trained with 2500 real positive patches and negative patches. (d) Output of model trained with 2500 positive patches and negative patches and GAN synthesized data. (e) Same as d, but the model does not have auxiliary batch normalization (ABN).
Our approach outperforms SMCPHD and Graph-Search in the whistle extraction F1-score by 4.48 and 11.93, respectively. Additionally, our GAN-generated samples improve the method in [19] by 0.46 in F1-score, 0.33 in precision, and 0.59 in recall. SMCPHD demonstrates the highest recall but the lowest precision in this comparison, which indicates its aggressive strategy of making more whistle predictions. Removing SMCPHD detections shorter than 150ms improves its precision by 19.3 but decreases its recall by 31.57. This suggests that SMCPHD prefers shorter segments of whistles in its predictions. Our GAN-generated samples help the learning-based model achieve a competitive performance advantage on this whistle extraction task; however, it should be noted that optimizing the other algorithms for this specific dataset may diminish these advantages.
## VI Conclusion and Discussion
We present a framework of stage-wise generative adversarial networks to generate training samples for whistle extraction. The data generation process consists of three stages: (i) generate time \(\times\) frequency spectrogram patches containing background noise; (ii) generate whistle contours and automatically discard poor-quality contours; (iii) fuse whistle signals with the background noise. Each stage is completed by one trained generative adversarial network. Compared to using a single vanilla GAN to generate whistle extraction data and labels, our stage-wise GANs can generate samples with fewer artifacts, which results in increased whistle extraction performance. We examined our data generation method through a series of experiments employing differing quantities of real and generated data, and found that using the generated data led to consistent performance gains.
The stage-wise design is likely the main contributor to the success of our data generation method. It separates the modeling of different components and of the relationships between components, which eases the learning of the GAN in each stage and provides a straightforward way to explore different combinations of components. In our case, we generated the background noise separately and were able to add different whistle signals to the same background. If we directly apply this idea to semantic segmentation data generation for natural images, we may first generate the appearance of the background scene and then generate objects on it according to a desired segmentation map. Extending this idea further, we may generate the appearance of different objects separately and then add them to the background. In this way, we can fully explore combinations of varying object and background appearances within the same segmentation layout. In our whistle extraction experiments, we did not use this extended idea, because the appearance of our foreground object (the whistles) is relatively simple, i.e., the variance in appearance is mainly rooted in the whistle contour shape and whistle magnitude. Therefore, we directly add whistle signals to the background using the third GAN in our framework. Our framework can be readily extended to extract calls of other whale species (e.g., baleen whales) and to other similar tasks (e.g., semantic image segmentation).
Though it may not affect the main contributions of this work, our data generation method can be improved in three aspects in the future. Firstly, we may use improved generative network architectures and training strategies. For example, we may use a generator architecture based on a style-transfer network, which improves the generated sample quality [67]. The discriminator augmentation mechanism proposed in [68] may help stabilize training in limited-data regimes. We may also explore generating larger patches of high quality with the method in [69]. Secondly, we may use real data in the data generation process to enrich the data variance. Real backgrounds and annotated whistle contours can be used as the input data of our GAN in the third stage, allowing us to generate whistle signals of novel shapes on real backgrounds or whistle signals of annotated contour shapes on GAN-generated backgrounds. Thirdly, we may improve the sample selection method. In this paper, we use a simple yet effective pixel-wise entropy method to select whistle contours of good quality. Metrics measuring texture or semantic information, such as [70], may better assess the quality of our generated samples and improve the sample selection process.
## Acknowledgments
Thanks to John A. Hildebrand and Simone Baumann-Pickering of Scripps Institution of Oceanography and Melissa S. Soldevilla of the National Oceanic and Atmospheric Administration (NOAA) for providing the acoustic data to the DCLDE 2011 organizing committee. We appreciate the effort of Shannon Rankin and Yvonne Barkley of NOAA in producing portions of the DCLDE 2011 annotations. We thank Michael Weise of the US Office of Naval Research for the support (N000141712867 and N000142112567). |
2310.11191 | Medical Text Simplification: Optimizing for Readability with
Unlikelihood Training and Reranked Beam Search Decoding | Text simplification has emerged as an increasingly useful application of AI
for bridging the communication gap in specialized fields such as medicine,
where the lexicon is often dominated by technical jargon and complex
constructs. Despite notable progress, methods in medical simplification
sometimes result in the generated text having lower quality and diversity. In
this work, we explore ways to further improve the readability of text
simplification in the medical domain. We propose (1) a new unlikelihood loss
that encourages generation of simpler terms and (2) a reranked beam search
decoding method that optimizes for simplicity, which achieve better performance
on readability metrics on three datasets. This study's findings offer promising
avenues for improving text simplification in the medical field. | Lorenzo Jaime Yu Flores, Heyuan Huang, Kejian Shi, Sophie Chheang, Arman Cohan | 2023-10-17T12:14:03Z | http://arxiv.org/abs/2310.11191v2 | Medical Text Simplification: Optimizing for Readability with Unlikelihood Training and Reranked Beam Search Decoding
###### Abstract
Text simplification has emerged as an increasingly useful application of AI for bridging the communication gap in specialized fields such as medicine, where the lexicon is often dominated by technical jargon and complex constructs. Despite notable progress, methods in medical simplification sometimes result in the generated text having lower quality and diversity. In this work, we explore ways to further improve the readability of text simplification in the medical domain. We propose (1) a new unlikelihood loss that encourages generation of simpler terms and (2) a reranked beam search decoding method that optimizes for simplicity, which achieve better performance on readability metrics on three datasets. This study's findings offer promising avenues for improving text simplification in the medical field.
## 1 Introduction
In recent years, text simplification has become an increasingly useful application of AI [14], particularly in healthcare [17, 18, 19], where text can be technical and difficult to understand. By automating this process, we can help healthcare professionals explain key medical texts (e.g. doctor's reports, findings) to patients. Previous work on text simplification in the medical domain has explored the use of pretrained language models [16, 15, 14, 17, 18, 19, 20], reinforcement learning [21], and zero-shot prompting [22, 23]. Despite this progress, simplification sometimes results in the generated text having lower quality and diversity [16, 22]. Furthermore, we find that some simplification models copy sentences from the source and thus do not sufficiently improve readability (see Appendix B).
In this work, we seek to further improve medical text simplification. We first propose a new unlikelihood loss that penalizes words in proportion to their reading level using a well-established readability index. Second, we propose a modified beam search method at decoding time to rerank intermediate candidates based on their readability. Despite their simplicity, our methods improve readability based on automated metrics (up to 2.43 points on Flesch-Kincaid) and human evaluation, while maintaining similar performance in terms of factual consistency and overall simplification.
We make the following contributions: (1) We propose a new form of unlikelihood loss based on well-established readability index to improve medical text simplification (2) We propose a decoding strategy that optimizes for readability in medical text simplification (3) We provide evaluation results for previous state-of-the-art on three datasets in terms of readability and factual consistency. We make our code publicly available at [https://github.com/ljylflores/simplification-project](https://github.com/ljylflores/simplification-project).
**Related Work.** Text simplification research primarily focuses on the sentence level [24, 25, 26, 27, 28], with some attempts at paragraph- or document-level datasets [15, 16]. Most datasets have been sourced from Wikipedia or news articles, which are already quite accessible. However, the medical field, laden with technical jargon, can greatly benefit from simplification. Initial methods in medical text simplification employed lexical and syntactic techniques [15, 16], while recent work includes finetuning language models like BART [16, 17] and a two-stage summarize-then-simplify approach [14]. Medical
simplification has also expanded to multilingual settings Joseph et al. (2023).
In this work, following Devaraj et al. (2021) we use unlikelihood (UL) training Welleck et al. (2020) to encourage the generation of simplified terminology. This strategy has been used in other domains to penalize inaccuracy Hu et al. (2023); Nan et al. (2022), complexity Devaraj et al. (2021); Lu et al. (2023), and redundancy Lagutin et al. (2021); Li et al. (2020) in text generation. Unlike Devaraj et al. (2021), our work adapts UL to optimize for both readability and factual consistency. To improve simplification, we also intervene at the decoding stage. Previous work uses modified decoding methods to address factual inconsistency Shi et al. (2023); King et al. (2022); Sridhar and Visser (2022), or optimize fluency and diversity in text generation Kriz et al. (2019); Hargreaves et al. (2021). Our work extends this by optimizing the decoder for readability in medical text simplification.
## 2 Methods
We propose two simple but effective approaches for improving medical text simplification, one during the training phase, and the other during decoding. Specifically, we propose a modified Unlikelihood Loss Welleck et al. (2020) to incorporate readability index and encourage the model to favor the generation of simpler words. Then, we introduce a decoding approach that evaluates and re-ranks the candidate beams by considering both readability and factuality. We detail these approaches below:
### Unlikelihood Loss for Simplification
Unlikelihood loss (UL) Welleck et al. (2020) is a training objective that forces unlikely generations to be assigned lower probability by the model (See Figure 1).
**Readability UL.** Following prior work Devaraj et al. (2021), we can use this loss to force the model to assign a lower probability to complex words. Unlike Devaraj et al. (2021), we use the Flesch-Kincaid (FK) readability score Kincaid et al. (1975) instead of model-predicted scores. The Flesch-Kincaid readability score is a numerical indicator that assesses the complexity of a text by estimating the US grade level needed for comprehension. Because FK considers syllable count and average phrase length, it serves as a good proxy metric even for incomplete sentences, by prioritizing text with shorter words and shorter phrases. We incorporate this score as follows: at generation step \(t\), we identify the word \(v\) in the vocabulary with the largest output probability; this is the word which the model is most likely to output at step \(t\). We compute the token-level UL for \(v\) by taking the product of the word's Flesch-Kincaid score and its standard UL term \(\log(1-p(v|\hat{y}_{<t}))\). The total UL (\(\textit{UL}_{R}\)) is the sum of the token-level penalties.
\[\textit{UL}_{R}=-\sum_{t=1}^{|\hat{y}|}\sum_{v=1}^{\mathcal{V}}\mathbbm{1}_{v,t}FK_{v}\log(1-p(v|\hat{y}_{<t}))\]
where \(\mathbbm{1}_{v,t}\) indicates whether word \(v\) has the largest output probability in the vocabulary at step \(t\), and \(FK_{v}\) is the Flesch-Kincaid score of word \(v\).
**Consistency UL.** As we discuss in §4, we find that \(\textit{UL}_{R}\) alone leads to hallucinations, hence we also penalize the model for generating unsupported words in some set \(e\) using an additional factual consistency UL (\(\textit{UL}_{C}\)).
\[\textit{UL}_{C}=-\sum_{t=1}^{|\hat{y}|}\sum_{v=1}^{\mathcal{V}}\mathbbm{1}_{v,t}\mathbbm{1}_{v,e}\log(1-p(v|\hat{y}_{<t}))\]
where \(\mathbbm{1}_{v,e}\) is an indicator for whether the word \(v\) is in the set of hallucinated words \(e\).
We determine the set \(e\) as follows: we identify the sequence which the model is most likely to generate, by finding the tokens with the highest logits at each generation step. We then filter this set to the tokens which do not exist in either the input text or the label. At this point, the set contains all words which the model is likely to generate but which are not present in the input/label. Hence, it may contain words which are factually or grammatically correct but do not match the gold summary. We would like to penalize only the tokens which we are sure are factually incorrect, hence we filter this set down to just entities using the Spacy en_core_web_lg NER model Honnibal and Montani (2017), which results in the entity set \(e\).
**Overall Loss.** The overall loss is a weighted sum of the negative log-likelihood (\(\mathcal{L}_{NLL}\)) and UL, where \(\lambda_{R}\) and \(\lambda_{C}\) are constants.
\[\mathcal{L}=\mathcal{L}_{NLL}+\lambda_{R}\textit{UL}_{R}+\lambda_{C}\textit{ UL}_{C}\]
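As a minimal illustration of how this objective could be assembled from the decoder logits, the following PyTorch-style sketch combines the NLL term with both unlikelihood terms. The tensors `fk_score` (per-token Flesch-Kincaid scores over the vocabulary) and `hallucinated_ids` (the entity set \(e\) as token ids) are assumed to be precomputed, and the function name is ours rather than part of any released code.

```python
import torch
import torch.nn.functional as F

def simplification_loss(logits, labels, fk_score, hallucinated_ids,
                        lambda_r=1.0, lambda_c=1.0):
    """logits: (T, V) decoder outputs; labels: (T,) gold token ids;
    fk_score: (V,) Flesch-Kincaid score per vocabulary token;
    hallucinated_ids: 1-D tensor of token ids in the unsupported-entity set e."""
    log_probs = F.log_softmax(logits, dim=-1)                  # log p(v | y_<t)
    nll = F.nll_loss(log_probs, labels)                        # standard NLL term

    top_ids = logits.argmax(dim=-1)                            # indicator 1_{v,t}: argmax token per step
    p_top = log_probs.gather(1, top_ids.unsqueeze(1)).squeeze(1).exp()
    log_one_minus = torch.log1p(-p_top.clamp(max=1 - 1e-6))    # log(1 - p(v | y_<t)), clamped for stability

    ul_r = -(fk_score[top_ids] * log_one_minus).sum()          # readability UL
    in_e = torch.isin(top_ids, hallucinated_ids).float()
    ul_c = -(in_e * log_one_minus).sum()                       # consistency UL

    return nll + lambda_r * ul_r + lambda_c * ul_c
```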
### Decoding for Simplification
Our proposed decoding strategy reranks candidate beams by their current readability and factual consistency scores, and retains the top \(n\) beams as the candidates for the next token (See Figure 2).
**Readability Score.** We optimize candidates' readability during decoding using Flesch-Kincaid (FK) Grade Level scores. FK represents the readability of a text measured by US grade level; hence, lower scores are more readable (Kincaid et al., 1975). These typically range from 0 to 18, but can extend past this range in practice. We compute the FK of candidate beams and cap it from 4 to 20, as we find that, qualitatively, beams with scores below 4 are equally simple and those above 20 are equally complex. Then, we normalize the score \(r_{F}(s)\) from 0 to 1, such that 0 is least readable and 1 is most readable.
**Consistency Score.** Like in UL training, we find that optimizing solely for readability in decoding may introduce hallucinations; hence we balance readability with consistency, as measured by BERTScore (Zhang et al., 2020). We find that beams with scores below 0.60 have equally poor factuality, hence we cap the score \(r_{B}(s)\) between 0.60 and 1.00 and normalize it.
**Composite Score.** We compute a composite score \(r(s)\) using an F1-like metric. Note that the score is merely used to rerank the candidates.
\[r_{F}(s)=\left\{\begin{array}{ll}1,&f_{F}(s)<4\\ \frac{20-f_{F}(s)}{20-4},&4\leq f_{F}(s)\leq 20\\ 0,&f_{F}(s)>20\end{array}\right\}\]
\[r_{B}(s)=\left\{\begin{array}{ll}\frac{f_{B}(s)-0.60}{0.40},&f_{B}(s)\geq 0.60\\ 0,&f_{B}(s)<0.60\end{array}\right\}\]
\[r(s)=\left(\frac{2r_{F}(s)r_{B}(s)}{r_{F}(s)+r_{B}(s)}\right)^{2}\]
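The capping and normalization above translate directly into a small scoring routine. The following sketch is a plain re-statement of the formulas rather than the original implementation, with `fk_grade` and `bertscore_f1` standing in for \(f_{F}(s)\) and \(f_{B}(s)\) computed on a partial beam.

```python
def readability_score(fk_grade):
    """r_F(s): map an FK grade level onto [0, 1] (lower grade -> higher score)."""
    if fk_grade < 4:
        return 1.0
    if fk_grade > 20:
        return 0.0
    return (20.0 - fk_grade) / (20.0 - 4.0)

def consistency_score(bertscore_f1):
    """r_B(s): map BERTScore onto [0, 1]; anything below 0.60 is equally poor."""
    if bertscore_f1 < 0.60:
        return 0.0
    return (bertscore_f1 - 0.60) / 0.40

def composite_score(fk_grade, bertscore_f1):
    """r(s): squared harmonic mean (F1-like) of the two normalized scores."""
    r_f, r_b = readability_score(fk_grade), consistency_score(bertscore_f1)
    if r_f + r_b == 0.0:
        return 0.0
    return (2.0 * r_f * r_b / (r_f + r_b)) ** 2
```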
**Ranking Every \(k\) Steps.** Computing metrics at each generation step can be inefficient, and the meaning or readability of the beam might not change after adding just one word. Hence, we reduce the frequency with which we perform the reranking to intervals of \(k\) (See Appendix E).
**Hallucination Heuristic.** We implement a heuristic to remove beams with unsupported entities. We identify entities with the Spacy en_core_web_lg NER model (Honnibal and Montani, 2017), check whether the entities appear in the source, and set the beam's score to zero if any of them do not.
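A minimal sketch of this heuristic is shown below; exact substring matching against the lowercased source is an assumption on our part, and the helper name is hypothetical.

```python
import spacy

nlp = spacy.load("en_core_web_lg")

def passes_entity_check(beam_text, source_text):
    """Return False (i.e., zero out the beam) if it mentions an entity absent from the source."""
    source_lower = source_text.lower()
    for ent in nlp(beam_text).ents:
        if ent.text.lower() not in source_lower:
            return False
    return True
```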
## 3 Experiments
**Datasets.** We run our experiments on three datasets: Cochrane (Devaraj et al., 2021) consists of 4,459 pairs of abstracts from the Cochrane Database of Systematic Reviews and their corresponding summaries written by domain experts. MedEasi (Basu et al., 2023) consists of 1,697 pairs of human-annotated sentences sourced from the Merck Manuals (Cao et al., 2020) and SimpWiki (van den Bercken et al., 2019). Finally, the Radiology Reports Dataset 1 consists of 2,269 radiology reports collected from a large urban hospital and simplified by medical residents and doctors.
Footnote 1: Internal dataset
**Baselines.** We compare against a BART-XSum (Lewis et al., 2020) model which we further fine-tune on our datasets, and state-of-the-art models by Lu et al. (2023); Devaraj et al. (2021), all of which we fine-tune on each of the three datasets; we chose BART-XSum to align with previous work, in order to provide an apples-to-apples comparison and isolate the impact of our methods. We also compare with the state-of-the-art large language model GPT-4-0314 (OpenAI, 2023)2.
Footnote 2: We set the system’s role as “You are a helpful assistant that simplifies text”, and the prompt as “Simplify this text:”.
**Evaluation Metrics.** We evaluate the readability, consistency, and overall performance as follows:

For readability, we use the standard FK (Kincaid et al., 1975) and ARI scores (Smith and Senter, 1967), which use the average word and sentence length to estimate the complexity of texts.

For factual consistency, we use BERTScore (Zhang et al., 2020) and GPT-Eval (Liu et al., 2023) (See Appendix D), as these correlated well with human judgement (Scialom et al., 2021; Li et al., 2022; Liu et al., 2023). For GPT-Eval, we evaluate 50 summaries, and report the fraction of samples in which a factual inconsistency was found.

We additionally use SARI (Xu et al., 2016), an edit-based metric for text simplification, and ROUGE-LSum (Lin, 2004) for overall fluency.

Figure 1: Training Diagram for Computing Unlikelihood Loss
## 4 Results
We fine-tune a BART model using our methods and present the results in Table 1; see Appendix A for implementation details.
**Effect of Unlikelihood Loss and Decoding.** On Cochrane and Radiology, our proposed methods achieve better readability scores in terms of FK and ARI. In particular, combining unlikelihood loss with the decoding strategy achieves a 2.43/1.74 point improvement in FK/ARI upon the next best model for Cochrane, and a 0.12/0.17 point improvement for Radiology. Note that in the radiology dataset, the sentences are typically short, resulting in a lower (better) baseline readability score. See sample comparison of outputs in Appendix B.
On MedEasi, our methods slightly underperform NapSS (Lu et al., 2023). We find that it sometimes generates phrases instead of full sentences, which lowers FK/ARI, since these scores depend on sentence length. In contrast, our models generate complete sentences, which improve fluency at the cost of worse (i.e. higher) FK/ARI scores.
Our methods generally improve over the prior SOTA in terms of SARI and BERTScore; interestingly, however, on the radiology dataset all methods underperform a fine-tuned BART model.
We observe that using UL or the decoder individually results in fewer hallucinations than both BART-UL (Devaraj et al., 2021) and NapSS (Lu et al., 2023) on Radiology, and against NapSS on MedEasi. When the baseline models perform well, we find that it is because they tend to copy information from the input, and hence are less prone to hallucinations. In contrast, our strategies force the model to use simpler words and not copy the input, but may introduce inconsistencies with the source. We confirmed this with an experiment: we compute the % 4-gram overlap of the model written summaries with the source, and observe that large portions of previous works' output is copied from the text, whereas output in our models are not (See Table 3).
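For reference, the copying analysis can be reproduced with a few lines; this is a minimal sketch assuming whitespace tokenization and lowercasing, which may differ slightly from the exact procedure used for Table 3.

```python
def four_gram_overlap(source, summary, n=4):
    """Percentage of the summary's n-grams that also appear in the source."""
    def ngrams(text):
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    src, summ = ngrams(source), ngrams(summary)
    return 100.0 * len(summ & src) / max(len(summ), 1)
```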
Note that some of the identified hallucination errors are relatively minor, as we find GPT-Eval to be very strict. For example, the phrase "26 self-treatments of 26 Chinese herbal medicine prescriptions" is found by GPT-Eval to be factually inconsistent with the source, which contains the phrase "26 self concocoted Chinese herbal compound prescriptions" (see Table 11 for the full example).
**Human Evaluation.** We conduct a human evaluation study to further investigate the results (See Table 2). We observe that our proposed UL and decoder improves readability over a fine-tuned BART-XSum model 43% and 27% of the time, whereas the previous SOTA NapSS (Lu et al., 2023) only demonstrated clear benefits 3% of the time. However, GPT-4 achieves the best performance, mainly because it is trained on human preference data and omits minor details, only keeping the main summary. In contrast, our models and previous SOTA tend to retain these minor details from the source, which human evaluators may find irrelevant.
We note that the low interrater agreement aligns with the ranges reported in previous work (Goyal et al., 2023), which reflects the subjective nature of human preference, given that simplicity and readability vary based on one's technical background and style preferences. While such variability is hard to avoid, the average proportions suggest that, overall, our methods significantly improved upon the previous SOTA (NAPSS).

Figure 2: Diagram for Modified Beam Search for Decoding for Simplification
**Effect of Individual Unlikelihood Losses.** We test using \(UL_{R}\) and \(UL_{C}\) separately (See Table 4). \(UL_{R}\) alone results in good readability but poor factual consistency, and vice versa for \(UL_{C}\), justifying the need for both losses to be used in conjunction.
## 5 Conclusion
In this paper, we propose methods to improve simplicity in medical text simplification; this improves the readability of generated summaries, and achieves comparable BERTScore and SARI scores. However, hallucination remains a challenge.
We explored augmenting the data with external knowledge (See Appendix C.2), but found no benefit. This may be because the sources and labels in the training data contain inconsistencies (Lu et al., 2023), which require further preprocessing. Addressing such hallucinations to generate more robust summaries is a critical future direction in medical text summarization, which we aim to explore further.
| Dataset | Model | FK \(\downarrow\) | ARI \(\downarrow\) | BScr \(\uparrow\) | GPT \(\downarrow\) | SARI \(\uparrow\) | RL \(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cochrane | BART-XSum | 12.19 | 13.83 | 0.871 | **10/50** | 35.64 | **44.76** |
| | GPT-4 | 9.97 | 10.73 | 0.870 | | 39.06 | 33.90 |
| | BART-UL (Devaraj et al., 2021) | 11.02 | 12.69 | **0.873** | 15/50 | 40.08 | 39.25 |
| | NAPSS (Lu et al., 2023) | 12.12 | 13.64 | 0.869 | 21/50 | 32.94 | 45.49 |
| | UL | 8.00 | 9.76 | 0.862 | 27/50 | 42.07 | 40.16 |
| | Decoder | 8.63 | 9.61 | **0.873** | 20/50 | 41.25 | 43.88 |
| | UL + Decoder | **7.54** | **8.99** | 0.866 | 38/50 | **42.12** | 41.11 |
| Radiology | BART-XSum | 3.28 | 2.89 | **0.963** | **19/50** | **78.67** | **80.09** |
| | GPT-4 | 3.85 | 4.41 | 0.862 | | 36.62 | 26.80 |
| | BART-UL (Devaraj et al., 2021) | 2.99 | 2.67 | 0.945 | 28/50 | 69.77 | 68.68 |
| | NAPSS (Lu et al., 2023) | 3.16 | 2.62 | 0.927 | 42/50 | 62.72 | 59.02 |
| | UL | 3.00 | 2.61 | 0.956 | **19/50** | 75.33 | 77.03 |
| | Decoder | 3.11 | 2.76 | 0.952 | 21/50 | 71.75 | 74.57 |
| | UL + Decoder | **2.87** | **2.50** | 0.953 | 23/50 | 72.40 | 74.83 |
| MedEasi | BART-XSum | 10.18 | 11.21 | 0.911 | 28/50 | 40.54 | 45.72 |
| | GPT-4 | 8.10 | 9.20 | 0.903 | | 38.07 | 33.28 |
| | BART-UL (Devaraj et al., 2021) | 10.57 | 11.28 | **0.915** | **2/50** | 35.33 | **47.91** |
| | NAPSS (Lu et al., 2023) | **5.66** | **6.25** | 0.868 | 33/50 | 34.04 | 24.35 |
| | UL | 8.47 | 9.67 | 0.907 | 23/50 | 42.25 | 43.30 |
| | Decoder | 8.27 | 9.66 | 0.908 | 26/50 | **42.66** | 42.91 |
| | UL + Decoder | 7.27 | 9.03 | 0.904 | 31/50 | 41.57 | 40.78 |

Table 1: Performance on Flesch-Kincaid (FK), ARI, BERTScore (BScr), GPT-Eval (GPT), SARI, and ROUGE-LSum (RL); SARI and RL are computed using the EASSE package (Alva-Manchego et al., 2019); All models except for GPT-4 are fine-tuned on the corresponding dataset in the row.
Table 2: Human Evaluation Results on 30 Examples from Cochrane. **Readability** is the % of instances where the model summary was strictly _more_ readable than a fine-tuned BART-XSum model's summary, \(\kappa\) is Fleiss-Kappa interrater agreement (Fleiss, 1971), \(\alpha\) is Krippendorff's alpha (Passonneau, 2006).

| Model | % 4-Gram Overlap |
| --- | --- |
| BART-XSum | 52.88% |
| BART-UL (Devaraj et al., 2021) | 39.30% |
| NAPSS (Lu et al., 2023) | 51.77% |
| UL (Ours) | 15.73% |
| Decoder (Ours) | 9.80% |

Table 3: An analysis of the % 4-gram overlap between the source text and model outputs reveals that previous models tend to copy directly from the source text, whereas our models do not, thereby simplifying and synthesizing the content.
### Limitations
One limitation of our work is the persistence of hallucinations in the output. Previous literature has shown that this often originates from inconsistencies between the source and text data. For example, a number of training labels in the Cochrane dataset (Devaraj et al., 2021) contain the phrase, "The evidence is up to date as of X", despite no mention of a date in the source (Lu et al., 2023). To this end, future work can adapt strategies from literature in summarization, which have shown that preprocessing (Adams et al., 2022; Wu et al., 2022) and augmenting (Yang et al., 2023) the data can mitigate such hallucinations.
Another limitation is that our paper examines medical text simplification very broadly, whereas expert knowledge may be needed to improve specific tasks. Hence, future work can analyze such methods on a more niche set of datasets (e.g. medical literature, patient reports, health-related news). Such work can be extended to other languages, for which multiple medical text simplification datasets have been developed (Trienes et al., 2022; Grigonyte et al., 2014; Cardon and Grabar, 2019, 2020; Joseph et al., 2023a).
Finally, we note that our inter-annotator agreement on the task of readability is particularly low; this reflects both how human preferences are diverse and how the task is highly subjective, as has been shown in other domains (Goyal et al., 2023). Moreover, readability not only differs by person, but also by domain and task. Future work can define domain-specific criteria, and recruit participants from the exact target populations which the text is meant to be simplified for.
## Ethics Statement
We use publicly available datasets and make our preprocessing and training scripts available. As mentioned in the limitations section, both our methods and previous methods still exhibit varying degrees of hallucination, and have yet to undergo domain-specific examination. Hence, we do not recommend these models be applied in a practical setting at the moment.
|
2306.17206 | FarSight: A Physics-Driven Whole-Body Biometric System at Large Distance
and Altitude | Whole-body biometric recognition is an important area of research due to its
vast applications in law enforcement, border security, and surveillance. This
paper presents the end-to-end design, development and evaluation of FarSight,
an innovative software system designed for whole-body (fusion of face, gait and
body shape) biometric recognition. FarSight accepts videos from elevated
platforms and drones as input and outputs a candidate list of identities from a
gallery. The system is designed to address several challenges, including (i)
low-quality imagery, (ii) large yaw and pitch angles, (iii) robust feature
extraction to accommodate large intra-person variabilities and large
inter-person similarities, and (iv) the large domain gap between training and
test sets. FarSight combines the physics of imaging and deep learning models to
enhance image restoration and biometric feature encoding. We test FarSight's
effectiveness using the newly acquired IARPA Biometric Recognition and
Identification at Altitude and Range (BRIAR) dataset. Notably, FarSight
demonstrated a substantial performance increase on the BRIAR dataset, with
gains of +11.82% Rank-20 identification and +11.3% TAR@1% FAR. | Feng Liu, Ryan Ashbaugh, Nicholas Chimitt, Najmul Hassan, Ali Hassani, Ajay Jaiswal, Minchul Kim, Zhiyuan Mao, Christopher Perry, Zhiyuan Ren, Yiyang Su, Pegah Varghaei, Kai Wang, Xingguang Zhang, Stanley Chan, Arun Ross, Humphrey Shi, Zhangyang Wang, Anil Jain, Xiaoming Liu | 2023-06-29T16:14:27Z | http://arxiv.org/abs/2306.17206v2 | # FarSight: A Physics-Driven Whole-Body Biometric System
###### Abstract
Whole-body biometric recognition is an important area of research due to its vast applications in law enforcement, border security, and surveillance. This paper presents the end-to-end design, development and evaluation of FarSight, an innovative software system designed for whole-body (fusion of face, gait and body shape) biometric recognition. FarSight accepts videos from elevated platforms and drones as input and outputs a candidate list of identities from a gallery. The system is designed to address several challenges, including (i) low-quality imagery, (ii) large yaw and pitch angles, (iii) robust feature extraction to accommodate large intra-person variabilities and large inter-person similarities, and (iv) the large domain gap between training and test sets. FarSight combines the physics of imaging and deep learning models to enhance image restoration and biometric feature encoding. We test FarSight's effectiveness using the newly acquired IARPA Biometric Recognition and Identification at Altitude and Range (BRIAR) dataset. Notably, FarSight demonstrated a substantial performance increase on the BRIAR dataset, with gains of \(+11.82\%\) Rank-20 identification and \(+11.3\%\) TAR@1% FAR.
## 1 Introduction
The aim of whole-body biometric recognition is to develop a person recognition system that will surpass the performance of state-of-the-art (SoTA) recognition of the face, gait, and body shape alone, specifically in the challenging, unregulated conditions present in full-motion videos (_e.g._, aerial surveillance). It encompasses functionalities such as person detection, tracking, image enhancement, the mitigation of atmospheric turbulence, robust biometric feature encoding, and multi-modal fusion and matching. The wide-ranging applications of whole-body recognition in fields like law enforcement, homeland security and surveillance, further underscore its importance [16, 47, 49, 64].
To achieve these goals, we design, prototype and evaluate a software system called **FarSight** for whole-body (face, gait and body shape) biometric recognition. As illustrated in Fig. 1, FarSight accepts as input a video captured at long-range and from elevated platforms, such as drones, and outputs a candidate list of identities present in the input video.
The design of FarSight confronts a number of novel challenges which have not been adequately addressed in computer vision and biometrics literature: i) Low-quality video frames due to long-range capture (hundreds of meters) and atmospheric turbulence (with the refractive index structure parameter \(C_{n}^{2}\) in ranges of \(10^{-17}\) to \(10^{-14}\) m\({}^{-2/3}\)[51]). ii) Large yaw and pitch angles (\(>20\) degrees) due to elevated platforms (altitudes of up to \(400\)m). iii) Degraded feature sets due to low visual quality (the pixel range for Inter-Pupillary Distance is around \(15{-}100\)). iv) Limited domain and paucity of training data due to diversity in the operating environments resulting in a large domain gap between training and test sets.

Figure 1: **FarSight** is a person recognition system that implements and fuses SoTA face, gait and body shape recognition modules in challenging conditions presented by full-motion videos.
To address these challenges, the design of FarSight heavily relies on modeling the _underlying physics_ of image formation, image degradation and human body models throughout the recognition pipeline. Further, we integrate the learned physics knowledge into the deep learning models for feature encoding. The four key modules of FarSight are i) image restoration, ii) detection and tracking, iii) biometric feature encoding, and iv) multi-modal fusion.
* Image restoration: Video streams captured at a large range are prone to degradation caused by atmospheric turbulence, platform vibration, and systematic aberrations. While many SoTA approaches are based on deep learning, we explicitly _model the physics of turbulence_. The development and validation of this modeling and simulation allow us to understand the imaging limit, estimate the turbulence parameters, and generate datasets capable of training learning-based restoration modules. Our approach offers enhanced explainability via estimated model parameters, thereby requiring fewer labeled training samples and better generalization to unseen environments.
* Detection and tracking: We develop a joint body and face detection module, which is able to associate face and body bounding boxes. Detected bounding boxes can then be fed into an appropriate feature extractor (embedding) without requiring a post-processing stage to match face and body bounding boxes.
* Biometric (face, gait and body shape) feature encoding. (i) Face: We leverage adaptive loss function, two-stage feature fusion, and controllable face synthesis models to effectively manage image quality variation, frame-level feature consolidation, and domain gap. (ii) Gait: In addition to extracting local gait features, we also take into account global correlations to enhance identification efficacy in diverse scenarios. (iii) Body shape: We learn a robust 3D shape representation that is invariant to clothing and body pose variations, leading to significant improvements in body matching.
* Multi-modal fusion: This module performs score-level fusion and score imputation in case of missing data (when no features could be extracted for one or more biometric modalities). Due to the challenge of video capture from drones, score imputation was found to provide a significant boost in whole-body recognition.
The innovations of the **FarSight** system are as follows:
\(\diamond\) Explicitly modeling the physics of imaging through turbulence and image degradation and integrating physics-based models into deep learning for image restoration.
\(\diamond\) Utilizing a joint body and face detection approach, easily integrated with upstream and downstream tasks.
\(\diamond\) An effective feature encoding for face, gait and body shape, along with a novel multimodal feature fusion approach, enabling superior recognition performance.
\(\diamond\) Utilizing the Biometric Recognition and Identification at Altitude and Range (BRIAR) dataset [10], we demonstrate the superior performance of the proposed FarSight system, and its robustness and effectiveness in whole-body biometric recognition under challenging conditions.
## 2 Related Work
**Whole-Body Biometrics Recognition.** Whole-body biometric recognition merges multiple physical traits, specifically face, gait, and body shape, to bolster identification accuracy, especially in challenging scenarios. Unlike traditional biometric systems focusing on a single trait [9, 12, 14, 17, 22, 26, 34, 59, 62], this comprehensive approach can mitigate inherent weaknesses and exploit the strengths of each individual trait, leading to enhanced recognition performance. Face recognition may be hampered by pose, illumination and expression variations and low image resolution, while gait recognition can be influenced by walking speed, clothing worn by the person, and occlusion. Body shape provides consistent biometric information, albeit subject to variations in clothing and posture. Recent literature [18, 25] has seen a growing interest in an integrated approach, with studies exploring various methodologies to effectively fuse these multiple biometric traits for whole-body person recognition. However, they fall short in providing comprehensive solutions that consider all three modalities in conjunction with other essential modules such as image restoration, detection and tracking, and multi-modal fusion. This offers opportunities for advancements in holistic biometric systems for robust and accurate whole-body biometric recognition in challenging video capture conditions.
**Physics Modeling of Imaging through Turbulence.** Turbulence is modeled as a stochastic phenomenon with its modern form largely based on Kolmogorov [28]. The atmosphere can be modeled as a turbulent volume that perturbs light propagating through it [46, 53]. Since the atmosphere is a stochastic phenomenon, its effect on an image is also stochastic. Drawing realizations from this distribution requires a simulator. Simulating these effects most often comes in the form of mirroring nature: a wave is numerically propagated through a simulated atmosphere. Methods that utilize numerical wave propagation in this manner are referred to as split-step propagation [4, 19, 20, 51]. Beyond
split-step, there exist ways of drawing effects based on a combination of empirical understanding and analysis such as [30, 42, 44, 45] with some recent modification and improvement [38, 39]. Due to a lack of open-source software, we report on a novel modeling approach.
**Image Restoration.** Successful biometric recognition relies upon robust feature extraction from sensed imagery [23]. With poor-quality imagery, image restoration serves as a way to extract robust and salient features and potentially boost recognition accuracy. However, restoration methods may _change_ the person's identity based on reconstructed features as shown in attack-based work [36]. Thus, reconstruction in this biometric context is slightly different. We prefer a reconstructed image that improves downstream recognition performance. Face deblurring in the presence of invariant blur has been shown to have positive results on downstream classification [52]. Furthermore, some efforts in restoration [29, 40] have suggested that reconstruction may indeed help in the case of atmospheric turbulence degraded images. These methods, however, rely only on single frames, therefore, in the FarSight system we use multi-frame fusion to improve the quality of degraded images.
**Detection and Tracking.** Face detection has been extensively studied in the field of computer vision, with numerous endeavors aimed at detecting faces across a diverse array of scenes. Various methodologies, as presented in [11, 68, 31], have successfully employed different approaches for detecting faces in unconstrained settings. Building upon this, pedestrian tracking is another significant module in biometrics. A multitude of strategies have been developed to improve both the efficiency and effectiveness of tracking. Among them, tracking by detection paradigms has emerged as the leading approach due to its adaptability and superior performance. Motion-based methods [67, 3, 61] employ spatiotemporal information to enhance object association and improve tracking accuracy. Appearance-based methods [55, 56, 60] introduce various appearance features to facilitate accurate object matching.
**Multi-Modal Biometric Fusion.** Fusion relies on leveraging encoded biometric features or scores from multiple matchers. An example of a score-level fusion method is the sum rule, where normalized scores are weighted and summed to generate the fused score to be used for performance evaluation [21, 48].
## 3 FarSight: System Architecture
### Overview of FarSight
As illustrated in Fig. 2, FarSight operates through six modules: detection and tracking, image restoration, face, gait, and body shape feature extraction, and multi-modal fusion. These modules work within a scalable testing framework, optimizing GPU usage via adaptable batch sizes. An API utility facilitates communication between the framework and external systems, transmitting video sequences from configuration files to the framework via Google RPC calls. Essential features extracted from these sequences are stored in HDF5 files for performance evaluation.
The workflow starts with the detection and tracking modules processing the input video sequences. Bounding boxes indicating regions of interest (RoI) are cropped and sent to the gait and body modules, while the face images are restored before processing. The body and gait modules generate unique per-sequence feature vectors through average pooling, while the face module uses CAFace [27] to consolidate features across video sequences. A probe is composed of a single video segment per subject. On the other hand, gallery enrollments - consisting of multiple video sequences and still images - are processed and combined into a singular subject-level feature vector for each modality.
### Challenges in FarSight
The FarSight system faces distinct challenges. Captured videos often suffer from poor quality due to long-range capture and atmospheric turbulence. Elevated platforms introduce large yaw and pitch angles, making data analysis more challenging. Extracting identity features is affected by low visual quality, and the training data's limited domain further complicates the learning task. Further, the lack of transparency in deep learning models poses a significant issue. Fig. 3 illustrates these challenges with examples from close-range, mid-range (\(100\)-\(500\)m), and UAV-captured scenarios.
Figure 2: The proposed FarSight system incorporates six components: _detection and tracking, image restoration, face, gait, and body shape feature extraction, and multi-modal biometric fusion_.
### Physics Modeling of Turbulence
Atmospheric turbulence is an unavoidable degradation when imaging at range. It is often computationally modeled by splitting the continuous propagation paths into segments via phase screens as illustrated in Fig. 4. While accurate, the spatially varying nature of the propagation makes this a computationally demanding process [19, 20, 51].
More recent works have explored the possibility of _propagation-free_ models where the turbulence effects are implemented as random sampling at the _aperture_[7, 8, 37]. As shown in Fig. 4, every pixel on the aperture is associated with a random phase function which has a linear representation using the Zernike polynomials [41]. By constructing the covariance matrix of the random process, we can draw samples of the Zernike coefficients to enforce spatial and modal correlations. Propagation-free simulation has enabled \(1000\times\) speed up compared to the split-step propagation methods while maintaining accuracy. Therefore, we adopt this simulation approach in our system.
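To make the sampling step concrete, a propagation-free draw of turbulence effects reduces to sampling a correlated Gaussian vector. The sketch below assumes the spatial-modal covariance matrix of the Zernike coefficients has already been constructed from the optical and turbulence parameters (which is where most of the modeling effort lies); the function name is illustrative.

```python
import numpy as np

def sample_zernike_coefficients(cov, rng=None):
    """Draw one realization of spatially and modally correlated Zernike coefficients.

    cov: (P*M, P*M) covariance over P aperture/pixel positions and M Zernike modes.
    """
    rng = rng or np.random.default_rng()
    jitter = 1e-10 * np.eye(cov.shape[0])        # small diagonal term for numerical stability
    chol = np.linalg.cholesky(cov + jitter)      # "color" white noise with the covariance
    return chol @ rng.standard_normal(cov.shape[0])
```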
For the generation of training data, realistic optical and turbulence parameters significantly influence the appearance of the generated defects. Therefore, our datasets are synthesized according to the metadata of various long-range optical systems. Our training dataset also consists of both dynamic and static scenes [24, 50, 66].
### Detection and Tracking
Our detection module, based on [54], uses a two-stage R-CNN detector [43] with a modified ResNet50 backbone to associate face and body bounding boxes [54]. This is done using associative embeddings to match faces and bodies, learned via **pulling** and **pushing** loss functions [13]. The pulling loss brings embeddings of the same subject closer in the presence of intra-subject variations, calculated as body-to-body, face-to-face, and face-to-body pairs. These are combined using a weighted sum of body-to-face loss, and the sum of face-to-face and body-to-body losses. Pushing loss, in contrast, pushes away bounding boxes assigned to different subjects to account for inter-subject variations. It is divided into three losses between pairs of body boxes, pairs of face boxes, and body-face pairs. These losses are combined by a weighted sum. The final associative embedding loss used to optimize these embeddings is a weighted sum of the pulling and pushing losses.
The module also predicts "head hook" coordinates for every subject to improve body and face association. The head hook loss is a weighted sum of the Smooth L1 loss [15] and a scale-invariant angular loss. The final association between body and face bounding boxes is based on similarity metrics, including embedding distance, head hook distance, and confidence scores. The RBF kernel is used for both the embedding distance and head hook distance. The confidence scores factor directly into the association loss to mitigate associating low-confidence bounding boxes with high-confidence ones. Finally, all these metrics are integrated into a final association metric. If a face prediction's maximum similarity score with any body is below a set threshold, it is concluded that the subject's face is not visible.
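A hedged sketch of how the final face-body association could be scored at inference time is given below; the RBF bandwidths, the multiplicative combination of similarities with the confidences, and the dictionary keys are our assumptions, not the exact rule used in FarSight.

```python
import numpy as np

def associate_faces_to_bodies(face_dets, body_dets,
                              sigma_emb=1.0, sigma_hook=50.0, thresh=0.3):
    """Match each face detection to the best body detection, or to None if unsupported."""
    matches = []
    for f in face_dets:
        scores = []
        for b in body_dets:
            # RBF kernel on embedding distance and on head-hook distance
            s_emb = np.exp(-np.sum((f["embed"] - b["embed"]) ** 2) / (2 * sigma_emb ** 2))
            s_hook = np.exp(-np.sum((f["center"] - b["head_hook"]) ** 2) / (2 * sigma_hook ** 2))
            scores.append(s_emb * s_hook * f["conf"] * b["conf"])   # fold in confidences
        best = int(np.argmax(scores)) if scores else -1
        # below the threshold, the face is treated as not reliably associated (not visible)
        matches.append(best if scores and scores[best] >= thresh else None)
    return matches
```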
### Image Restoration
Image restoration aims to reverse the image formation process, as described by the equation [6]

\[I(\mathbf{x})=[\mathcal{B}\circ\mathcal{T}](J(\mathbf{x})), \tag{1}\]

where \(\mathcal{T}\) is the tilt operator and \(\mathcal{B}\) represents the blur operation, with \(J(\mathbf{x})\) and \(I(\mathbf{x})\) as the input and output images, indexed by position \(\mathbf{x}\), respectively. In this work, we have considered a single-frame image restoration method as well as a multi-frame method, both aiming to invert \(\mathcal{T}\) and \(\mathcal{B}\).

Figure 4: Turbulence modeling. Comparing split-step [5, 19] and Zernike-based simulations [7, 8, 37].

Figure 3: Example frames in the BRIAR dataset [10] showing the same subject (identity) under various conditions, including different standoff distances, clothing, and image quality due to the turbulence effect. The columns represent different scenarios: controlled conditions, close range, \(100\)m-set1, \(100\)m-set2, \(200\)m, \(400\)m, \(500\)m, and UAV capture, respectively.
Our restoration methods for biometrics focus on preserving identity, using lightweight, real-time techniques. These are divided into single-frame and multi-frame restorations. The former provides lower throughput but relies on strong priors without altering the subject's identity. Multi-frame restoration, on the other hand, utilizes temporal cues, allowing weaker priors but requiring larger throughput.
Our multi-frame approach uses the Recurrent Turbulence Mitigation network (RTM), a bi-directional, multi-scale convolutional recurrent network with a novel Multi-head Temporal Channel self-attention (MTCSA) layer (Fig. 5).
### Multi-Modal Biometric Feature Encoding
We describe here our methods for obtaining biometric features from the face, gait and body shape, as well as the multi-modal fusion technique applied to generate fused scores for evaluation on the metrics described in Sec. 4.
#### 3.6.1 Face
Our face recognition pipeline integrates the techniques of Adaptive Margin Function (AdaFace [26]), Cluster and Aggregate (CAFace [27]), and Controllable Face Synthesis Model (CFSM [33]), addressing the challenge of recognizing faces across variable image qualities and media types.
Initially, AdaFace [26], an adaptive loss function strategy, helps manage low-quality face datasets. It adjusts the emphasis on misclassified samples based on image quality, effectively dealing with a wide range of image quality levels. Next, CAFace [27], a two-stage feature fusion technique, is crucial for integrating features from multiple frames. By grouping inputs to a few global cluster centers and subsequently fusing these features, CAFace maintains order invariance while combining multiple frames. Lastly, CFSM [33] helps bridge domain gaps between training and testing scenarios. It replicates the target datasets' distribution in a style latent space, generating synthetic face images similar to the target evaluation datasets, thereby reconciling the disparity between high-quality training data and lower-quality surveillance images. The combination of AdaFace, CAFace, and CFSM effectively navigates the challenges of face recognition across diverse image qualities, leveraging feature extraction, feature integration, and synthetic image generation to improve face recognition performance.
#### 3.6.2 Gait
We propose an innovative framework, GlobalGait, to address the limitations of existing gait recognition models that mainly focus on local features and often overlook vital global correlations. GlobalGait enriches these local features by factoring in global correlations across a gait sequence, thereby boosting recognition accuracy.
Given an input sequence, GlobalGait uses a CNN backbone to extract local spatiotemporal features, and then divides them into source and target features. These feature maps are projected into tokens for each joint, using sampling around each 2D joint. We employ a stack of multi-head self-attention layers to model the sequences' spatial and temporal correlations. Further, GlobalGait attempts to reconstruct target frame pixels based on source sequences and to choose the correct target sequence from a set of candidates. This approach harnesses the spatial and temporal correlations in gait recognition, with these supervisory signals guiding the model to learn more distinct gait features.
#### 3.6.3 Body Shape
Our method for encoding body features harnesses the power of Person Re-ID [2, 32, 58, 63], with the primary aim to effectively capture static body features. Re-ID can be complex due to variations in human poses and clothing. We propose the naked 3D body shape as the most reliable cue for body matching, despite the considerable challenges in reconstructing it from a 2D image. Taking cues from advancements in 3D feature learning, we introduce a pipeline to disentangle identity (naked body) from non-identity components (pose, clothing shape and texture) of 3D clothed humans. The core of our approach lies in a novel two-layer neural implicit function that disentangles these components in latent representations.

Figure 5: Multi-frame image restoration by the recurrent network for turbulence mitigation (RTM).

Figure 6: Overview of the proposed body shape feature encoding framework. In the body matching process, the identity shape features \(\mathbf{z}_{id}\) are utilized for matching.
In particular, as illustrated in Fig. 6, the proposed method relies on a new joint two-layer neural implicit function that represents \(3\)D humans, where identity, clothing shape, and texture components are represented in disentangled latent representations. Formally, given a training set of \(T\) images \(\{\mathbf{I}_{i}\}_{i=1}^{T}\) and the corresponding identity labels \(\{l_{i}\}_{i=1}^{T}\), the image encoder \(\mathcal{E}(\mathbf{I}):\mathbf{I}\rightarrow(\mathbf{z}_{id},\mathbf{z}_{cloth},\mathbf{z}_{tex})\) predicts the identity shape code of the naked body \(\mathbf{z}_{id}\in\mathbb{R}^{L_{id}}\), clothed shape code \(\mathbf{z}_{cloth}\in\mathbb{R}^{L_{cloth}}\) and texture code \(\mathbf{z}_{tex}\in\mathbb{R}^{L_{tex}}\). A joint two-layer implicit model decodes the latent codes to identity shape, clothing shape, and texture components, respectively. Additionally, PoseNet \(\mathcal{P}\) predicts the camera projection \(\mathbf{\Omega}\) and SMPL body pose \(\theta\): \((\mathbf{\Omega},\theta)=\mathcal{P}(\mathbf{I})\). Mathematically, the learning objective is defined as:
\[\operatorname*{arg\,min}_{\mathcal{E},\mathcal{E},\mathcal{C},\mathcal{T}} \sum_{i=1}^{T}\left(\left|\hat{\mathbf{I}}_{i}-\mathbf{I}_{i}\right|_{1}+ \mathcal{L}_{cla}(\mathbf{z}_{id},l_{i})\right), \tag{2}\]
where \(\mathcal{L}_{cla}\) is the classification loss. \(\hat{\mathbf{I}}\) is the rendered image. This objective enables us to jointly learn accurate \(3\)D clothed shape and discriminative shape for the naked body.
We utilize CAPE [35] and THuman2.0 [65] datasets to train our model, generating individual identity shape code, clothing shape code, and texture code for each training sample. For inference, the encoder processes body images to extract identity shape features \(\mathbf{z}_{id}\). The Cosine similarity of two \(\mathbf{z}_{id}\) determines if two images belong to the same person. This method, excluding the explicit 3D reconstruction during inference, is highly efficient.
#### 3.6.4 Multi-Modal Biometric Fusion
To produce a comprehensive probe-gallery score from multiple biometric modalities, we initially calculate per-modality scores for each probe-gallery pair. For the face, gait, and body, we create a singular subject-level feature using CAFace (Sec. 3.6.1), mean fusion on video-only gallery features, and mean fusion on whole-body media, excluding face-only images, respectively. This exclusion is necessary due to the prevalence of face-only gallery images and the unsuitability of gait recognition on single images. For the body, we exclude face-only images by setting a threshold for acceptable vertical padding in preprocessed images.
Probe features are then compared to gallery features, and an equal-weighted sum score fusion is employed to generate a single score from the cosine similarity scores of the three modalities. When feature extraction fails for one or more modalities, we impute missing scores to the middle of the score range, which is zero for the cosine similarity metric used in generating probe-gallery scores.
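A minimal sketch of this fusion rule is given below, assuming each modality exposes a single probe and gallery feature vector; the averaging (equivalent to an equal-weighted sum up to a constant factor) and the dictionary-based interface are our simplifications, not the exact FarSight API.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(probe_feats, gallery_feats, modalities=("face", "gait", "body")):
    """Equal-weighted fusion of per-modality cosine similarities with imputation.

    probe_feats / gallery_feats: dicts mapping a modality name to a feature
    vector, or to None when feature extraction failed for that modality."""
    scores = []
    for m in modalities:
        p, g = probe_feats.get(m), gallery_feats.get(m)
        # impute a missing modality to 0, the middle of the cosine range [-1, 1]
        scores.append(cosine(p, g) if p is not None and g is not None else 0.0)
    return float(np.mean(scores))
```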
## 4 Experimental Results
All modules are run together in a configurable container environment on PyTorch version 1.13.1. We perform experiments on \(8\) Nvidia RTX A6000s, with \(48\) GiB of VRAM, over the course of \(48\) hours on \(2\) dual-socket servers with either AMD EPYC \(7713\) \(64\)-Core or Intel Xeon Silver \(4314\) \(32\)-Core processors.
BRIAR Datasets and Protocols.The IARPA BRIAR dataset [10], comprised of two collections--BRIAR Government Collections 1 (BGC1) and 2 (BGC2), is a pioneering initiative to support whole-body biometric research. It addresses the necessity for broader and richer data repositories for training and evaluating biometric systems in challenging scenarios. BRIAR consists of over \(350,000\) images and \(1,300\) hours of videos from \(1,055\) subjects in outdoor settings. The dataset, with its focus on long-range and elevated angle recognition, provides a fertile ground for algorithm development and evaluation in biometrics.
The dataset, in accordance with Protocol V2.0.1, has been partitioned into a training subset (BRS, \(411\) subjects) and a testing subset (BTS, \(644\) subjects), with non-overlapping subjects. Regarding the test subjects, we utilize the controlled images and videos as gallery, and the field-collected data as probe. The protocol provides for \(644\) subjects for closed-set search and includes two subsets of \(544\) subjects each for open-set search, both containing \(444\) distractors who lack corresponding probe subjects. In total, the probe sets contain \(20,432\) templates and are divided into two disjoint categories: FaceIncluded and FaceRestricted. The FaceIncluded set comprises data where the face is clearly visible, occupying a minimum of \(20\) pixels in head height. On the other hand, the FaceRestricted set consists of data that presents challenges such as occlusion, low resolution.
Metrics.We employ BRIAR Program Target Metrics [1] to measure FarSight's performance across multiple modalities and their fusion: verification (TAR@1% FAR), closed-set identification (Rank-20 accuracy), and open-set identification (FNIR@1% FPIR), allowing for a thorough examination of its performance across various settings.
Baselines.In our study, we utilize established benchmarks for each biometric modality to ensure a comprehensive comparison: For facial recognition, we utilize AdaFace coupled with an average feature aggregation strategy, a popular approach known for its excellent performance [26]. For gait recognition, we adopt GaitBase [14], a solution known for its efficacy. For body shape modality, we employ CAL [17], a SoTA cloth-changing person re-identification method. These benchmarks provide an excellent basis to
fairly evaluate our proposed method.
### Evaluation and Analysis
In Tab. 1, we provide a thorough comparison of our approaches and the baselines for each modality. The detailed comparison analysis clearly highlights the superior performance of our proposed FarSight system across all performance metrics when compared to the baselines. For each modality, our module outperforms the baselines by a significant margin. For instance, in the verification metric (TAR@1% FAR) on FaceIncluded sets, FarSight (Face) sees an increase of \(11.81\%\). For gait, there's an improvement of \(13.65\%\), and for body shape, we see an improvement of \(2.13\%\). Further, upon fusion, we gain an additional improvement of \(4.05\%\) (\(69.15\%\to 85.93\%\)).
The FarSight system's effectiveness across various modalities and distances is evident in Tab. 2, displaying each modality's distinct robustness at different ranges. Especially noteworthy is the integrated FarSight model, exhibiting an outstanding accuracy consistently above \(88\%\) across all investigated ranges. The observed increase in face recognition accuracy with distance is tied to the growing similarity between sensors used in training and testing data. As this sensor alignment increases with distance, it reduces the domain gap, leading to enhanced performance. This finding underscores the critical role of sensor type and domain adaptation in optimizing biometric recognition.
#### 4.1.1 Face
The efficacy of including various modules in the face recognition pipeline is shown in Tab. 3. We initially use the combination of the AdaFace IR101 backbone with average feature aggregation, which has shown good performance on low-quality imagery [26]. CFSM [33] adds a performance improvement by adapting training data to a low-quality image dataset, WiderFace [57] (\(+1.18\) in TAR@1% FAR). CAFace [27] is a feature fusion method that improves upon the basic average pooling (+4.16). Lastly, finetuning the model on the BGC1 training dataset further improves the performance (+6.47). The inclusion of an RTM-based image restoration model, as demonstrated in Table 4, leads to a further performance gain.
#### 4.1.2 Gait
In our gait recognition experiments, we observe consistent improvements compared to GaitBase [14], our baseline, across all four metrics. Our findings demonstrate significant enhancements in the model's ability to accurately verify individuals, with the TAR@1% FAR reaching an impressive improvement of \(11.90\%\) in FaceRestricted verification and \(13.65\%\) in FaceIncluded verification. Further, the rank-20 metric exhibits notable advancement, showcasing a remarkable increase of \(6.61\%\). Lastly, our model showcases improved performance in open-set search, achieving a noteworthy reduction of \(3.29\%\) in FNIR@1% FPIR. These promising outcomes reaffirm the efficacy of FarSight (Gait) to extract more discriminative features based on global features and highlight its potential for reliable and robust biometric identification in real-world applications.
| Method | TAR@1% FAR (FaceRestricted) | TAR@1% FAR (FaceIncluded) | Rank-20 (FaceRestricted) | Rank-20 (FaceIncluded) | FNIR@1% FPIR (FaceRestricted) | FNIR@1% FPIR (FaceIncluded) |
|---|---|---|---|---|---|---|
| Baseline-AdaFace [26] | 9.61 | 66.20 | 14.97 | 73.85 | 96.22 | 70.64 |
| **FarSight (Face)** | 25.04 | 78.01 | 31.78 | 84.12 | 92.11 | 57.39 |
| Baseline-GaitBase [14] | 44.33 | 45.55 | 64.90 | 68.03 | 98.53 | 98.79 |
| **FarSight (Gait)** | 56.23 | 59.20 | 72.55 | 74.64 | 95.24 | 95.31 |
| Baseline-CAL [17] | 48.58 | 51.87 | 66.27 | 71.18 | 96.98 | 96.17 |
| **FarSight (Body)** | 51.02 | 54.00 | 69.18 | 72.91 | 96.95 | 96.23 |
| **FarSight (Face+Gait)** | 57.30 | 83.98 | 75.15 | 91.19 | **87.64** | **54.55** |
| **FarSight (Face+Body)** | 54.68 | **85.93** | 73.97 | **93.13** | 89.57 | 58.99 |
| **FarSight (Gait+Body)** | 58.91 | 62.08 | 73.06 | 75.57 | 94.86 | 94.74 |
| AdaFace+GaitBase+CAL | 51.70 | 69.15 | 65.57 | 80.19 | 94.92 | 67.53 |
| **FarSight** | **63.00** | 81.88 | **77.39** | 91.74 | 90.66 | 67.77 |

Table 1: Whole body biometric recognition results on the BRIAR dataset (N=\(644\) in retrieval and \(544\) in open search). Verification (1:1) is measured by TAR@1% FAR (higher is better), rank retrieval (1:N, closed search) by Rank-20 accuracy (higher is better), and open search (1:N) by FNIR@1% FPIR (lower is better).
| FaceIncluded | TAR@1% FAR | Rank-20 | FNIR@1% FPIR |
|---|---|---|---|
| AdaFace [26] | 66.20 | 73.85 | 70.64 |
| + CFSM [33] | 67.38 | 77.22 | 68.51 |
| + CAFace [27] | 71.54 | 78.57 | 61.77 |
| + BRS1 (**FarSight (Face)**) | 78.01 | 84.12 | 57.39 |

Table 3: Ablation of different parts in face recognition pipeline.

| FaceIncluded | TAR@1% FAR |
|---|---|
| Face w/o Restoration | 72.39 |
| Face w/ Restoration | **72.57** |

Table 4: Face recognition with and without image restoration.
#### 4.1.3 Body
Tab. 1 clearly demonstrates that our FarSight (body) consistently outperforms the CAL baseline on both FaceRestricted and FaceIncluded sets, as evidenced in both verification and Rank retrieval metrics. In Fig. 7, we show successful and failed matches in body matching. Our method copes well with clothing differences, but struggles with motion blur, turbulence, or hairstyle changes. Misidentifications in impostor pairs often happen due to similar body shapes.
#### 4.1.4 Multi-Modal Fusion
As seen in Tab. 1, the fusion of the three modalities improves over the next best-performing algorithm in the FaceRestricted condition (\(+4.09\) in TAR@1% FAR and \(+2.24\) in Rank-20). We also see the strength of combining the face and body modalities in the FaceIncluded condition, where face and body fusion excels in both verification and rank retrieval (\(+1.95\) TAR@1% FAR and \(+1.94\) Rank-20) over the next best algorithm. The open search metric performs best when fusing face and gait, scoring \(87.64\)% and \(54.55\)% in FNIR@1% FPIR for both the FaceRestricted and FaceIncluded conditions, which is in part due to the challenge that single body and gait modalities face on open-set search.
#### 4.1.5 System Efficiency
**Template Size.** Feature vectors for face, gait and body are of sizes \(512\), \(8704\) and \(6144\). Multiplying these values by \(8\) and dividing by \(1024\) provides the template size: \(4\)KB, \(68\)KB and \(48\)KB, respectively, and \(120\)KB in total.
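The arithmetic behind these template sizes is reproduced below, assuming 8 bytes per feature element (as implied by the factor of 8).

```python
dims = {"face": 512, "gait": 8704, "body": 6144}              # feature vector lengths
sizes_kb = {name: d * 8 / 1024 for name, d in dims.items()}   # 8 bytes per element
print(sizes_kb)                # {'face': 4.0, 'gait': 68.0, 'body': 48.0}
print(sum(sizes_kb.values()))  # 120.0 KB per subject template
```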
**Processing Speed.** The speed of our FarSight system, as outlined in Tab. 5, is examined under stringent conditions to gauge both the efficiency of individual components and the overall pipeline. This system operates asynchronously and concurrently, similar to the actual deployment conditions. To precisely measure efficiency, the components are assessed in a serialized manner, even though they typically run in parallel. We conduct this assessment using representative sample videos, encompassing \(2400\) frames of \(1080\)p and \(1200\) frames of \(4\)K video, each set originating from four distinct subjects. The restoration process is primarily directed towards detected faces, which implies that any instances of undetected faces would naturally lead to reduced restoration and face module processing times. A notable observation is that our system can successfully detect bodies in \(95\)% of all frames and faces in \(26\)% of frames.
## 5 Future Research
**Image restoration.** We plan to expand our optical simulation tool to handle higher levels of distortion and explore "simulation-in-the-loop" techniques. Our goal is also to balance fidelity and perceptual quality by integrating generative and discriminative restoration methods.
**Detection and tracking.** We plan to refine our current detector or shift to YOLO-based detectors. We are also considering using separate face detectors on subject bounding boxes to reduce latency.
**Biometric feature encoding.** In our face module, we are exploring the potential of adaptive restoration based on the available information from given frames, to avoid any negative impact on performance. For our gait module, our goal is to delve further into the usage of 3D body shape and pose information, which is currently under-explored in gait recognition. This involves combining shape parameters with global features to generate 3D-aware shape features and enriching local features with 3D pose information. In our body module, we will improve 3D naked body reconstruction by taking advantage of multiple frames. Moreover, we will investigate whether 3D poses could provide complementary information to 2D skeletons or imageries.
**Multi-modal fusion.** We plan to further enhance our technique for fusing face, gait, and body features, to better exploit the strengths of each modality and alleviate challenges from the long tail of body and gait scores in the non-match open search distributions.
Figure 7: Successful and failure examples of body matching.
| Module | 1080p | 4K | Average Combined |
|---|---|---|---|
| Detection & Tracking | 20.0 | 34.7 | 24.9 |
| Restoration | 6.1 | 5.3 | 5.9 |
| Face | 2.6 | 2.2 | 2.5 |
| Gait | 3.3 | 2.5 | 3.0 |
| Body | 3.7 | 3.1 | 3.5 |
| FarSight System (fps) | 8.4 | 6.3 | 7.8 |

Table 5: FarSight module processing times (sec.) and system efficiency (fps) for 1080p (1920x1080) and 4K (3840x2160) probes.
## 6 Conclusion
We develop and prototype an end-to-end whole-body person recognition system, **FarSight**. Our solution attempts to overcome hurdles such as low-quality video frames, large yaw and pitch angles, and the domain gap between training and test sets by utilizing the physics of imaging in harmony with deep learning models. This innovative approach has led to superior recognition performance, as demonstrated in tests using the BRIAR dataset. With the far-reaching potential to enhance homeland security and forensic identification, the FarSight system paves the way for the next generation of biometric recognition in challenging scenarios.
**Acknowledgments.** This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2022-21102100004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2305.01134 | PGrad: Learning Principal Gradients For Domain Generalization | Machine learning models fail to perform when facing out-of-distribution (OOD)
domains, a challenging task known as domain generalization (DG). In this work,
we develop a novel DG training strategy, we call PGrad, to learn a robust
gradient direction, improving models' generalization ability on unseen domains.
The proposed gradient aggregates the principal directions of a sampled roll-out
optimization trajectory that measures the training dynamics across all training
domains. PGrad's gradient design forces the DG training to ignore
domain-dependent noise signals and updates all training domains with a robust
direction covering main components of parameter dynamics. We further improve
PGrad via bijection-based computational refinement and directional plus
length-based calibrations. Our theoretical proof connects PGrad to the spectral
analysis of Hessian in training neural networks. Experiments on DomainBed and
WILDS benchmarks demonstrate that our approach effectively enables robust DG
optimization and leads to smoothly decreased loss curves. Empirically, PGrad
achieves competitive results across seven datasets, demonstrating its efficacy
across both synthetic and real-world distributional shifts. Code is available
at https://github.com/QData/PGrad. | Zhe Wang, Jake Grigsby, Yanjun Qi | 2023-05-02T00:48:24Z | http://arxiv.org/abs/2305.01134v1 | # PGrad : Learning Principal Gradients For Domain Generalization
###### Abstract
Machine learning models fail to perform when facing out-of-distribution (OOD) domains, a challenging task known as domain generalization (DG). In this work, we develop a novel DG training strategy, we call PGrad, to learn a robust gradient direction, improving models' generalization ability on unseen domains. The proposed gradient aggregates the principal directions of a sampled roll-out optimization trajectory that measures the training dynamics across all training domains. PGrad's gradient design forces the DG training to ignore domain-dependent noise signals and updates all training domains with a robust direction covering main components of parameter dynamics. We further improve PGrad via bijection-based computational refinement and directional plus length-based calibrations. Our theoretical proof connects PGrad to the spectral analysis of Hessian in training neural networks. Experiments on DomainBed and WILDs benchmarks demonstrate that our approach effectively enables robust DG optimization and leads to smoothly decreased loss curves. Empirically, PGrad achieves competitive results across seven datasets, demonstrating its efficacy across both synthetic and real-world distributional shifts. Code is available at [https://github.com/QData/PGrad](https://github.com/QData/PGrad).
## 1 Introduction
Deep neural networks have shown remarkable generalization ability on test data following the same distribution as their training data. Yet, high-capacity models are incentivized to exploit any correlation in the training data that will lead to more accurate predictions. As a result, these models risk becoming overly reliant on "domain-specific" correlations that may harm model performance on test cases from out-of-distribution (OOD). A typical example is a camel-and-cows classification task (Beery et al., 2018; Shi et al., 2021), where camel pictures in training are almost always shown in a desert environment while cow pictures mostly have green grassland backgrounds. A typical machine learning model trained on this dataset will perform worse than random guessing on those test pictures with cows in deserts or camels in pastures. The network has learned to use the background texture as one deciding factor when we want it to learn to recognize animal shapes. Unfortunately, the model overfits to specific traps that are highly predictive of some training domains but fail on OOD target domains. Recent domain generalization (DG) research efforts deal with such a challenge. They are concerned with how to learn a machine learning model that can generalize to an unseen test distribution when given multiple different but related training domains. 1
Footnote 1: In the rest of this paper, we use the terms “domain” and “distribution” interchangeably.
Recent literature covers a wide spectrum of DG methods, including invariant representation learning, meta-learning, data augmentation, ensemble learning, and gradient manipulation (more details in Section 2.4). Despite the large body of recent DG literature, the authors of (Gulrajani and Lopez-Paz, 2021) showed that empirical risk minimization (ERM) provides a competitive baseline on many real-world DG benchmarks. ERM does not explicitly address distributional shifts during training. Instead, ERM calculates the gradient from each training domain and updates a model with the average gradient. However, one caveat of ERM is that its average-gradient-based model update will preserve domain-specific noise during optimization. This observation motivates the core design of our method.
We propose a novel training strategy that learns a robust gradient direction for DG optimization, and we call it PGrad. PGrad samples an optimization trajectory in high dimensional parameter space by updating the current model sequentially across training domains. It then constructs a local coordinate system to explain the parameter variations in the trajectory. Via singular value decomposition (SVD), we derive an aggregated vector that covers the main components of parameter dynamics and use it as a new gradient direction to update the target model. This novel vector - that we name the "principal gradient" - reduces domain-specific noise in the DG model update and prevents the model from overfitting to particular training domains. To decrease the computational complexity of SVD, we construct a bijection between the parameter space and a low-dimensional space through transpose mapping. Hence, the computational complexity of the PGrad relates to the number of sampled training domains and does not depend on the size of our model parameters.
This paper makes the following contributions: (1) PGrad places no explicit assumption on either the joint or the marginal distributions. (2) PGrad is model-agnostic and is scalable to various model architecture since its computational cost only relates to the number of training domains. (3) We theoretically show the connection between PGrad and Hessian approximation, and also prove that PGrad benefits the training efficiency via learning a gradient in a smaller subspace constructed from learning trajectory. (4) Our empirical results demonstrate the competitive performance of PGrad across seven datasets covering both synthetic and real-world distributional shifts.
## 2 Method
Domain generalization (Wang et al., 2021; Zhou et al., 2021) assumes no access to instances from future unseen domains. In domain generalization, we are given a set of training domains \(\mathcal{D}_{tr}=\{D_{i}\}_{i=1}^{n}\) and test domains \(\mathcal{T}_{te}=\{T_{j}\}_{j=1}^{m}\). Each domain \(D_{i}\) (or \(T_{j}\)) is associated with a joint distribution \(\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}^{D_{i}}\) (or \(\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}^{T_{j}}\)), where \(\mathcal{X}\) represents the input space and \(\mathcal{Y}\) is the output space. Moreover, each training domain \(D_{i}\) is characterized by a set of i.i.d samples \(\{\mathbf{x}_{k}^{i},\mathbf{y}_{k}^{i}\}\). For any two different domains sampled from either \(\mathcal{D}_{tr}\) or \(\mathcal{T}_{te}\), their joint distribution varies \(\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}^{D_{i}}\neq\mathcal{P}_{\mathcal{X }\times\mathcal{Y}}^{D_{j}}\), and most importantly, \(\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}^{D_{i}}\neq\mathcal{P}_{\mathcal{X }\times\mathcal{Y}}^{T_{j}}\).
We consider the prediction task from the input \(\mathbf{x}\in\mathcal{X}\) to the output \(\mathbf{y}\in\mathcal{Y}\). Provided with a model family whose parameter space is \(\Theta\subset\mathbb{R}^{d}\) and the loss function \(\mathcal{L}:\mathbf{\Theta}\times(\mathcal{X}\times\mathcal{Y})\to\mathbb{R}_{+}\), the goal is to find an optimal \(\Theta_{te}^{*}\) on test domains so that:
\[\Theta_{te}^{*}=\arg\min_{\Theta\in\mathbf{\Theta}}\mathbb{E}_{T_{j}\sim\mathcal{T}_{te}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{P}_{\mathcal{X}\times\mathcal{Y}}^{T_{j}}}\mathcal{L}[\Theta,(\mathbf{x},\mathbf{y})]. \tag{1}\]
In DG setup, any prior about \(\mathcal{T}_{te}\), such as inputs or outputs, are unavailable in the training phase.
Figure 1: Overview of our PGrad training strategy. With a current parameter \(\Theta^{t}\), we first obtain a rollout trajectory \(\Theta^{t}\to\Theta_{1}^{t}\to\Theta_{2}^{t}\to\Theta_{3}^{t}\) by sequentially optimizing across all training domains \(\mathcal{D}_{tr}=\{D_{i}\}_{i=1}^{3}\). Then PGrad updates \(\Theta^{t}\) by extracting the principal gradient direction \(\nabla_{p}\) of the trajectory. A target model’s generalization is evaluated on unseen (OOD) test domains \(T_{j}\).
Despite not considering domain discrepancies from training to testing, ERM is still a competitive method for domain generalization tasks (Gulrajani and Lopez-Paz, 2021). ERM naively groups data from all training domains \(\mathcal{D}_{tr}\) together and obtains its optimal parameter \(\Theta^{*}_{tr}\) via the following to approximate \(\Theta^{*}_{te}\):

\[\Theta^{*}_{tr}=\arg\min_{\Theta\in\mathbf{\Theta}}\mathbb{E}_{D_{i}\sim\mathcal{D}_{tr}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{P}^{D_{i}}_{\mathcal{X}\times\mathcal{Y}}}\mathcal{L}[\Theta,(\mathbf{x},\mathbf{y})]. \tag{2}\]
In the rest of the paper, we omit the subscript in \(\Theta_{tr}\) and use \(\Theta\) for simplicity (during DG training, only training domains \(\mathcal{D}_{tr}\) will be available for model learning).
When optimizing with ERM on DG across multiple training domains, the update of \(\Theta\) follows:
\[\Theta^{t+1}=\Theta^{t}-\frac{\gamma}{n}\sum_{i=1}^{n}\nabla_{\Theta^{t}} \mathcal{L}_{D_{i}}, \tag{3}\]
where \(\nabla_{\Theta^{t}}\mathcal{L}_{D_{i}}=\nabla\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{P}^{D_{i}}_{\mathcal{X}\times\mathcal{Y}}}\mathcal{L}[\Theta^{t},(\mathbf{x},\mathbf{y})]\) calculates the gradient of the loss on domain \(D_{i}\) with respect to the current parameter \(\Theta^{t}\) and \(\gamma\) is the learning rate.
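For reference, a minimal PyTorch-style sketch of the ERM update in Eq. (3) is given below; `model`, `domain_batches`, and `loss_fn` are placeholder names rather than the DomainBed implementation.

```python
import torch

def erm_step(model, domain_batches, loss_fn, lr=1e-3):
    """One ERM update: average the per-domain gradients, then take a single step (Eq. (3))."""
    summed = None
    for x, y in domain_batches:                 # one batch per training domain D_i
        model.zero_grad()
        loss_fn(model(x), y).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        summed = grads if summed is None else [a + b for a, b in zip(summed, grads)]
    with torch.no_grad():
        for p, g in zip(model.parameters(), summed):
            p -= lr * g / len(domain_batches)   # gamma * (1/n) * sum_i grad_i
```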
The gradient determines the learning path of a model. When using ERM in the DG setting, each model update uses an average gradient and may introduce and preserve domain-specific noise. For instance, one training domain may contain trapping signals such as cows always appearing in pastures and camels always appearing in deserts (as mentioned earlier). When looking across multiple training domains, however, we expect such domain-specific noise signals not to be the main components of parameter variation across all domains. This motivates us to design PGrad as follows.
### PGrad : Principal Gradient based Model Updates
We extend ERM with a robust gradient estimation that we call PGrad. We visualize an overview in Figure 1 to better explain how it works. Briefly speaking, given the current model parameter vector, we sample a trajectory of parameters by training sequentially on each training domain. Next, we build a local principal coordinate system based on parameters obtained from the sampled trajectory. The chosen gradient direction is then built as a linear combination of the orthogonal axes of the local principal coordinates. Our design also forces the learned gradient to filter out domain-specific noise and follows a direction that maximally benefits all training domains \(\mathcal{D}_{tr}\); we refer to this extracted direction as the principal gradient. In the following, we cover details of the trajectory sampling, local coordinate construction, direction and length calibration, plus the noise suppression for our principal gradient design.
**Trajectory Sampling.** Denote the current parameter vector as \(\Theta^{t}\). We first sample a trajectory \(\mathbf{S}\) through the parameter space \(\mathbf{\Theta}\) by sequentially optimizing the model on each training domain:
\[\Theta^{t}_{0}=\Theta^{t},\ \Theta^{t}_{i}=\Theta^{t}_{i-1}-\alpha\nabla_{ \Theta^{t}_{i-1}}\mathcal{L}_{D_{i}},\ i=\{1,\cdots,n\} \tag{4}\]
We refer to the process of choosing an order of training domain to optimize as _trajectory sampling_. Different ordering arrangements of training domains will generate different trajectories.
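A minimal sketch of this roll-out is shown below (plain SGD inner steps for clarity; the experiments in Section 3 default to Adam for the roll-out). The helper names are illustrative rather than the released code; the returned matrix stacks \(\Theta^{t}_{0},\dots,\Theta^{t}_{n}\) row-wise, and the model is restored to \(\Theta^{t}\) afterwards so that the principal gradient can be applied to it.

```python
import torch

def sample_trajectory(model, domain_batches, loss_fn, alpha=1e-3):
    """Roll out Eq. (4): one inner step per training domain, in the given (possibly shuffled) order."""
    start = torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone()
    trajectory = [start.clone()]                       # Theta_0^t = Theta^t is part of S
    for x, y in domain_batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= alpha * p.grad                    # Theta_i^t = Theta_{i-1}^t - alpha * grad
        trajectory.append(
            torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone())
    torch.nn.utils.vector_to_parameters(start, model.parameters())   # restore Theta^t
    return torch.stack(trajectory)                     # S, shape (n+1, d)
```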
**Principal Coordinate System Construction.** Now we have a sampled trajectory \(\mathbf{S}=\{\Theta^{t}_{i}\}_{i=0}^{n}\in\mathbb{R}^{(n+1)\times d}\) derived from \(\Theta^{t}\). Note: the inclusion of the starting location \(\Theta^{t}_{0}\) as part of the trajectory is necessary; see the proof in Appendix (A.3).
Then we construct a local principal coordinate system to explain the variations in \(\mathbf{S}\). We are looking for orthogonal and unit axes \(\mathbf{V}=[\mathbf{v}^{\intercal}_{z}]_{z=0}^{n}\in\mathbb{R}^{(n+1)\times d}\) to maximally capture the variations of the trajectory. Each \(\mathbf{v}_{z}\in\mathbb{R}^{d}\) is a unit vector of size \(d\), the same dimension as the parameters \(\Theta^{t}\).
\[\max_{\mathbf{v}_{z}}\mathbf{Variance}([\mathbf{Sv}_{z}]),\ s.t.\ \mathbf{V}\mathbf{V}^{T}=\mathbf{I}_{n+1}. \tag{5}\]
The above objective is the classic principal component analysis formulation and can be solved with singular value decomposition (a.k.a. SVD). Eq. (5) has the following closed-form solution (we show below how to reduce the cost of computing it so that it scales with \(n\) rather than \(d\)):
\[\lambda_{z},\ \mathbf{v}_{z}=\mathbf{SVD}_{z}(\frac{1}{n}\mathbf{\hat{S}}^{T}\mathbf{ \hat{S}}), \tag{6}\]
Here \(\lambda_{z},\mathbf{v}_{z}\) denote the \(z\)-th largest eigenvalue and its corresponding eigenvector. \(\mathbf{\hat{S}}\) denotes the centered trajectory obtained by removing the mean from \(\mathbf{S}\). In the above Eq. (6), the computational bottleneck
lies in the SVD, whose computational complexity comes at \(\mathcal{O}(d^{3})\) due to \(\mathbf{\hat{S}}^{T}\mathbf{\hat{S}}\in\mathbb{R}^{d\times d}\). \(d\) denotes the size of the parameter vector and is fairly large for most state-of-the-art (SOTA) deep learning architectures. This prohibits the computation of the eigenvalues and eigenvectors from Eq. (6) for SOTA deep learning models. Hence, we refine and construct a bijection as follows to lower the computational complexity (to \(\mathcal{O}((n+1)^{3})\)):
\[\mathbf{\hat{S}}\mathbf{\hat{S}}^{T}\mathbf{e}_{z}=\lambda_{z}\mathbf{e}_{z}\quad \Longrightarrow\quad\mathbf{\hat{S}}^{T}\mathbf{\hat{S}}\mathbf{\hat{S}}^{T}\mathbf{e}_{z}= \lambda_{z}\mathbf{\hat{S}}^{T}\mathbf{e}_{z}\quad\Longrightarrow\quad\mathbf{v}_{z}=\mathbf{ \hat{S}}^{T}\mathbf{e}_{z} \tag{7}\]
Eq. (7) indicates that if \(\lambda_{z},\mathbf{e}_{z}\) are the \(z\)-th largest eigenvalue and corresponding eigenvector of \(\mathbf{\hat{S}}\mathbf{\hat{S}}^{T}\), the \(z\)-th largest eigenvalue and corresponding eigenvector of \(\mathbf{\hat{S}}^{T}\mathbf{\hat{S}}\) are \(\lambda_{z},\mathbf{\hat{S}}^{T}\mathbf{e}_{z}\) (i.e., \(\mathbf{v}_{z}=\mathbf{\hat{S}}^{T}\mathbf{e}_{z}\)). This property introduces a bijection from eigenvectors of \(\mathbf{\hat{S}}\mathbf{\hat{S}}^{T}\in\mathbb{R}^{(n+1)\times(n+1)}\) to those of \(\mathbf{\hat{S}}^{T}\mathbf{\hat{S}}\in\mathbb{R}^{d\times d}\). Since \(n\) - the number of training domains - is much smaller than \(d\), calculating eigen-decomposition of \(\mathbf{\hat{S}}\mathbf{\hat{S}}^{T}\in\mathbb{R}^{(n+1)\times(n+1)}\) is therefore much cheaper.
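The bijection in Eq. (7) is easy to verify numerically; the toy check below (our own, with arbitrary sizes) confirms that eigenvectors of the small \((n+1)\times(n+1)\) matrix map to eigenvectors of the \(d\times d\) matrix through \(\mathbf{v}_{z}=\mathbf{\hat{S}}^{T}\mathbf{e}_{z}\).

```python
import numpy as np

n, d = 3, 1000
S_hat = np.random.randn(n + 1, d)
S_hat -= S_hat.mean(axis=0)                   # centered trajectory

lam, E = np.linalg.eigh(S_hat @ S_hat.T)      # cheap: (n+1) x (n+1)
V = S_hat.T @ E                               # candidate eigenvectors of S_hat^T S_hat

# (S_hat^T S_hat) v_z == lambda_z v_z for every column z
lhs = S_hat.T @ (S_hat @ V)
print(np.allclose(lhs, V * lam))              # True
```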
**Directional Calibration.** With the derived orthogonal axes \(\mathbf{V}=[\mathbf{v}_{z}^{\mathsf{T}}]_{z=0}^{n}\) from Eq. (7), now we construct a local principal coordinate system with each axis aligning with one eigenvector \(\mathbf{v}_{z}\). These principal coordinate axes \(\mathbf{V}\) are ordered based on the magnitude of the eigenvalues. This means that \(\mathbf{v}_{i}\) explains more variations of the sampled trajectory \(\mathbf{S}\) than \(\mathbf{v}_{j}\) if \(i<j\), and they are all unit vectors. In addition, these vectors are unoriented, which means either positive or negative multiple of an eigenvector still falls into the eigenvector set.
Now our main goal is to get a robust gradient direction by aggregating information from \(\mathbf{V}\). First we calibrate the directions of each eigenvectors so that they point to the directions that can improve the DG prediction accuracy. Learning an ideal direction is impossible without a reference. The choice of the reference is flexible, as long as it is pointing to a direction that climbs up the loss surface. We want the reference to guide the principal gradient in the right direction for gradient descent based algorithms. For simplicity, we use the difference between the starting point \(\Theta_{0}^{t}\) and the end point \(\Theta_{n}^{t}\) of the trajectory \(\mathbf{S}\) as our reference \(\nabla_{r}=\Theta_{0}^{t}-\Theta_{n}^{t}\). So for each coordinate axis, we revise its direction so that the resulting vector \(\mathbf{w}_{z}\) is positively related to the reference \(\nabla_{r}\) in terms of the inner product:
\[\mathbf{w}_{z}=r_{z}\mathbf{v}_{z},\;\;r_{z}=\begin{cases}1,&\text{if }\;\langle\mathbf{v}_{z}, \nabla_{r}\rangle\geq 0,\\ -1,&\text{otherwise}.\end{cases} \tag{8}\]
**Constructing Principal Gradient.** The relative importance of each \(\mathbf{w}_{z}\) is conveyed in the corresponding eigenvalue \(\lambda_{z}\). Larger \(\lambda_{z}\) implies higher variance when projecting the trajectory \(\mathbf{S}\) along \(\mathbf{w}_{z}\) direction. We weight each axis with their eigenvalues, and aggregate them together into a weighted sum. This gives us the principal gradient vector being calculated as follows:
\[\nabla_{p}=\sum_{z=0}^{n}\frac{\lambda_{z}}{||\mathbf{\lambda}||_{2}} \mathbf{w}_{z},\;\;\mathbf{\lambda}=[\lambda_{0},\lambda_{1},\cdots,\lambda_{n}] \tag{9}\]
There exist other possible aggregations besides Eq. (9). For instance, another choice of weight could be \(\lambda_{z}/||\mathbf{\lambda}||_{1}\) or simply \(\lambda_{z}\), since the eigenvalues of a positive semi-definite matrix are non-negative. Gradient normalization has been widely recommended for improving training stability (You et al., 2017; Yu et al., 2017). Our design in Eq. (9) automatically achieves \(L_{2}\) normalization, because:
\[||\nabla_{p}||_{2}^{2}=\sum_{z=0}^{n}\frac{\lambda_{z}^{2}}{||\mathbf{\lambda}||_{2 }^{2}}||\mathbf{w}_{z}||_{2}^{2}=1, \tag{10}\]
**Length Calibration.** As training updates continue, a fixed length gradient operator may become too rigid, causing fluctuations in the loss. We, therefore, propose to calibrate the norm of \(\nabla_{p}\) with a reference, for achieving adaptive length tuning. Specifically, we propose to multiply the aggregated gradient from Eq. (9) with the \(L_{2}\) norm of \(\nabla_{r}\):
\[\nabla_{p}=\sum_{z=0}^{n}\frac{\lambda_{z}||\nabla_{r}||_{2}}{||\mathbf{\lambda}|| _{2}}\mathbf{w}_{z}, \tag{11}\]
With this length calibration via \(||\nabla_{r}||_{2}\), the norm of the proposed gradient is constrained by the multiplier, and is automatically tuned during the training process.
**Noise Suppression.** Most \(\mathbf{w}_{z}\) axes correspond to small eigenvalues and may relate to just domain-specific noise signals. They may help the accuracy of a specific training domain \(D_{i}\), but mostly hurt the overall accuracy on \(\mathcal{D}_{tr}\). We therefore define the principal gradient as follows and show how to use it to solve DG model training via gradient descent optimization (where \(k\) is a hyperparameter):
\[\nabla_{p}=\sum_{z=0}^{k}\frac{\lambda_{z}\,\|\nabla_{r}\|_{2}}{\|\mathbf{\lambda}_{:k}\|_{2}}\mathbf{w}_{z},\quad\Theta^{t+1}=\Theta^{t}-\gamma\nabla_{p}. \tag{12}\]
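Putting Eqs. (6)-(12) together, a compact NumPy sketch of the principal-gradient construction is given below. It operates on the \((n+1)\times d\) trajectory matrix \(\mathbf{S}\) from the roll-out, uses the small-matrix eigendecomposition of Eq. (7), and keeps only the top \(k+1\) calibrated axes; it is an illustrative re-derivation from the equations, not the released PGrad code.

```python
import numpy as np

def principal_gradient(trajectory: np.ndarray, k: int) -> np.ndarray:
    """Return nabla_p for one PGrad update, Theta^{t+1} = Theta^t - gamma * nabla_p."""
    S = np.asarray(trajectory, dtype=float)            # shape (n+1, d)
    S_hat = S - S.mean(axis=0, keepdims=True)          # centered trajectory
    grad_ref = S[0] - S[-1]                            # reference direction nabla_r

    # Eigendecomposition of the small matrix, mapped back to parameter space (Eq. (7)).
    lam, E = np.linalg.eigh(S_hat @ S_hat.T / (S.shape[0] - 1))
    lam = np.clip(lam, 0.0, None)                      # guard against tiny negative round-off
    order = np.argsort(lam)[::-1]                      # largest eigenvalue first
    lam, E = lam[order], E[:, order]
    V = S_hat.T @ E
    V /= np.linalg.norm(V, axis=0, keepdims=True) + 1e-12   # unit axes v_z

    # Noise suppression: keep axes z = 0..k; directional calibration (Eq. (8)).
    lam_k, V_k = lam[: k + 1], V[:, : k + 1]
    signs = np.sign(V_k.T @ grad_ref)
    signs[signs == 0] = 1.0
    W = V_k * signs                                    # oriented axes w_z

    # Eigenvalue weighting with length calibration (Eqs. (11)-(12)).
    weights = lam_k / (np.linalg.norm(lam_k) + 1e-12)
    return (W @ weights) * np.linalg.norm(grad_ref)
```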
### Theoretical Analysis
In Appendix (A.5.1), we prove that \(\frac{1}{n}\mathbf{\hat{S}}^{T}\mathbf{\hat{S}}\) in Eq. (6) provides us with the mean of all training domains' domain-specific Fisher information matrix (FIM). Since FIM is the negative of Hessian under mild conditions, PGrad essentially performs spectrum analysis on the approximated Hessian matrix. Moreover, in Appendix (A.5.2), we show that PGrad improves the training efficiency of neural networks by recovering a subspace from the original over-parameterized space \(\mathbf{\Theta}\). This subspace is built from the top eigenvectors of the approximated Hessian. We visualize the evolution of the eigenvalue distributions in Figure 8.
Our theoretical analysis connects to the machine learning literature that performs spectrum analysis of Hessian matrix (Gur-Ari et al., 2018) and connects its top subspace spanned by the eigenvectors to training efficiency and generalization in neural networks (Wei and Schwab, 2019; Ghorbani et al., 2019; Pascanu et al., 2014). For example, (Hochreiter and Schmidhuber, 1997) shows that small eigenvalues in the Hessian spectrum are indicators of flat directions. Another work (Sagun et al., 2017) empirically demonstrated that the spectrum of the Hessian contains both a bulk component with many small eigenvalues and a few top components of much more significant positive eigenvalues. Later, (Gur-Ari et al., 2018) pointed out that the gradient of neural networks quickly converges to the top subspace of the Hessian.
### Variations of PGrad
There exist many ways to construct a sampled trajectory, creating multiple variations of PGrad.
* PGrad-F : The vanilla trajectory sampling method will sample a trajectory of length \(n+1\) by sequentially visiting each \(D_{i}\) in a fixed order. See appendix for the results of the rigid variation.
* PGrad : We can randomly shuffle the domain order during training and then sample a trajectory. This random-order-based strategy is used as the default version of PGrad.
* PGrad-B : We can split each training batch into \(B\) smaller batches and construct a long sampled trajectory that is with length \(n*B+1\).
* PGrad-BMix : Our method is model and data agnostic. Therefore it is complementary and can combine with many other DG strategies. As one example, we combine the random order based PGrad-B and MixUp (Zhang et al., 2017) into PGrad-BMix in our empirical analysis.
In PGrad and PGrad-F, the principal gradient's trajectory covers all training domains \(\mathcal{D}_{tr}\) exactly once (per domain). There are two possible limitations. (1) If the number of training domains \(n\) is tiny, a length-\((n+1)\) trajectory will not provide enough information to achieve robustness. In the extreme case of \(n=1\), we will only be able to get one axis \(\mathbf{w}_{z}\), that goes back to ERM. (2) The current design can only eliminate discrepancies between different domains. Notable intra-domain variations also exist because empirically approximating the expected loss may include noise due to data sampling, plus batch-based optimization may induce additional bias. Based on this intuition, we propose a new design for sampling a trajectory by evenly splitting \(\{\mathbf{x}_{i}^{k},\mathbf{y}_{k}^{i}\}\) from a training domain \(D_{i}\) into \(B\) small batches. This new strategy allows us to obtain \(nB\) pseudo training domains. Such a design brings in two benefits: (1) We can sample a longer trajectory \(\mathbf{S}\), as the length changes from \(n\) to \(nB\). (2) Our design splits each domain's data into \(B\) batches and treats each batch as if they come from different training domains. By learning the principal gradient from these \(nB\) pseudo domains, we also address the intra-domain noise issue. We name this new design PGrad-B. Appendix (A.1) includes a figure comparing vanilla trajectory sampling with this extended trajectory sampling.
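A sketch of the pseudo-domain construction used by PGrad-B is given below; each per-domain batch is split into \(B\) chunks that are then treated as separate pseudo domains during the roll-out (names are illustrative).

```python
def make_pseudo_domains(domain_batches, B):
    """Split each training domain's batch (x, y) into B chunks -> n*B pseudo domains."""
    pseudo = []
    for x, y in domain_batches:
        pseudo.extend(zip(x.chunk(B), y.chunk(B)))
    return pseudo

# Usage: a length n*B + 1 trajectory for PGrad-B (B = 3 in most of our experiments)
# S = sample_trajectory(model, make_pseudo_domains(domain_batches, B=3), loss_fn)
```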
### Connecting to Related Works
We can categorize existing DG methods into four broad groups.
**Invariant element learning.** Learning invariant mechanisms shared across training domains provides a promising path toward DG. Recent literature has equipped various deep learning components - especially representation modules - with the invariant property to achieve DG (Li et al., 2018;e; Muandet et al., 2013). The central idea is to minimize the distance or maximize the similarity between representation distributions \(P(f(\mathcal{X})|D)\) across training domains so that prediction is based on statistically indistinguishable representations. Adversarial methods (Li et al., 2018) and moment matching (Peng et al., 2019; Zellinger et al., 2017) are two promising approaches for distributional alignment. A recent line of work explores the connection between invariance and causality. IRM (Arjovsky et al., 2019) learns an invariant linear classifier that is simultaneously optimal for all training domains. Under the linear case and some constraints, the invariance of the classifier induces causality. Ahuja et al. further extend IRM by posing it as finding the Nash equilibrium (Ahuja et al., 2020) and adding information bottleneck constraints to seek theoretical explanations (Ahuja et al., 2021). However, later works (Kamath et al., 2021) show that even when capturing the correct invariances, IRM still tends to learn a suboptimal predictor. Compared to this stream of works, our method places no assumption on either the marginal or joint distribution. Instead, the PGrad explores the promising gradient direction and is model and data-agnostic.
**Optimization methods.** One line of optimization-based DG works is those related to the Group Distributionally robust optimization (a.k.a DRO) (Sagawa et al., 2019). Group DRO aims to tackle domain generalization by minimizing the worst-case training loss when considering all training distributions (rather than the average loss). The second set of optimization DG methods is optimization-based meta-learning. Optimization-based meta-learning uses bilevel optimization for DG by achieving properties like global inter-class alignment (Dou et al., 2019) or local intra-class distinguishability (Li et al., 2018). One recent work (Li et al., 2019) synthesizes virtual training and testing domains to imitate the episodic training for few-shot learning.
**Gradient manipulation.** Gradient directions drive the updates of neural networks throughout training and are vital elements of generalization. In DG, the main goal is to learn a gradient direction that benefits all training domains (plus unseen domains). Gradient surgery (Mansilla et al., 2021) proposes to use the sign function as a signal to measure the element-wise gradient alignment. Similarly, the authors of (Chattopadhyay et al., 2020) presented And-mask, to learn a binary gradient mask to zero out those gradient components that have inconsistent signs across training domains. Sandmask (Shahtalebi et al., 2021) added a \(\tanh\) function into mask generation to measure the gradient consistency. They extend And-mask by promoting gradient agreement.
Fish (Shi et al., 2021) and Fishr (Rame et al., 2021) are two recent DG works motivated by gradient matching. They require the parallel calculation of the domain gradient from every training domain w.r.t. a current parameter vector. Fish maximizes the inner product of domain-level gradients; Fishr uses the variance of the gradient covariance as a regularizer to align the per-domain Hessian matrices. Our method PGrad differs from gradient matching by learning a robust gradient direction. Besides, our method efficiently approximates the Hessian with the training domains' Fisher information matrix. Appendix (A.4) includes a detailed analysis comparing parallel versus sequential domain-level training. Furthermore, we adapt PGrad to parallel training, and compare it against PGrad with sequential training and ERM to justify our analysis; see visualizations in Figure 6. We then show that gradient alignment is not necessarily a sufficient indicator of generalization ability in Figure 7.
**Others.** Besides the categories above, there exist other strategies recently adopted to tackle domain generalization. Data augmentation (Xu et al., 2021; Zhang et al., 2021; 2017; Volpi et al., 2018) generates new training samples or representations from training domains to prevent overfitting, and can endow a target model with desirable properties such as linearity via MixUp (Zhang et al., 2017) or object focus (Xu et al., 2021). Other strategies, like contrastive learning (Kim et al., 2021), representation disentangling (Piratla et al., 2020), and semi-supervised learning (Li et al., 2021), have also been developed for the DG challenge.
## 3 Experiments
We conduct empirical experiments to answer the following questions: **Q1.** Does PGrad successfully handle both synthetic and real-life distributional shifts? **Q2.** Can PGrad handle various architectures (ResNet and DenseNet), data types (scene and satellite images), and tasks (classification and regression)? **Q3.** Compared to existing baselines, does PGrad enable smoothly decreasing loss curves and generate smoother parameter trajectories? **Q4.** Can PGrad act as a practical complementary approach to combine with other DG strategies?2 **Q5.** How do bottom eigenvectors in the roll-out trajectories affect the model's training dynamics and generalization ability?
Footnote 2: Note: we leave hyperparameter tuning details and some ablation analysis results in Appendix (A.7 to A.9).
### DomainBed Benchmark
**Setup and baselines.** The DomainBed benchmark (Gulrajani and Lopez-Paz, 2021) is a popular suite designed for rigorous comparisons of domain generalization methods. DomainBed datasets focus on distribution shifts induced by synthetic transformations; we conduct extensive experiments on this benchmark to compare with SOTA methods. The testbed of domain generalization implements consistent experimental protocols for various datasets, and we use five datasets from DomainBed (excluding two MNIST-related datasets) in our experiments. See data details in Table 1.
DomainBed offers a diverse set of algorithms for comparison. Following the categories we summarized in Section 2.4, we compare with invariant element learning works: IRM (Arjovsky et al., 2019), MMD (Li et al., 2018), DANN (Ganin et al., 2016), and CORAL (Sun and Saenko, 2016). Among optimization methods, we use GroupDRO (Sagawa et al., 2019) and MLDG (Li et al., 2018). The most closely related works are those based on gradient manipulation, and we compare with Fish (Shi et al., 2021) and Fishr (Rame et al., 2021). Of the representation augmentation methods, we pick two popular works: MixUp (Zhang et al., 2017) and ARM (Zhang et al., 2021). DomainNet's additional model parameters in the final classification layer lead to memory constraints on our hardware at the default batch size of \(32\). Therefore, we use lower batch size \(24\). For our method variation PGrad-B, we set \(B=3\) for all datasets except using \(B=2\) for DomainNet. We default to Adam (Kingma and Ba, 2017) as the optimizer to roll-out a trajectory. All experiments use the DomainBed default architecture, where we finetune a pretrained ResNet50 (He et al., 2016).
**Results analysis.** We aggregate results on each dataset by taking the average prediction accuracy on all domains, and the results are summarized in Table 2. The per-domain prediction accuracy on each dataset is available in Appendix (A.7).
We summarize our observations: 1) ERM remains a strong baseline among all methods, and gradient alignment methods provide promising results compared to other categories. 2) PGrad ranks first out of 11 methods based on average accuracy. Concretely, PGrad consistently outperforms ERM on all datasets and gains a \(1.8\%\) improvement on VLCS, \(2.8\%\) on OfficeHome, \(2.9\%\) on TerraIncognita, and no improvement on DomainNet. 3) Our variation PGrad-B outperforms PGrad on almost all datasets except VLCS (where it is similar to PGrad). This observation showcases that intra-domain noise suppression can benefit OOD generalization. A longer trajectory enables PGrad to learn more robust principal directions. 4) The combined variation PGrad-BMix outperforms MixUp across all datasets. On average (see last column of Table 2), PGrad-BMix is the best
| Dataset | # of Images | Domains | # of Classes |
|---|---|---|---|
| PACS (Li et al., 2017) | 9,991 | Art painting, Cartoon, Sketch, Photo | 7 |
| VLCS (Zhang et al., 2013) | 10,729 | PASCAL VOC 2007, LabelMe, Caltech, Sun | 5 |
| OfficeHome (Venkateswara et al., 2017) | 15,588 | Art, Clipart, Product, Real-World | 65 |
| TerraIncognita (Beery et al., 2018) | 24,788 | Location 100, 38, 43, 46 | 10 |
| DomainNet (Peng et al., 2019) | 586,575 | Clipart, Infograph, Painting, Quickdraw, Real, Sketch | 345 |

Table 1: A summary of the DomainBed datasets, metrics, and architectures we used.
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c} \hline Categories & Algorithms & VLCS & PACS & OfficeHome & Terralnc & DomainNet & **Avg** \\ \hline Baseline & ERM & \(77.5\pm 0.4\) & \(85.5\pm 0.2\) & \(66.5\pm 0.3\) & \(46.1\pm 1.8\) & \(40.9\pm 0.1\) & 63.3 \\ \hline \multirow{4}{*}{Invariant} & IRM & \(78.5\pm 0.5\) & \(83.5\pm 0.8\) & \(64.3\pm 2.2\) & \(47.6\pm 0.8\) & \(33.9\pm 2.8\) & 61.6 \(\_{-27}\) \\ & MMD & \(77.5\pm 0.9\) & \(84.6\pm 0.5\) & \(66.3\pm 0.1\) & \(42.2\pm 1.6\) & \(23.4\pm 9.5\) & 58.8 \(\_{-48}\) \\ & DANN & \(78.6\pm 0.4\) & \(83.6\pm 0.4\) & \(65.9\pm 0.6\) & \(46.7\pm 0.5\) & \(38.3\pm 0.1\) & 62.6 \(\_{-27}\) \\ & CORAL & \(78.8\pm 0.6\) & \(86.2\pm 0.3\) & \(68.7\pm 0.3\) & \(47.6\pm 1.0\) & \(41.5\pm 0.1\) & 64.5 \(\_{-12}\) \\ \hline \multirow{4}{*}{Optimization} & GroupDRO & \(76.7\pm 0.6\) & \(84.4\pm 0.8\) & \(66.0\pm 0.7\) & \(43.2\pm 1.1\) & \(33.3\pm 0.2\) & 60.7 \(\_{-26}\) \\ & MLDG & \(77.2\pm 0.4\) & \(84.9\pm 1.0\) & \(66.8\pm 0.6\) & \(47.7\pm 0.9\) & \(41.2\pm 0.1\) & 63.6 \(\_{-0.3}\) \\ \hline \multirow{4}{*}{Augmentation} & MixUp & \(77.4\pm 0.6\) & \(84.6\pm 0.6\) & \(68.1\pm 0.3\) & \(47.9\pm 0.8\) & \(39.2\pm 0.1\) & 63.4 \(\_{-0.1}\) \\ & ARM & \(77.6\pm 0.3\) & \(85.1\pm 0.4\) & \(64.8\pm 0.3\) & \(45.5\pm 0.3\) & \(35.5\pm 0.2\) & 61.7 \(\_{-48}\) \\ \hline \multirow{4}{*}{Gradient} & Fish & \(77.8\pm 0.3\) & \(85.5\pm 0.3\) & \(68.6\pm 0.4\) & \(45.1\pm 1.3\) & \(\mathbf{42.7}\pm 0.2\) & 63.9 \(\_{-0.6}\) \\ & Fishr & \(77.8\pm 0.1\) & \(85.5\pm 0.4\) & \(67.8\pm 0.1\) & \(47.4\pm 1.6\) & \(41.7\pm 0.0\) & 64.0 \(\_{-0.2}\) \\ \cline{1-1} \cline{5-5} & PGrad & \(\mathbf{79.3}\pm 0.3\pm 0.3\) & \(85.1\pm 0.3\) & \(69.3\pm 0.1\) & \(49.0\pm 0.3\) & \(\mathbf{41.0}\pm 0.1\) & 64.7 \(\_{-14}\) \\ \cline{1-1} & PGrad-B & \(\mathbf{78.9}\pm 0.3\) & \(\mathbf{44.8}\) & \(\mathbf{87.0}\pm 0.1\) & \(\mathbf{49.6}\pm 0.1\) & \(49.4\pm 0.8\) & 41.4 \(\_{-0.1}\) & 65.4 \(\_{-29}\) \\ \cline{1-1} & PGrad-BMix & \(\mathbf{78.9}\pm 0.2\) & \(\mathbf{44.4}\) & \(\mathbf{86.2}\pm 0.4\) & \(\mathbf{49.7}\) & \(\mathbf{69.8}\pm 0.1\) & \(\mathbf{50.7}\pm 0.6\) & 42.6 \(\_{-0.2}\) & 65.7 \(\_{-24}\) \\ \hline \end{tabular}
\end{table}
Table 2: Test accuracy (%) on five datasets from the DomainBed benchmark. We group \(20\%\) data from each training domain to construct validation set for model selection.
performing strategy. This observation indicates our method can be effectively combined with other DG categories to improve generalization further.
**Tuning \(k\) for noise suppression.** As we pointed out in Section 2.1, we achieve domain-specific noise suppression by only aggregating coordinate axes \(\{\mathbf{w}_{z}\}_{z=0}^{k}\) when learning the principal gradient \(\nabla_{p}\). To investigate the effect of \(k\), we run experiments with different values of \(k\) for both PGrad and PGrad-B. The analysis results on PACS dataset are collected in Table 3. Note that for default version of PGrad, the maximum number of training domains is \(n=3\), therefore, the length of the PGrad trajectory is upper bounded by \(4\).
Table 3 shows that the generalization accuracy initially improves and then drops as \(k\) increases. If we use \(k=n+1\) (as \(k=4\) for PGrad), domain-specific noise is included and aggregated from principal coordinate \(\mathbf{W}\) and the performance decreases compared with \(k=3\). The same pattern can also be observed in PGrad-B (note: the length of the trajectory is upper bounded by \(nB+1=10\)).
**Training loss curve analysis.** Learning and explaining a model update's behavior is an important step toward understanding its generalization ability. To answer **Q3**, we look for insights by visualizing domain-wise training losses as updates proceed. To prevent randomness, we plot average results together with the standard deviation over \(9\) runs. The results for ERM and PGrad are visualized in Figure 2. Compared to ERM, our method PGrad has smoother decreasing losses as training proceeds. Besides, all training domains benefit from each update in PGrad. On the other hand, ERM's decreasing loss on one domain can come with increases on other domains, especially late in training. We hypothesize this is because domain-specific noise takes a more dominant role as training progresses in ERM. PGrad can effectively suppress domain-specific noise and optimize all training losses in unison without significant conflict. Moreover, the loss variances across training domains are stably minimized, achieving a similar effect as V-REx (Krueger et al., 2021) without an explicit variance penalty. In Appendix (A.2), we visualize four training trajectories trained with PGrad and ERM. ERM trajectories proceed over-optimistically at the beginning and turn sharply in late training. PGrad moves cautiously for every step and consistently towards one direction.
| Method | k | P | A | C | S | Avg |
|---|---|---|---|---|---|---|
| PGrad | \(k=0\) | 98.0±0.2 | 87.3±0.2 | 76.8±0.4 | 73.4±1.3 | 83.9 |
| PGrad | \(k=2\) | 97.8±0.0 | 87.5±0.3 | 78.2±0.8 | 74.0±1.5 | 84.4 |
| PGrad | \(k=3\) | 97.8±0.0 | 87.8±0.4 | 78.4±0.6 | 77.2±1.1 | 85.3 |
| PGrad | \(k=4\) | 97.4±0.1 | 87.6±0.3 | 79.1±1.0 | 76.3±1.2 | 85.1 |
| PGrad-B | \(k=0\) | 97.5±0.1 | 89.1±0.8 | 80.3±0.6 | 77.5±0.4 | 86.1 |
| PGrad-B | \(k=2\) | 97.7±0.2 | 88.5±1.0 | 79.9±1.1 | 79.2±0.7 | 86.4 |
| PGrad-B | \(k=4\) | 98.0±0.2 | **89.9**±0.2 | 80.0±0.6 | **80.1**±0.9 | **87.0** |
| PGrad-B | \(k=7\) | 97.6±0.3 | 88.2±0.8 | **81.1**±1.3 | 79.0±1.5 | 86.5 |

Table 3: Analysis of the effect of varying \(k\). The experiments are performed on the PACS dataset. We highlight the **first** and **second** best results.
Figure 2: Visualizing domain-wise training losses on VLCS. Curves are the average over 9 runs, surrounded by \(\pm\sigma\) the standard deviation. For comparison, the loss curves start from 1,000 epochs.
### WILDS Benchmark |
2307.10640 | A bound on the free energy of tensionless membranes | Using the proof of Willmore's conjecture by Marques and Neves, we conjecture
that the free energy of tensionless fluid membranes of arbitrary genus has an
upper bound. This implies that the average genus of such a membrane, in
equilibrium, is finite, regardless of external constraints. We propose that the
Gaussian rigidity may be determined by measuring the relative frequencies of
large-genus configurations at low temperature. | Francesco Serafin, Mark J. Bowick | 2023-07-20T07:10:19Z | http://arxiv.org/abs/2307.10640v2 | # A bound on the free energy of tensionless membranes
###### Abstract
Using the proof of Willmore's conjecture [32] by Marques and Neves [25], we conjecture that the free energy of tensionless fluid membranes of arbitrary genus has an upper bound. This implies that the average genus of such a membrane, in equilibrium, is finite, regardless of external constraints. We propose that the Gaussian rigidity may be determined by measuring the relative frequencies of large-genus configurations at low temperature.
## I Coarse-grained energy of the membrane
Fluid membranes are found in a wide variety of physical settings. Spherical shapes (genus 0) and higher genus surfaces (see e.g. [27] for genus 1, [8] for genus 2 and [9; 18; 24; 26] for higher genus) have been observed in experiments with unpolymerized phospholipid membranes, as well as in numerical simulations [16].
We treat here the ensemble of all membrane shapes at a scale \(L\gg a\) much larger than the molecular size \(a\), justifying the use of a standard continuum model. The membrane shape will be modeled as a 2-dimensional surface of genus \(g\) embedded in \(\mathbb{R}^{3}\), \(\mathbf{X}_{g}(\boldsymbol{\xi})\), with internal coordinates \(\boldsymbol{\xi}=(\xi^{1},\xi^{2})\). The subscript \(g\) labels the number of handles (genus) of the surface \(\Sigma\).
\[\mathcal{H}[\mathbf{X}_{g}]=\int_{\Sigma_{g}}\Big{[}\gamma+\frac{\kappa}{2} \vec{H}^{2}+\bar{\kappa}K_{\mathrm{G}}\Big{]}\;\mathrm{d}S\quad, \tag{1}\]
where \(\mathrm{d}S=\sqrt{\det g_{ab}}\,\mathrm{d}^{2}\xi\) and \(g_{ab}(\boldsymbol{\xi})=\partial_{a}\mathbf{X}(\boldsymbol{\xi})\cdot \partial_{b}\mathbf{X}(\boldsymbol{\xi})\) is the induced metric tensor. \(\vec{H}\) and \(K_{\mathrm{G}}\) are the mean and Gaussian curvatures, respectively:
\[\vec{H}^{2}(\boldsymbol{\xi}) =\frac{1}{4}\left(\frac{1}{R_{1}(\boldsymbol{\xi})}+\frac{1}{R_{ 2}(\boldsymbol{\xi})}\right)^{2}\quad, \tag{2}\] \[K_{\mathrm{G}}(\boldsymbol{\xi}) =\frac{1}{R_{1}(\boldsymbol{\xi})R_{2}(\boldsymbol{\xi})}\quad. \tag{3}\]
Here \(R_{1,2}(\boldsymbol{\xi})\) are the local principal curvature radii of the surface [4]. We restrict ourselves to the symmetric case for which the spontaneous curvature vanishes.
The first term in (1) is an area term controlled by the surface tension \(\gamma\). For a membrane coupled to a reservoir of constituents the surface tension will vanish in equilibrium to minimize the energy [2; 3]. In the following we will work in this ensemble and thus set \(\gamma=0\). The second term in (1) is the bending energy, measuring the cost of shape change, and is controlled by a bending rigidity \(\kappa\), with dimensions of energy.
\[\mathcal{W}[\mathbf{X}_{g}]\equiv\int_{\Sigma_{g}}\vec{H}^{2}\,\mathrm{d}S \tag{4}\]
is known as the Willmore functional. It was first proposed by Poisson and Germain [10] as an energy for elastic shells. The stationary points of \(\mathcal{W}\) are called Willmore surfaces. Remarkably, the Willmore functional is invariant under Mobius transformations of \(\mathbb{R}^{3}\)[5; 23; 33] and so the degenerate minima of \(\mathcal{W}\) are related by conformal transformations of \(\mathbb{R}^{3}\).
The term \(\bar{\kappa}K_{\mathrm{G}}\) in (1) is the energy cost of Gaussian curvature fluctuations, where \(\bar{\kappa}\) is known as the Gaussian rigidity. By Gauss-Bonnet it is a topological invariant for \(\Sigma_{g}\) a closed surface
\[\int_{\Sigma_{g}}K_{\mathrm{G}}\,\mathrm{d}S=2\pi\chi(\Sigma_{g})\quad,\quad \chi=2(1-g)\,. \tag{5}\]
The last two terms of (1) together are usually known as the Helfrich-Canham energy of fluid membranes [1; 12]:
\[\mathcal{H}_{\mathrm{HC}}\equiv\frac{\kappa}{2}\mathcal{W}[\mathbf{X}_{g}]+4 \pi\bar{\kappa}(1-g)\quad. \tag{6}\]
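For orientation, anticipating the Willmore values quoted in section III below (\(\mathcal{W}=4\pi\) for the round sphere and \(\mathcal{W}=2\pi^{2}\) for the Clifford torus), the energy (6) of the two simplest closed shapes is
\[\mathcal{H}_{\rm HC}[S^{2}]=2\pi\kappa+4\pi\bar{\kappa}\,,\qquad\mathcal{H}_{\rm HC}[T^{2}_{\rm Clifford}]=\pi^{2}\kappa\,,\]
so that attaching the first handle costs an energy \((\pi^{2}-2\pi)\kappa-4\pi\bar{\kappa}\).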
For \(\bar{\kappa}>0\) it is energetically favorable to form an arbitrarily large number of handles (\(g\to\infty\)) and (6) would be unbounded below. In a physical system steric effects and other constraints will, however, limit the formation of very high genus surfaces [28]. In essence the effective \(\bar{\kappa}\) is likely to be negative.
## II Statistical ensemble of tensionless fluid membranes
We consider the statistical ensemble of all possible configurations of the surface \(\mathbf{X}_{g}\) at equilibrium at temperature \(T\) and zero surface tension [2]. The canonical partition function \(Z\) is
\[Z=\int_{\mathrm{all\;configs.}}\mathscr{D}\mathbf{X}\,e^{-\beta\mathcal{H}_{ \mathrm{HC}}[\mathbf{X}]}\quad, \tag{7}\]
where the sum is over connected as well as disconnected surfaces of arbitrary topology and \(\beta=1/(k_{\rm B}T)\) with \(k_{\rm B}\) the Boltzmann constant. We will work with the Helmholtz free energy \(F=-k_{\rm B}T\log Z\), by restricting the sum in (7) to _connected_ configurations (two disjoint spheres, e.g., would not be counted). Then the sum over closed, compact, orientable embedded surfaces can be organized into an integration \(\mathscr{D}{\bf X}_{g}\) over fluctuations at fixed genus \(g\) and a sum over all genera
\[F=\sum_{g=0}^{\infty}\int_{\rm conn.}\mathscr{D}{\bf X}_{g}e^{-\beta{\cal H}_{ \rm HC}[{\bf X}_{g}]}\quad. \tag{8}\]
The first term in the sum counts all topological spheres (\(g=0\)), the second all tori with one handle (\(g=1\)), and so on. Using (6) in (8) and factorizing those terms that do not depend on the geometry, we can write
\[F=e^{-4\pi\beta\bar{\kappa}}\sum_{g=0}^{\infty}e^{4\pi\beta\bar{\kappa}g}\int _{\rm conn.}\mathscr{D}{\bf X}_{g}\,e^{-\frac{\beta\kappa}{2}{\cal W}[{\bf X}_ {g}]}\quad. \tag{9}\]
We interpret \({\cal H}_{\rm HC}[{\bf X}_{g}]\) as the Euclidean action of the field \({\bf X}\) and \(F\) as the generating function of connected diagrams for the \(n-\)point correlators \(\langle X^{\mu}(\mathbf{\xi}_{1})...X^{\nu}(\mathbf{\xi}_{n})\rangle\) where \(\mathbf{\xi}\) are the internal coordinates of the surface \({\bf X}\). Then each configuration contained in the expansion (9) is analogous to a Feynman diagram of a closed-string amplitude whose worldsheet is a closed surface of genus \(g\) weighted by the factor \((\exp\{2\pi\beta\bar{\kappa}\})^{2g}\) (see Fig. 1, \(Top\)).
Starting from a configuration of genus \(g\), the subsequent term in the sum is obtained by attaching a handle. Each handle contributes with a factor \(g_{\rm st}^{2}\), where \(g_{\rm st}\equiv(\exp\{2\pi\beta\bar{\kappa}\})\) is analogous to the string coupling constant in closed string theory. The power of 2 appears because every handle is a cylinder anchored to the surface at _two_ excised disks (see Fig. 1,_Top_). The probability of forming a handle is controlled by both the temperature \(\beta\) and the Gaussian rigidity \(\bar{\kappa}\). When \(\bar{\kappa}<0\) the formation of handles is suppressed (\(g_{\rm st}<1\)).
In the partition function (7) the role of the Euclidean action is played by the Willmore energy (4). The latter can be written as \({\cal W}[\Sigma_{g}]=\int(\Delta{\bf X})^{2}\,{\rm d}S\) and is a fourth-order operator in \(\partial_{a}\). Thus, the free propagator \(\langle X^{\mu}(\mathbf{\xi}_{1})X^{\nu}(\mathbf{\xi}_{2})\rangle\) (represented by a cylinder in Fig. 1,_Bottom_) is the inverse of the _biharmonic_ operator: \(\left(\Delta\Delta\right)^{-1}\), where \(\Delta=\partial_{a}\partial^{a}\) is the Laplacian. This is different from the propagator of closed strings: there, the Polyakov action is quadratic in the derivatives (\(S_{\rm P}=\int(\partial_{a}{\bf X})^{2}\,{\rm d}S\)) so the free propagator is the inverse of the Laplacian \(\Delta^{-1}\).
## III Upper bound conjecture
We ask: is \(F\) [Eq. (9)] finite or divergent? The answer depends on the asymptotics of the functional integral as a function of \(g\). For topological spheres \(S^{2}\) (\(g=0\)) the Willmore functional is bounded below by \(4\pi\):
\[{\cal W}[S^{2}]\geq 4\pi\quad. \tag{10}\]
The bound is saturated for the round sphere of radius \(R\), \(S^{2}(R)\) modulo uniform scaling of the sphere radius. A simple rescaling does not change shape, so we regard all round spheres as the same configuration. With this caveat, the first term in the series is bounded above by \(e^{-4\pi\beta\bar{\kappa}}e^{-(\beta\kappa/2)4\pi}\).
For \(g\geq 1\) in (9) we must consider all topological tori. In 1965 T. Willmore [32] conjectured that \({\cal W}\geq 2\pi^{2}\) for all genus-1 surfaces and that the Clifford torus [29] saturates the bound up to conformal transformations of \(S^{3}\). In 2014 Marques and Neves [25] proved that for all _embedded_ closed surfaces of genus \(g\geq 1\)
\[{\cal W}[{\bf X}_{g\geq 1}]\geq 2\pi^{2}\quad, \tag{11}\]
with equality holding only for the Clifford torus (up to conformal transformations), thus proving Willmore's conjecture. At the time of writing, there is ongoing research [19; 20] aiming to prove that Lawson's minimal surfaces (denoted \(\xi_{g,1}\)[21]) of genus \(g\geq 2\) are the minimizers of the Willmore functional among all surfaces with the same ambient symmetries as \(\xi_{g,1}\).
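The genus-1 bound can be checked numerically on an explicit representative. The following sketch (assuming numpy and scipy are available; the torus of revolution with radius ratio \(\sqrt{2}\) used here is a conformal image of the Clifford torus) evaluates the Willmore functional (4) by quadrature and recovers \(2\pi^{2}\):

```python
import numpy as np
from scipy.integrate import dblquad

# Torus of revolution with major radius R and minor radius r, R/r = sqrt(2)
R, r = np.sqrt(2.0), 1.0

def integrand(v, u):
    k1 = 1.0 / r                          # principal curvature around the tube
    k2 = np.cos(v) / (R + r * np.cos(v))  # principal curvature around the symmetry axis
    H2 = 0.25 * (k1 + k2) ** 2            # squared mean curvature, Eq. (2)
    dS = r * (R + r * np.cos(v))          # area element of the parametrization
    return H2 * dS

W, _ = dblquad(integrand, 0.0, 2.0 * np.pi, 0.0, 2.0 * np.pi)
print(W, 2.0 * np.pi ** 2)                # both approximately 19.7392
```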
Using (10) and (11) in (9)
\[F\leq e^{-4\pi\beta\bar{\kappa}}\left\{e^{-2\pi\beta\kappa}+e^{-2\pi^{2}\beta \kappa}\sum_{g=1}^{\infty}e^{4\pi\beta\bar{\kappa}g}\int\mathscr{D}{\bf X}_{g}\right\} \tag{12}\]
Figure 1: _Top_. Diagrammatic representation of the free energy (9) of tensionless fluid membranes. The first terms in the sum are topological spheres (\(g=0\)), tori with 1 handle (\(g=1\)) and so on. The genus increases by one unit when a handle is attached. Every handle contributes with two factors of the coupling constant \(g_{\rm st}\). (defined in the main text), one for each excised disk (in red). _Bottom_. The free propagator (the inverse biharmonic operator) is represented as an open cylinder.
The integral over geometric fluctuations \(\Gamma(g)\equiv\int\mathscr{D}\mathbf{X}_{g}\) at fixed genus is subtle to evaluate because it requires a parametrization of the moduli space of _embedded_ surfaces. Moduli spaces of non-embedded surfaces lead to divergent partition functions - as happens in string theory [11]. Here we are concerned with the moduli space of physical, _embedded_ membranes. This constraint may lead to a finite growth of moduli space and a convergent free energy. From a physical point of view, as noted earlier, there will be natural suppression of higher-genus contributions arising both from steric constraints (handles have a minimal physical size) and the full amplitude for creating a handle, here parameterized by a single coupling constant \(\bar{\kappa}\).
In a low temperature expansion one sums over slight deformations of the ground states of the Willmore functional. For each genus there is a continuous family of ground states related by conformal transformations, thanks to the conformal invariance of (4). Since the conformal group contains unbounded elements it is necessary to sum over equivalence classes of shapes modulo rotations, translations and uniform dilatations which preserve the shape, as noted earlier for the sphere (\(g=0\)). One must then determine if the integration over special conformal transformations (SCTs) is finite.
As an example, we consider the embedded axisymmetric Clifford torus
\[\mathbf{X}_{\mathrm{Cl}}(\theta,\varphi)=\frac{1}{\sqrt{2}-\sin\varphi}\left( \cos\theta,\sin\theta,\cos\varphi\right) \tag{13}\]
where \(\theta,\varphi\in[0,2\pi[\) and we apply a SCT
\[\mathbf{X}\rightarrow\frac{\mathbf{X}/X^{2}+\mathbf{a}}{\left(\mathbf{X}/X^{ 2}+\mathbf{a}\right)^{2}}\quad. \tag{14}\]
The Willmore energy of the transformed shape is still equal to \(2\pi^{2}\). Moreover, it is known [7; 17] that \(\mathbf{a}=(0,0,a_{z})\) leads to a simple rescaling of (13) while \(\mathbf{a}=\rho(\cos\alpha,\sin\alpha,0)\) generates a one-parameter family of shapes (for \(0<\rho<\sqrt{2}-1\)) that break axial symmetry. \(\alpha\) simply rotates the shape around the \(z-\)axis. This space \(\Gamma(1)\) is _bounded_[17] because at \(\rho=\sqrt{2}-1\) the axisymmetric shape (13) deforms into a sphere with an infinitesimal handle. If \(\rho\) increases further the transformation retraces the shapes that appeared between \(0<\rho<\sqrt{2}-1\), so at least for \(g=1\) the degeneracy factor \(\Gamma(1)\) represents a bounded family of genus 1 shapes. We conjecture that similar arguments hold for higher-genus surfaces. If \(\Gamma(g)\) is finite for all \(g\), it is crucial to know how \(\Gamma(g)\) scales with \(g\). We comment on some representative cases below.
If \(\Gamma=\Gamma_{0}\) is independent of \(g\), the infinite sum in (12) can be rewritten as a geometric series. If \(e^{4\pi\beta\bar{\kappa}}<1\)[30] it is possible to sum the series in (9) and the bound takes the form
\[F\leq e^{-4\pi\beta\bar{\kappa}}e^{-2\pi\beta\kappa}+\frac{\Gamma_{0}e^{-2\pi ^{2}\beta\kappa}}{1-e^{4\pi\beta\bar{\kappa}}}\quad. \tag{15}\]
The result (15), with \(\Gamma_{0}=1\), becomes exact for an ensemble of closed membranes with fixed isoperimetric ratio, defined as \(r=V^{2}/A^{3}\) where \(V,A\) are the volume and area of the closed surface. When \(r\) is fixed, there is a unique ground state for the Willmore functional.
If \(\Gamma(g)\) is a polynomial, say \(\Gamma(g)=Ag^{k}\), then the bound reads
\[F\leq e^{-4\pi\beta\bar{\kappa}}\left\{e^{-2\pi\beta\kappa}+Ae^{-2\pi^{2}\beta \kappa}\mathrm{Li}_{-k}(e^{4\pi\beta\bar{\kappa}})\right\} \tag{16}\]
where \(\mathrm{Li}_{k}(z)\) is the polylogarithmic function [22].
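As an illustration of how the two bounds behave, the following sketch evaluates (15) and (16) for representative parameter values (the values of \(\kappa\), \(\bar{\kappa}\), \(\Gamma_{0}\), \(A\) and \(k\) below are illustrative choices, not taken from any experiment), evaluating \(\mathrm{Li}_{-k}\) by its rapidly convergent defining series:

```python
import math

# Illustrative parameter values, in units where k_B T = 1
beta = 1.0
kappa = 1.0          # bending rigidity
kappa_bar = -0.5     # Gaussian rigidity; negative, so handle formation is suppressed

x = math.exp(4.0 * math.pi * beta * kappa_bar)   # expansion parameter, < 1

# Bound (15): genus-independent degeneracy Gamma(g) = Gamma_0
Gamma_0 = 1.0
bound_15 = (math.exp(-4.0 * math.pi * beta * kappa_bar) * math.exp(-2.0 * math.pi * beta * kappa)
            + Gamma_0 * math.exp(-2.0 * math.pi ** 2 * beta * kappa) / (1.0 - x))

# Bound (16): polynomial degeneracy Gamma(g) = A g^k, with Li_{-k}(x) = sum_{g >= 1} g^k x^g
A, k = 1.0, 2
li = sum(g ** k * x ** g for g in range(1, 200))
bound_16 = math.exp(-4.0 * math.pi * beta * kappa_bar) * (
    math.exp(-2.0 * math.pi * beta * kappa)
    + A * math.exp(-2.0 * math.pi ** 2 * beta * kappa) * li)

print(f"bound (15): {bound_15:.6g}")
print(f"bound (16): {bound_16:.6g}")
```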
## IV Bound on the average number of handles
Consider a connected membrane with many handles such as a microemulsion in the plumber's nightmare phase [15]. The conjectured bound (12) has observable implications for the expected number of handles. Let \(\langle g\rangle\) be the expectation value of the genus
\[\langle g\rangle\equiv\frac{1}{Z}\sum_{g=0}^{\infty}g\cdot e^{-4\pi\beta\bar {\kappa}(1-g)}\int\mathscr{D}\mathbf{X}_{g}\,e^{-\frac{\beta\kappa}{2}\mathcal{ W}[\mathbf{X}_{g}]}\quad. \tag{17}\]
This can be written in terms of the free energy \(F=-k_{\rm B}T\log Z\) as
\[\langle g(\bar{\kappa})\rangle=1-\frac{1}{4\pi}\frac{\partial F(\bar{\kappa}) }{\partial\bar{\kappa}}\quad, \tag{18}\]
Figure 2: Family of ground states of \(\mathcal{W}\) among genus 1 surfaces, generated by applying (14) with \(\mathbf{a}=(\rho,0,0)\) to the axisymmetric Clifford torus (13) at \(\rho=0\). Upon increasing \(\rho\) (14) the circle marked in red shrinks while the black circle inflates. The black circle’s radius diverges as \(\rho\rightarrow\sqrt{2}-1\simeq 0.414\). The shapes repeat for \(\rho>\sqrt{2}-1\) up to a global scaling.
where the free energy and its conjectured upper bound depend parametrically on the Gaussian rigidity \(\bar{\kappa}\). Suppose that the value of \(\bar{\kappa}\) could be varied in experiments, for instance by changing the nature of the surfactants (see for example [13] and Table 1 in [14]), and that the associated value of \(\langle g(\bar{\kappa})\rangle\) could be measured for each realization of the ensemble [6]. Using (18), the ensemble average of \(\langle g(\bar{\kappa})\rangle\) is
\[\overline{\langle g(\bar{\kappa})\rangle} \equiv \frac{1}{\Delta\bar{\kappa}}\int_{\bar{\kappa}_{1}}^{\bar{\kappa }_{2}}\,\langle g(\bar{\kappa}^{\prime})\rangle\ \mathrm{d}\bar{\kappa}^{\prime} \tag{19}\] \[= 1-\frac{1}{4\pi\Delta\bar{\kappa}}[F(\bar{\kappa}_{2})-F(\bar{ \kappa}_{1})]\]
where we assumed that the Gaussian rigidity can be varied in the range \(\bar{\kappa}_{1}<\bar{\kappa}<\bar{\kappa}_{2}\) and \(\Delta\bar{\kappa}\equiv\bar{\kappa}_{2}-\bar{\kappa}_{1}\). Let \(b(\bar{\kappa})\) indicate the conjectured upper bound (12) on \(F\): \(F(\bar{\kappa})\leq b(\bar{\kappa})\). Then equation (19) turns into the following bound:
\[\overline{\langle g(\bar{\kappa})\rangle}<1-\frac{1}{4\pi\Delta\bar{\kappa}}[ b(\bar{\kappa}_{2})-b(\bar{\kappa}_{1})]\quad. \tag{20}\]
The average on the left-hand side in (20) could be measured in experiments. The bound (20) predicts that \(\overline{\langle g(\bar{\kappa})\rangle}\) is bounded above and so the expected number of handles should be finite and should depend only on the interval \(\Delta\bar{\kappa}\). Since \(g\geq 0\), (20) implies that \(b(\bar{\kappa})\) is a non-increasing function of the Gaussian rigidity. Eq. (20) implies that a tensionless fluid membrane at equilibrium has a finite average genus regardless of other interactions or constraints such as self-avoidance and steric repulsion.
## V Probability of forming a handle
Using Eq. (7) and (6), the probability that a membrane of genus \(g\) occurs at equilibrium is
\[p_{g}=\sum_{\{\Sigma_{g}\}}\frac{1}{Z}e^{-\beta\frac{\kappa}{2}\mathcal{W}[\Sigma_{g}]-\beta\bar{\kappa}4\pi(1-g)}\quad, \tag{21}\]
where the sum \(\sum_{\{\Sigma_{g}\}}\) is over all membrane configurations at fixed genus. We compute the ratio between \(p_{g+1}\) and \(p_{g}\). Since \(Z\) and \(\exp\{-\beta\bar{\kappa}4\pi(1-g)\}\) don't depend on the geometry of the configuration we find
\[\frac{p_{g+1}}{p_{g}}=e^{\beta 4\pi\bar{\kappa}}\frac{\sum_{\{\Sigma_{g+1}\}}e^{-\beta\frac{\kappa}{2}\mathcal{W}[\Sigma_{g+1}]}}{\sum_{\{\Sigma_{g}\}}e^{-\beta\frac{\kappa}{2}\mathcal{W}[\Sigma_{g}]}}\quad. \tag{22}\]
At low temperature (\(\beta\to\infty\)) we expect that the sum will be dominated by those configurations that minimize the Willmore functional:
\[\lim_{\beta\to\infty}\frac{p_{g+1}}{p_{g}}\sim e^{\beta 4\pi\bar{\kappa}}\frac{e^{-\beta\frac{\kappa}{2}\inf\mathcal{W}[\Sigma_{g+1}]}}{e^{-\beta\frac{\kappa}{2}\inf\mathcal{W}[\Sigma_{g}]}}\quad, \tag{23}\]
where \(\inf\) denotes the infimum. First, we restrict our attention to membranes of genus \(0\) (i.e. spheres, for which \(\inf\mathcal{W}[\Sigma_{0}]=4\pi\)) and \(1\) (i.e. tori, for which \(\inf\mathcal{W}[\Sigma_{1}]=2\pi^{2}\)). Then Eq. (23) becomes
\[\lim_{\beta\to\infty}\frac{p_{1}}{p_{0}}\sim e^{\beta 4\pi\bar{\kappa}}e^{-\beta\kappa(\pi^{2}-2\pi)}\quad. \tag{24}\]
The left-hand side of Eq. (24) could be determined experimentally by measuring the frequency of topological spheres and genus-1 tori at equilibrium and low temperature. If the bending rigidity \(\kappa\) could be measured independently, the Gaussian rigidity \(\bar{\kappa}\) could be determined from Eq. (24).
Another relation between relative probabilities and the Gaussian rigidity may be found by using R. Kusner's result [19] that the infimum of the Willmore functional is bounded above by \(8\pi\) for very large genus: \(\lim_{g\to\infty}\inf\mathcal{W}[\Sigma_{g}]=8\pi\). In the large-genus limit both numerator and denominator in the right-hand side of Eq. (23) tend to the same value \(\exp(-\beta\kappa/2\cdot 8\pi)\) and the ratio \(p_{g+1}/p_{g}\) becomes independent of \(\kappa\):
\[\lim_{\beta\to\infty}\lim_{g\to\infty}\frac{p_{g+1}}{p_{g}}\sim e^{\beta 4\pi\bar{\kappa}}\quad. \tag{25}\]
Equation (25) suggests that \(\bar{\kappa}\) could be determined directly by measuring the relative frequency of large-genus configurations (which may be rarer than spheres and genus-1 tori) at low temperature. This could provide a new method for finding \(\bar{\kappa}\), whose value is notoriously difficult to determine [31; 34].
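As a sketch of how these relations could be inverted in practice (the frequency ratios and the value of \(\kappa\) below are illustrative placeholders, not measured data):

```python
import math

# Energies in units of k_B T, so beta = 1
beta = 1.0
kappa = 2.0              # bending rigidity, assumed measured independently

# Eq. (24): p_1 / p_0 ~ exp(4 pi beta kappa_bar) exp(-beta kappa (pi^2 - 2 pi))
p1_over_p0 = 1.0e-5      # illustrative measured ratio of genus-1 to genus-0 shapes
kappa_bar_from_24 = (math.log(p1_over_p0)
                     + beta * kappa * (math.pi ** 2 - 2.0 * math.pi)) / (4.0 * math.pi * beta)

# Eq. (25): p_{g+1} / p_g ~ exp(4 pi beta kappa_bar) at large genus, independent of kappa
large_g_ratio = 5.0e-3   # illustrative measured large-genus ratio
kappa_bar_from_25 = math.log(large_g_ratio) / (4.0 * math.pi * beta)

print(f"kappa_bar from Eq. (24): {kappa_bar_from_24:.3f} k_B T")
print(f"kappa_bar from Eq. (25): {kappa_bar_from_25:.3f} k_B T")
```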
## VI Conclusions
We considered the free energy of closed, single-component membranes with vanishing surface tension. Using Willmore's bound on the bending energy we conjectured that the free energy is bounded above by a universal constant that depends on the bending rigidity and on the Gaussian rigidity. Such a bound implies that the membrane has a finite average number of handles. By interpreting the partition function as a field theory of fluid membranes we observed that the formation of handles is controlled by an effective coupling constant that depends on the Gaussian rigidity and the temperature. Finally, using the well-known upper bound on the infimum of the Willmore energy [19], we propose that the Gaussian rigidity \(\bar{\kappa}\) may be determined in experiments by measuring the relative frequencies of large-genus configurations at low temperature.
## VII Acknowledgements
This research was supported in part by the National Science Foundation under Grant No. NSF PHY 1748958. We benefited from several stimulating discussions with Rob Kusner.
|
2305.09568 | Step-based checkpointing with high-level algorithmic differentiation | Automated code generation allows for a separation between the development of
a model, expressed via a domain specific language, and lower level
implementation details. Algorithmic differentiation can be applied symbolically
at the level of the domain specific language, and the code generator reused to
implement code required for an adjoint calculation. However the adjoint
calculations are complicated by the well-known problem of storing or
recomputing the forward data required by the adjoint, and different
checkpointing strategies have been developed to tackle this problem. This
article considers the combination of high-level algorithmic differentiation
with step-based checkpointing schedules, with the primary application being for
solvers of time-dependent partial differential equations. The focus is on
algorithmic differentiation using a dynamically constructed record of forward
operations, where the precise structure of the original forward calculation is
unknown ahead-of-time. In addition, high-level approaches provide a simplified
view of the model itself. This allows data required to restart and advance the
forward, and data required to advance the adjoint, to be identified. The
difference between the two types of data is here leveraged to implement
checkpointing strategies with improved performance. | James R. Maddison | 2023-05-16T16:04:01Z | http://arxiv.org/abs/2305.09568v2 | # On the implementation of checkpointing with high-level algorithmic differentiation
###### Abstract
Automated code generation allows for a separation between the development of a model, expressed via a domain specific language, and lower level implementation details. Algorithmic differentiation can be applied symbolically at the level of the domain specific language, and the code generator reused to implement code required for an adjoint calculation. However the adjoint calculations are complicated by the well-known problem of storing or recomputing the forward model data required by the adjoint, and different checkpointing strategies have been developed to tackle this problem. This article describes the application of checkpointing strategies to high-level algorithmic differentiation, applied to codes developed using automated code generation. Since the high-level approach provides a simplified view of the model itself, the data required to restart the forward and data required to advance the adjoint can be identified, and the difference between them leveraged to implement checkpointing strategies of improved performance.
keywords: algorithmic differentiation; adjoint; reverse mode; checkpointing; automated code generation
## 1 Introduction
In Farrell et al. [1] a high-level approach for algorithmic differentiation is described, combining a view of a numerical code at the level of finite element discretized partial differential equations with automated code generation. The forward code is described using a domain specific language, the Unified Form Language [2], and specific code necessary to assemble matrices and vectors is generated using the FEniCS code generator [3; 4]. The dolfin-adjoint library described in Farrell et al. [1], and its successor pyadjoint [5], use a variant of the standard operator overloading approach to algorithmic differentiation,
intercepting forward calculations and building a record of problems solved as the forward calculation progresses. Symbolic differentiation is applied to build symbolic representations of components of the associated adjoint problem, and the code generator reused to construct implementations of code necessary for an adjoint calculation.
This high-level approach has two key features. Firstly, provided the forward code is represented in a form expressible in the domain specific language, the derivation of the adjoint can be automated, significantly simplifying the development of adjoint codes. Secondly, the number of operations which need to be recorded is significantly reduced. For example the solution of a finite element discretized partial differential equation may appear as a single record.
The adjoint problem is solved in a reverse causal sense to the originating forward code, but also depends in general upon the original forward solution, which must either be stored or recomputed for use by the adjoint. In one limiting case the entire forward solution may be stored in memory for use by the adjoint. For large problems - and in particular for solvers of large time-dependent partial differential equations - this approach will lead to memory limits being exceeded. In an alternative limiting case only the forward input parameters (for example initial conditions) are stored, and other forward data is recomputed by repeatedly solving the forward code. The first "store everything" approach has storage usage which grows linearly with the number of steps, but requires no recomputation, while the second "rerun everything" approach uses a bounded amount of additional storage, but the amount of forward computation is asymptotically quadratic in the number of steps.
More general checkpointing strategies compromise between these limiting cases by storing intermediate checkpoints and rerunning the forward between checkpoints, possibly with the storage of further checkpoints as the adjoint calculation progresses (see e.g. section 12.3 of Griewank and Walther [6]). In a typical approach the forward calculation is divided into a series of steps, for example corresponding to timesteps in a time dependent numerical solver, and then a checkpointing schedule prescribes when checkpoints should be stored and loaded, when the forward should advance, and when the adjoint should advance. The revolve algorithm [7] provides an optimal schedule for the case where the number of forward steps is known ahead of the calculation, and where checkpoints store data required to restart the forward but not necessarily data required to advance the adjoint. The revolve algorithm minimizes the number of forward steps taken and then, given that the number of forward steps is minimal, further minimizes the number of times a checkpoint is stored. Further approaches define schedules for the case where the number of steps is not known ahead of the forward calculation [e.g. 8, 9]. Checkpointing schedules may also consider cases where there are different types of storage available [10, 11, 12, 13], or where compression is applied when storing checkpoints [14].
In general the data required to restart a forward calculation, and the data required by the adjoint, differ. The dependencies of the forward Jacobian matrix - here termed the "non-linear dependencies" - include all parameters and forward variables on which the adjoint depends. Non-linear dependencies can
be identified from the symbolic representation of finite element discretized partial differential equations provided by the Unified Form Language. Combined with the simplified computational graph appearing in a high-level approach, the distinction between forward restart data and non-linear dependency data allows for the construction of checkpointing schedules of increased performance. Such schedules have recently been considered in the context of multi-stage Runge-Kutta schemes in Zhang and Constantinescu [15] and Zhang and Constantinescu [16]. Here the problem is considered more generally.
This article considers the case where the forward calculation is divided into a known sequence of steps, and considers the problem of defining a schedule with advancement of the forward or adjoint over full steps. Optimized strategies in more general cases may be challenging, noting for example that the problem of optimizing an adjoint calculation is itself NP-complete [17]. However, more general approaches appear in the context of backpropagation in neural networks [e.g. 18, 19].
The revolve algorithm of Griewank and Walther [7] is applied to high-level algorithmic differentiation with automated code generation in Farrell et al. [1] and Maddison et al. [20]. In this article a more general checkpointing schedule structure, for use with a high-level approach, is described. The schedule explicitly incorporates the buffering of data in an "intermediate storage", ensuring that forward variables can be defined and computed by the forward before storage in a checkpoint. The schedule further distinguishes between storage of forward restart and non-linear dependency data. The new schedule structure is sufficiently flexible to be applied to a number of existing approaches, including the revolve algorithm, the multistage approach of Stumm and Walther [10], the two-level mixed periodic/binomial approach described in Pringle et al. [21] and in the supporting information for Goldberg et al. [22], and H-Revolve schedules [13].
This article principally focuses on the application of checkpointing for adjoint calculations associated with models written using the Unified Form Language - particularly FEniCS [3, 4] and Firedrake [23]. The described approaches are implemented in Python in the tlm_adjoint library, and can be applied to any model compatible with tlm_adjoint, without further modification of code beyond the definition of forward steps and the schedule.
The article proceeds as follows. In section 2 details of forward and adjoint calculations, when viewed in terms of a high-level structure, are described. Section 3 describes a checkpointing schedule structure which can be applied in a high-level algorithmic differentiation approach, incorporating the use of an intermediate storage. This scheduling structure is applied in section 4 to implement a schedule which makes use of the additional assumption that the sizes of forward restart data and single step non-linear dependency data are the same. The resulting schedule is a version of the CAMS-GEN algorithm of Zhang and Constantinescu [16] for \(l=1\) stage. The approach as described here generalizes the algorithm to a broader class of models. The article concludes in section 5.
## 2 Forward and adjoint calculations
### The computational graph
The forward problem is divided into a number of subproblems, which can be considered to be the equations solved in the forward model to obtain values for forward variables, and hence are here referred to as the "equations". In a high-level approach one of these equations may correspond to a discrete partial differential equation. Each equation solves for one or more variables, requiring as inputs the values of zero or more parameters or variables. The parameters or variables may, for example, consist of single scalars, vectors, or finite element discretized functions. The forward model may therefore be visualized via a computational graph - a directed acyclic graph, with nodes of the graph corresponding to the parameters or variables, and directed edges to indicate the parameters or variables on which their calculation depends (with the edge directed from the dependency) - see e.g. Mitusch [24] for a discussion in the context of high-level algorithmic differentiation applied to FEniCS. Where a code computes values for the same programmatical variable more than once then, in the computational graph, later nodes appear corresponding to later values for the variable. Equations are further collected together into larger "steps", which may for example correspond to one or more timesteps in a time dependent numerical solver. Each equation (and similarly each step) is indexed, and the forward calculation solves the equations (and steps) in index order. As in Griewank and Walther [7] it is assumed that an appropriate division of the forward model into steps is provided, and forward or adjoint advancement always occurs over full steps.
As an example, the computational graph associated with a numerical solver for the barotropic vorticity equation on a beta plane is considered. The configuration corresponds to a time-dependent non-linear Stommel-Munk problem [25, chapter 14]. The model is implemented in Firedrake [23]. tlm_adjoint [20] is used to build the record of equations. Equations are discretized in space using \(P_{1}\) continuous Lagrange finite elements, and in time with third order Adams-Bashforth, started with a forward Euler step followed by a second order Adams-Bashforth step. In visualizations of the computational graph for this example, \(\psi\) corresponds to the stream function, \(\zeta\) to the relative vorticity, \(F\) to the right-hand-side of the barotropic vorticity equation, \(\psi_{0}\) to the initial stream function, \(Q\) to the wind forcing term appearing in the vorticity equation, \(\beta\) to the magnitude of the background planetary vorticity gradient, \(r\) to the linear bottom drag parameter, and \(\nu\) to the Laplacian viscosity coefficient. \(\zeta_{\rm prev}\), \(F_{0}\), \(F_{1}\), and \(F_{2}\) are auxiliary variables used in timestepping.
The computational graph for two timesteps of the numerical model is visualized in Figure 1. In this visualization the nodes of the graph correspond to the equations, computing the variable indicated in black. Directed edges indicate the earlier equations on whose solution variables computed in later equations depend. Parameters are defined to be dependencies which are required in the calculation of a forward variable, but which are not computed by solving earlier
forward equations. In the visualization these are indicated in blue. The parameters could alternatively appear as additional nodes in the graph - although later an auxiliary step indicating parameters of interest will play a similar role. Steps are indicated with red rectangles.
For example, equation 0 in step 0 corresponds to the assignment \(\psi\leftarrow\psi_{0}\), where the initial stream function \(\psi_{0}\) is a parameter. Equation 6 in step 0 corresponds to the solution of a discrete Poisson equation, inverting the relative vorticity \(\zeta\) to obtain the stream function \(\psi\). Step 0 corresponds to initialization and the first timestep, and step 1 to the second timestep and the evaluation of a functional.
### An adjoint calculation
A complete adjoint calculation consists of first solving all forward equations and then, in reverse order, solving for adjoint variables associated with each forward equation.
We first augment the forward model with two additional steps. A first step, appearing at the start of the calculation, copies the value of each input parameter of interest into an auxiliary model variable. A second step, appearing at the end of the calculation, copies the value of a functional of interest into an auxiliary output variable. The equations appearing in these auxiliary steps correspond to simple assignments, and their appearance simplifies the structure of the adjoint calculation to follow. An augmented model, considering the initial stream function \(\psi_{0}\) and the wind forcing parameter \(Q\) to be parameters of interest, is visualized in Figure 2.
Figure 1: Visualization of the computational graph for two timesteps in a solver for the barotropic vorticity equation. Step 0 corresponds to initialization and a forward Euler step, and step 1 to a second order Adams-Bashforth step and evaluation of a functional.
Associated with each forward equation is a residual function which takes as input all dependencies and a candidate for the equation solution. For example for equation 3 in step 0 in Figure 2 we have a residual function \(\mathcal{F}_{3}^{0}\left(F,\beta,r,\nu,Q^{\prime},\psi,\zeta\right)\) which takes as input the parameters \(\beta\), \(r\), and \(\nu\), the forward variables \(Q^{\prime}\), \(\psi\), and \(\zeta\) computed by solving earlier equations, and a candidate value for the solution \(F\). The solution to the equation is obtained by solving the root finding problem
\[\mathcal{F}_{3}^{0}\left(\hat{F},\beta,r,\nu,Q^{\prime},\psi,\zeta\right)=0,\]
to obtain the solution \(\hat{F}\). Since no ambiguity arises, the distinction between the solution candidate argument (here \(F\)) for a residual function and the solution obtained for given values of the dependencies (here \(\hat{F}\)) is dropped.
The solution to the \(j\)th forward equation in step \(i\) is denoted \(u_{j}^{i}\), with \(i=-1\) corresponding to the auxiliary parameters step and \(i=N\) corresponding to the auxiliary functional step. For simplicity it is assumed that the \(u_{j}^{i}\) are real vectors, each with length \(M_{j}^{i}\) for some positive integer \(M_{j}^{i}\). Step \(i\) has \(N_{i}\) equations, with \(N_{i}\) a positive integer, and the residual function for the \(j\)th forward equation in step \(i\) is denoted \(\mathcal{F}_{j}^{i}\) and has codomain \(\mathbb{R}^{M_{j}^{i}}\). Residual functions in the auxiliary parameters and functional steps are defined such that their derivative with respect to the candidate solution argument is an identity matrix. Equations in each step are indexed in a forward causal sense - in a forward calculation the calculation for \(u_{j+1}^{i}\) occurs after the calculation for \(u_{j}^{i}\), and the calculation for \(u_{0}^{i+1}\) occurs after the calculation for \(u_{N_{i}-1}^{i}\). Associated with the \(j\)th forward equation in the \(i\)th step we introduce an adjoint variable \(\lambda_{j}^{i}\) and an adjoint right-hand-side \(b_{j}^{i}\), which are each real vectors with the same length as \(u_{j}^{i}\).
Figure 2: As in Figure 1, but with the introduction of auxiliary steps to copy input parameters (the new step -1) and copy the output functional (the new step 2).
The adjoint calculation then proceeds according to Algorithm 1. In Algorithm 1 the element in the \(\alpha\)th row and \(\beta\)th column of a matrix \(\partial\mathcal{F}_{j}^{i}/\partial u_{l}^{k}\) contains the partial derivative of the \(\alpha\)th component of \(F_{j}^{i}\) with respect to the \(\beta\)th component of \(u_{l}^{k}\), each \(\partial\mathcal{F}_{j}^{i}/\partial u_{j}^{i}\) is assumed invertible, and all vectors are column vectors. At the end of the calculation the adjoint variable \(\lambda_{j}^{-1}\) is the derivative of the functional with respect to the parameter associated with the \(j\)th equation in the auxiliary parameters step.1
Footnote 1: In the complex case the matrix transposes are replaced with Hermitian transposes, and the adjoint variables \(\lambda_{j}^{-1}\) are the complex conjugate of the derivatives.
**Algorithm 1** An adjoint calculation for a forward model whose variables are real vectors.

**Result:** Sensitivities \(\lambda_{j}^{-1}\)

1. For \(i\leftarrow-1\) to \(N-1\): for \(j\leftarrow 0\) to \(N_{i}-1\): set \(b_{j}^{i}\leftarrow 0\).
2. Set \(b_{0}^{N}\leftarrow 1\).
3. For \(i\leftarrow N\) down to \(-1\):
    1. For \(j\leftarrow N_{i}-1\) down to \(0\):
        1. Solve the adjoint linear system \(\left(\partial\mathcal{F}_{j}^{i}/\partial u_{j}^{i}\right)^{T}\lambda_{j}^{i}=b_{j}^{i}\) for \(\lambda_{j}^{i}\).
        2. For each variable \(u_{l}^{k}\) on which the calculation for \(u_{j}^{i}\) depends: \(b_{l}^{k}\leftarrow b_{l}^{k}-\left(\partial\mathcal{F}_{j}^{i}/\partial u_{l}^{k}\right)^{T}\lambda_{j}^{i}\).
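As a concrete illustration of the sweep, the following sketch implements Algorithm 1 for a model supplied as dense Jacobian blocks. The callables jacobian and dependencies and the mappings n_eqs and sizes are hypothetical inputs introduced only for this example; a high-level implementation instead assembles these operators symbolically, and would include the optimizations discussed below.

```python
import numpy as np

def adjoint_sweep(N, n_eqs, jacobian, dependencies, sizes):
    """Reverse sweep of Algorithm 1 for a model given as dense Jacobian blocks.

    N:            number of forward steps; steps are indexed -1, 0, ..., N, with
                  -1 the auxiliary parameters step and N the auxiliary functional step
    n_eqs:        dict mapping a step index i to the number of equations N_i in that step
    jacobian:     jacobian(i, j, k, l) returns the dense matrix dF_j^i / du_l^k
    dependencies: dependencies(i, j) returns the (k, l) indices of the variables u_l^k
                  on which the calculation for u_j^i depends
    sizes:        dict mapping (i, j) to the length M_j^i of the variable u_j^i
    """
    # Initialize all adjoint right-hand-sides b_j^i to zero
    b = {key: np.zeros(m) for key, m in sizes.items()}
    lam = {}

    # Seed the adjoint via the single equation of the auxiliary functional step
    b[(N, 0)][:] = 1.0

    # Steps N, N - 1, ..., -1 in reverse order, equations in reverse order within each step
    for i in range(N, -2, -1):
        for j in reversed(range(n_eqs[i])):
            A = jacobian(i, j, i, j)
            lam[(i, j)] = np.linalg.solve(A.T, b[(i, j)])
            for (k, l) in dependencies(i, j):
                b[(k, l)] -= jacobian(i, j, k, l).T @ lam[(i, j)]

    # Adjoint variables of the auxiliary parameters step are the sensitivities
    return {j: lam[(-1, j)] for j in range(n_eqs[-1])}
```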
An implementation of this algorithm can be optimized so that memory for a right-hand-side \(b_{j}^{i}\) is allocated only when the first adjoint term is added, and to handle the (commonly encountered) case where \(\partial\mathcal{F}_{j}^{i}/\partial u_{j}^{i}\) is an identity matrix. An activity analysis can be applied (e.g. Griewank and Walther [6], section 6.2) to avoid calculating adjoint terms or variables which do not depend implicitly on the adjoint initial condition \(b_{0}^{N}\), and which do not implicitly influence the \(\lambda_{j}^{-1}\) - that is, \(\lambda_{j}^{i}\) and terms contributing to \(b_{j}^{i}\) need only be computed if, in the computational graph, \(J\) is reachable from \(u_{j}^{i}\) and \(u_{j}^{i}\) is reachable from any \(u_{k}^{-1}\).
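A minimal sketch of this activity analysis, expressed directly as reachability queries on the computational graph (using networkx; the graph and node objects are assumed to be supplied by the caller, and this is an illustration rather than the tlm_adjoint implementation):

```python
import networkx as nx

def active_variables(graph, parameters, functional):
    """Variables for which adjoint terms must be computed.

    graph:      networkx.DiGraph with an edge u -> v when the calculation for v depends on u
    parameters: iterable of nodes corresponding to the parameters of interest u_k^{-1}
    functional: the node corresponding to the functional J
    """
    influences_functional = nx.ancestors(graph, functional) | {functional}
    depends_on_parameters = set()
    for p in parameters:
        depends_on_parameters |= nx.descendants(graph, p) | {p}
    # Active variables both influence J and depend on at least one parameter of interest
    return influences_functional & depends_on_parameters
```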
A subset of the parameters and forward variables is identified, consisting of only those needed to compute the matrices \(\partial\mathcal{F}_{j}^{i}/\partial u_{j}^{i}\) and \(\partial\mathcal{F}_{j}^{i}/\partial u_{l}^{k}\) which appear in Algorithm 1. These are here referred to as the "non-linear dependencies".
The adjoint variables \(\lambda_{j}^{i}\) are associated with forward equations, rather than with parameters or variables. This is as expected, since adjoint variables are in the space dual to the codomain of the forward residual functions. For example, if evaluation of a forward residual yields coefficients for a linear functional, mapping from a finite element discrete function space and expanded in a dual basis, then the associated adjoint variable corresponds to coefficients for a discrete
function expanded in the associated finite element basis (and vice versa).
The adjoint calculations associated with the auxiliary functional step with index \(N\) consist of an assignment \(u_{0}^{N}\gets b_{0}^{N}\) and an addition to one element of \(b_{l}^{k}\), \(b_{l,\alpha}^{k}\gets b_{l,\alpha}^{k}+u_{0}^{N}\) for some \(\alpha\), after which \(b_{l,\alpha}^{k}=1\). The auxiliary functional step facilitates initialization of the adjoint - in practice this may simplify implementation in code, as code used to process the computational graph can be used to assist in initialization of the adjoint. The adjoint calculations associated with the auxiliary functional step with index \(N\) are therefore extremely simple. The adjoint calculations associated with the auxiliary parameters step with index \(-1\) are simple assignments, \(\lambda_{j}^{-1}\gets b_{j}^{-1}\). Again, in practice the introduction of this step may simplify implementation in code, as the calculation of sensitivities need not be considered separately from other elements of the adjoint calculation. Since the cost of calculations in the auxiliary steps is trivial, checkpointing schedules need to consider advancement of the adjoint only over the \(N\) steps with indices \(N-1\) to \(0\) inclusive. For the remainder of this article we therefore do not explicitly include the auxiliary parameter and functional steps, but note that these can be added and used to facilitate the initialization of an adjoint calculation or the calculation of a sensitivity.
## 3 Defining a checkpointing schedule
A checkpointing schedule prescribes the combination of an original forward calculation together with one adjoint calculation, prescribing for example when checkpoints should be stored and loaded, and when the forward or adjoint should advance. In this section the requirements of a checkpointing schedule are discussed, incorporating a distinction between the data required to reinitialize and advance the forward, and data required to advance the adjoint.
Here forward advancement refers to the solution of equations in one or more steps, proceeding in a forward causal sense. Adjoint advancement refers to the solutions of adjoint linear systems in one or more steps, together with the addition of contributions to adjoint right-hand-sides (possibly appearing in earlier steps), proceeding in a reverse causal sense.
### Forward and adjoint dependencies
It is important to identify two classes of dependencies: the dependencies required to restart and advance the forward calculation, and the dependencies required to advance the adjoint calculation.
Returning to the numerical solver for the barotropic vorticity equation, the computational graph associated with three timesteps is visualized in Figure 3. We can now consider, as an example, the dependencies which would need to be stored in a checkpoint associated with the start of step 1, sufficient to solve forward equations in steps 1 and 2. These are given by the parameters \(\beta\), \(r\), \(Q\), and \(\nu\), as well as the variables \(F_{0}\), \(\zeta\), and \(\psi\), computed respectively in equations 4, 5, and 6 in step 0. Note that, in general, the data which needs to be stored in a checkpoint to restart the forward calculation is dependent not only upon
where the forward is to be restarted, but also upon where it is to be advanced - since later steps can depend on new parameters, or on additional variables not encountered in preceding steps. In particular there may be "long-range" dependencies on variables computed in much earlier steps. In a typical example a variable might be computed in the first step, during initialization of a model, but only be used much later in the calculation. In Figure 3 we see, for example, that the calculation for \(\zeta\) in equation 3 in step 2 depends on \(F_{0}\) computed in equation 4 in step 0.
We next consider the dependencies sufficient to advance the adjoint calculation over equations in steps 2 and 1 - that is, to solve for adjoint variables associated with each equation, and to add terms to other adjoint equations (noting that we may add terms to adjoint right-hand-sides associated with earlier steps - here step 0). The non-linear dependencies of the forward are sufficient for an adjoint calculation. For this example, ignoring possible non-linear dependencies associated with the functional \(J\), for advancement of the adjoint over step 2 it suffices that the parameters \(\beta\), \(r\), and \(\nu\), are stored, together with \(\zeta\) and \(\psi\) computed respectively in equations 3 and 4 in step 1. For advancement of the adjoint over step 1 it suffices that the parameters \(\beta\), \(r\), and \(\nu\), are stored, together with \(\zeta\) and \(\psi\) computed respectively in equations 5 and 6 in step 0.
We can now consider making a choice between storing a forward restart checkpoint, sufficient to restart the forward at the start of step 1 and advance over steps 1 and 2, versus storing the non-linear dependencies for steps 1 and 2. In the case of a forward restart checkpoint, after loading the checkpoint the forward must in general advance to recompute non-linear dependency data. In the case where non-linear dependency data is stored no additional forward advancement is needed. If parameters are ignored (which might typically be stored separately in memory), and ignoring possible non-linear dependencies associated with the functional \(J\), the restart checkpoint requires storage of three variables, while storing the non-linear dependencies directly requires storage of only two variables per adjoint step.
Figure 3: Visualization of the computational graph for three timesteps in a solver for the barotropic vorticity equation. Step 0 corresponds to initialization and a forward Euler step, step 1 to a second order Adams-Bashforth step, and step 2 to a third order Adams-Bashforth step and evaluation of a functional.
In Griewank and Walther [7] it is indicated that steps should be chosen so that the amount of data required to advance the adjoint is at least as large as the amount of data stored in a checkpoint. To some extent these requirements can be met through a redefinition of the steps - for example one might choose to include equations corresponding to more than one timestep in a step. A high-level view of the forward model, or a static analysis of the forward, might perhaps facilitate such a definition of the steps.
It should also be noted that the high-level approach might typically be expected to reduce the size of the adjoint dependency data. For example, since the model is viewed in terms of high-level operations, complete solvers for time-dependent partial differential equations can be written and differentiated without the need for the algorithmic differentiation tool to build a record of procedure calls. Any intermediate variables involved in the lower-level calculations are also invisible to the high-level algorithmic differentiation tool, and so cannot generate further data dependencies. This observation is later used to motivate a checkpointing schedule in which the size of forward restart data and single-step non-linear dependency data is assumed to be the same.
### Choosing the data to store in a checkpoint
The revolve algorithm [7] is optimal, in the sense that it requires a minimal number of forward steps to solve the forward and adjoint problems, and then satisfies the secondary condition that the number of times a checkpoint is stored is minimized. Optimality of the number of forward steps is dependent upon the assumption that the forward must always advance after loading data from a checkpoint. H-Revolve [13] provides more advanced schedules, balancing computation and storage costs, but defines the forward problem in terms of a chain involving the full forward solution on each step.
If checkpoints include non-linear dependency data, then the forward may not always need to advance after loading from a checkpoint. An approach of this type is used in Zhang and Constantinescu [15] and Zhang and Constantinescu [16], in the context of multi-stage Runge-Kutta schemes, to arrive at checkpointing strategies which can be more efficient than the revolve algorithm. The additional performance is achieved by permitting Runge-Kutta stage data to be stored in a checkpoint - the stage data being non-linear dependencies when a Runge-Kutta method is applied to a non-linear ordinary differential equation.
Alternatively, instead of storing the data required to restart a forward calculation at the start of a step, for advancement over a consecutive sequence of steps, one may instead store some or all of the forward variables computed _within_ those steps - which may be advantageous if the calculation of some forward variables is expensive. This is potentially distinct from the storage of non-linear dependency data, but requires an additional balancing between storage and recomputation costs.
When storing data for later use in an adjoint calculation there is therefore a choice as to whether to store dependencies required to initialize the forward calculation, to store variables computed within the forward calculation, or to store dependencies required by the adjoint - these may all differ and may overlap. A fully optimized strategy may need to apply some combination of approaches,
balancing storage and recalculation costs. This is then further complicated by noting that the data to be stored in a forward restart checkpoint depends in general on the set of steps to which it applies, and not only on the index of the first step.
Any data stored to facilitate later adjoint advancement is here referred to as a "checkpoint", noting that in general such a checkpoint may not include sufficient data for the forward to be restarted.
### A revolve schedule
An example of a revolve schedule, generated using the crevolve component of pyRevolve [26], is shown in Table 1 for the case of 4 steps and a maximum of 2 checkpoints. The notation action(parameters) denotes an action and its parameters. Actions are as described in Griewank and Walther [7].
* takeshot(n_0, check, where): Store a checkpoint associated with restarting the forward at the start of step n_0 in a checkpoint with index check. If where is true then the checkpoint is stored in memory, otherwise it is stored on disk.
* advance(n_0, n_1): Advance the forward from the start of step n_0 to the start of step n_1.
* restore(n_0, check, where): Restore a checkpoint associated with restarting the forward at the start of step n_0 from the checkpoint with index check. If where is true then the checkpoint is stored in memory, otherwise it is stored on disk.
* firsturn(): Advance the forward over the final step, storing data needed by the adjoint, and then advance the adjoint over the final step.
* youturn(): Advance the forward one step, storing data needed by the adjoint, and then advance the adjoint one step.
* terminate(): The end of the schedule.
The where parameters for the takeshot and restore actions are used for the multi-stage approach of Stumm and Walther [10].
In the operator overloading approach to algorithmic differentiation applied in Farrell et al. [1], the record of forward equations is constructed dynamically at runtime. This leads to an issue with the schedule: the schedule indicates that a checkpoint associated with the start of a given step should be stored _before_ indicating that the forward should advance over that (and possibly later) steps. For example action 2 in Table 1 indicates that a checkpoint associated with the start of step 1 should be stored, while action 3 indicates that the forward should advance from the start of step 1 to the start of step 3. The data which should be stored in the checkpoint at action 2 is defined by the equations solved in action 3 - consequently the data to be stored in a checkpoint is not known when the schedule indicates that the checkpoint should be stored. This issue is addressed
in Maddison et al. [20] by deferring storage of the checkpoint - data is buffered, and the checkpoint associated with action 2 is written only at action 4, when the data to be stored is known.
### Controlling intermediate storage via the schedule
In order to define a scheduling structure for use with a high-level operator overloading approach to algorithmic differentiation, an intermediate storage is introduced. The intermediate storage is used both to buffer forward data, assembling the data required for a checkpoint, and also to store non-linear dependency data for use by an adjoint calculation. The state of the intermediate storage is controlled via the introduction of additional actions in the checkpointing schedule, which enable or disable the storage of forward restart data, enable or disable storage of non-linear dependency data, and clear the intermediate storage.
The revolve algorithm assumes that additional storage - beyond that allocated for checkpoints - is available to store the dependencies of the adjoint necessary to advance the adjoint one step. In Griewank and Walther [7] it is indicated that steps should be defined so that the amount of data required to advance the adjoint one step is at least as large as the amount of data stored in a checkpoint. With this assumption, and for checkpointing schedules that do not store both forward restart data and non-linear dependency data at the same time (which includes revolve schedules), the additional storage can be reused to buffer data for forward restart checkpointing. This additional storage is here referred to as the "intermediate storage".
\begin{table}
\begin{tabular}{c|l|c|c} index & action(parameters) & forward state & adjoint state \\ \hline
0 & takeshot(0, 0, False) & - & - \\
1 & advance(0, 1) & \(0\to 1\) & - \\
2 & takeshot(1, 1, False) & - & - \\
3 & advance(1, 3) & \(1\to 3\) & - \\
4 & firsturn() & \(3\to 4\) & \(4\to 3\) \\
5 & restore(1, 1, False) & \(\to 1\) & - \\
6 & advance(1, 2) & \(1\to 2\) & - \\
7 & youturn() & \(2\to 3\) & \(3\to 2\) \\
8 & restore(1, 1, False) & \(\to 1\) & - \\
9 & youturn() & \(1\to 2\) & \(2\to 1\) \\
10 & restore(0, 0, False) & \(\to 0\) & - \\
11 & youturn() & \(0\to 1\) & \(1\to 0\) \\
12 & terminate() & - & - \\ \end{tabular}
\end{table}
Table 1: Checkpointing schedule generated using the crevolve component of pyRevolve. The original forward calculation consists of 4 steps, and there are a maximum of 2 permitted checkpoints at any one time. The forward and adjoint states refer to the start of the given steps, indexing from zero. The forward advances 8 steps in total.
The schedule now consists of additional actions which control the storage and deletion of checkpoints, advancement of the forward and adjoint, and also the state of the intermediate storage. Specifically a schedule now consists of the following actions and parameters.
* Clear(clear_ics, clear_data): Clear the intermediate storage. If clear_ics is true, clear the buffer used to store forward restart data. If clear_data is true, clear storage of non-linear dependency data.
* Configure(store_ics, store_data): Configure the intermediate storage. If store_ics is true, enable buffering of forward restart data. If store_data is true, enable storage of non-linear dependency data.
* Write(n, storage): Transfer the intermediate storage to a checkpoint stored in the storage indicated by storage, yielding a checkpoint associated with step n.
* Forward(n_0, n_1): Advance the forward from the start of step n_0 to the start of step n_1.
* Read(n, storage, delete). Load a checkpoint associated with step n from the storage indicated by storage, and store in the intermediate storage. If delete is true then the checkpoint should be deleted.
* Reverse(n_1, n_0): Advance the adjoint from the start of step n_1 to the start of step n_0 (i.e. advance the adjoint over steps n_1 - 1 to n_0 inclusive).
* EndForward(): Indicates that the original forward calculation has concluded.
* EndReverse(exhausted): Indicates that an adjoint calculation has concluded. If exhausted is true then no further adjoint calculation is possible without a complete recalculation of the forward, and the schedule concludes. Otherwise further actions can be supplied for additional adjoint calculations.
The schedules considered in this article always make use of checkpoint data by loading a checkpoint before it is deleted, and hence the delete parameter of the Read action can be used to delete a checkpoint the last time it is loaded. For more general schedules an extra action to delete checkpoints could be added.
For checkpoint deferment, actions 2 and 3 in Table 1 can be replaced with:
* Configure(True, False): Enable storage of forward restart data, disable storage of non-linear dependency data.
* Forward(1, 3): Advance the forward from the start of step 1 to the start of step 3.
* Write(1, disk): Transfer the intermediate storage to a checkpoint, yielding a checkpoint associated with the start of step 1 sufficient to advance the forward over steps 1 and 2.
* Clear(True, True): Clear the intermediate storage.
A revolve firsturn action, for example action 4 in Table 1, can be replaced with:
* Configure(False, True): Disable storage of forward restart data, enable storage of non-linear dependency data.
* Forward(3, 4): Advance the forward over the final step.
* Reverse(4, 3): Advance the adjoint over the final step.
* Clear(True, True): Clear the intermediate storage.
A revolve youturn action can be handled similarly.
Read actions direct that the loaded checkpoint data should be stored in the intermediate storage. This approach allows the following construction
* Read(2, 'RAM', True)
* Write(2, 'disk')
to transfer a checkpoint from memory to disk. Such transfers can occur, for example, in H-Revolve schedules [13]. When loading forward restart data the forward calculation itself must also be initialised to allow later forward advancement. The details of this re-initialisation are not specified as part of the schedule, but the forward may be initialized when loading forward restart checkpoints. In the schedules to follow Clear actions are added after Read actions, and it can alternatively be inferred that the forward calculation can be re-initialised here, from the intermediate storage, before the intermediate storage is cleared.
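The corresponding application-side bookkeeping might, purely for illustration, look as follows (the storage back-ends and checkpoint layout here are placeholder choices, not the tlm_adjoint implementation):

```python
# Intermediate storage: buffered forward restart data ("ics") and stored
# non-linear dependency data ("data")
intermediate = {"ics": {}, "data": {}}
# Checkpoint storage locations
checkpoints = {"RAM": {}, "disk": {}}

def write(n, storage):
    # Write action: transfer the intermediate storage to a checkpoint for step n
    checkpoints[storage][n] = {"ics": dict(intermediate["ics"]),
                               "data": dict(intermediate["data"])}

def read(n, storage, delete):
    # Read action: load a checkpoint into the intermediate storage; the forward
    # can then be re-initialized from the loaded restart data before a Clear
    cp = checkpoints[storage][n]
    intermediate["ics"].update(cp["ics"])
    intermediate["data"].update(cp["data"])
    if delete:
        del checkpoints[storage][n]

def clear(clear_ics, clear_data):
    # Clear action: empty the requested parts of the intermediate storage
    if clear_ics:
        intermediate["ics"].clear()
    if clear_data:
        intermediate["data"].clear()

# The memory-to-disk transfer described in the text is then simply
#     read(2, "RAM", True)
#     write(2, "disk")
```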
### Indicating the number of model steps
The generation of a schedule always requires one piece of information from the application code: the total number of steps. In the following an auxiliary action (not part of the schedule, but instead provided by the application code) Initialize(max_n) indicates, at the start of the calculation, that the number of steps is max_n. The auxiliary action Finalize(max_n) indicates, at the end of the original forward calculation, that the number of steps is max_n.
### Example schedules
Checkpointing schedules are implemented in tlm_adjoint using Python generators. This approach allows a schedule to indicate a checkpointing action to perform, and to hand back control to the application code, while also maintaining the current state of the scheduler. The logical flow of code defining the schedule itself is maintained, simplifying the implementation of new schedules.
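To make the pattern concrete, the following is a minimal, self-contained sketch of a schedule expressed as a Python generator. The action tuples, and the assumption that the number of steps is known in advance, are illustrative simplifications and do not correspond to the tlm_adjoint action classes or API. For a 4-step forward with a period of 2 the yielded sequence matches the periodic disk schedule discussed below (Table 3), apart from the handling of an initially unknown number of steps.

```python
def periodic_disk_schedule(n, period=2):
    """Illustrative generator yielding (action, *parameters) tuples for a
    periodic disk checkpointing schedule (cf. Table 3), assuming the number
    of forward steps n is known in advance."""
    # Original forward calculation: buffer forward restart data and write a
    # checkpoint to disk every `period` steps
    for n0 in range(0, n, period):
        yield ("Configure", True, False)
        yield ("Forward", n0, min(n0 + period, n))
        yield ("Write", n0, "disk")
        yield ("Clear", True, True)
    yield ("EndForward",)
    # Adjoint calculation: reload each checkpoint, recompute the non-linear
    # dependency data by rerunning the forward, and advance the adjoint
    for n0 in reversed(range(0, n, period)):
        n1 = min(n0 + period, n)
        yield ("Read", n0, "disk", False)
        yield ("Clear", True, True)
        yield ("Configure", False, True)
        yield ("Forward", n0, n1)
        yield ("Reverse", n1, n0)
        yield ("Clear", True, True)
    yield ("EndReverse", False)


# The application code drives the generator, performing each action in turn
for action in periodic_disk_schedule(4):
    print(action)
```

The generator hands back control at each yield while retaining its own state, which is what allows the logical flow of the schedule definition to be preserved.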
The schedules can be applied in general to any model compatible with the tlm_adjoint algorithmic differentiation tool. No further modification of application code, beyond the specification of forward steps and the definition of the schedule, is needed. All checkpointing schedules can, moreover, be applied to
higher order adjoint calculations. In the reverse-over-forward approach used by tlm_adjoint tangent-linear equations are derived and then processed as new forward equations [20]. This allows, for example, more advanced checkpointing schedules to be applied in the calculation of Hessian matrix actions.
The "non-linear dependencies" as considered here are defined to be the forward dependencies of the adjoint. tlm_adjoint substitutes these with dependencies of all derivatives of forward residuals, which defines a superset of the desired non-linear dependencies. This may in particular include excess parameters or variables when an activity analysis is applied.
#### 3.6.1 Full storage
Table 2 provides a schedule associated with the "store everything" approach, where data is stored in memory. This is trivially applicable to a model with an arbitrary number of steps. In the schedule this is evident in action 1, where the forward is permitted to advance a large number of steps (here the value of sys.maxsize in Python). No checkpoints are stored, and instead both forward restart data and non-linear dependency data are retained in the intermediate storage. At the end of the calculation the data in the intermediate storage is retained, permitting a subsequent adjoint calculation without needing a new forward calculation.
#### 3.6.2 Periodic storage
Table 3 provides a schedule associated with the periodic storage of checkpoint data on disk, with checkpoints stored every 2 steps, and with forward restart data buffered in memory. In this schedule the non-linear dependency data for multiple steps, between the forward restart checkpoint locations, is stored in the intermediate storage when advancing the adjoint. If pairs of consecutive steps were to be combined into a single step then this approach is equivalent to a "store all forward restarts" approach, storing data necessary to restart the forward at the start of any given step. The number of forward steps in the original forward calculation is considered initially unknown, leading to additional advancement of the forward over steps 3 and 4. The checkpoints on disk are retained, permitting a subsequent adjoint calculation without needing a new forward calculation.
\begin{table}
\begin{tabular}{c|l|c|c} index & action(parameters) & forward & adjoint \\ & state & state \\ \hline
0 & Configure(True, True) & - & - \\
1 & Forward(0, 9223372036854775807) & \(0\to 4\) & - \\ - & Finalize(4) & - & - \\
2 & EndForward() & - & \\
3 & Reverse(4, 0) & - & \(4\to 0\) \\
4 & EndReverse(False) & - & - \\ \end{tabular}
\end{table}
Table 2: A “store everything” checkpointing schedule for a model consisting of 4 steps.
\begin{table}
\begin{tabular}{c|l|c|c} index & action(parameters) & forward & adjoint \\ & state & state \\ \hline
0 & Configure(True, False) & - & - \\
1 & Forward(0, 2) & \(0\to 2\) & - \\
2 & Write(0, disk) & - & - \\
3 & Clear(True, True) & - & - \\
4 & Configure(True, False) & - & - \\
5 & Forward(2, 4) & \(2\to 4\) & - \\ - & Finalize(4) & - & - \\
6 & Write(2, disk) & - & - \\
7 & Clear(True, True) & - & - \\
8 & EndForward() & - & - \\
9 & Read(2, disk, False) & \(\to 2\) & - \\
10 & Clear(True, True) & - & - \\
11 & Configure(False, True) & - & - \\
12 & Forward(2, 4) & \(2\to 4\) & - \\
13 & Reverse(4, 2) & - & \(4\to 2\) \\
14 & Clear(True, True) & - & - \\
15 & Read(0, disk, False) & \(\to 0\) & - \\
16 & Clear(True, True) & - & - \\
17 & Configure(False, True) & - & - \\
18 & Forward(0, 2) & \(0\to 2\) & - \\
19 & Reverse(2, 0) & - & \(2\to 0\) \\
20 & Clear(True, True) & - & - \\
21 & EndReverse(False) & - & - \\ \end{tabular}
\end{table}
Table 3: A checkpointing schedule associated with periodic storage on disk, for a model consisting of 4 steps, where the number of steps is only indicated to the scheduler at the end of the original forward calculation.
#### 3.6.3 A revolve schedule
The schedule in Table 4 corresponds to the revolve schedule in Table 1. The intermediate storage has one of three states: storing only forward data for a forward restart checkpoint (e.g. actions 0-3), storing only non-linear dependency data for the adjoint (e.g. actions 18-21), and storing no data (actions 15-17). The revolve algorithm only permits the adjoint to be run once per original forward calculation, and hence the EndReverse action has exhausted=True.
Note that, while it may break the assumptions of the revolve algorithm and storage limits might be exceeded, the schedule permits a complete adjoint calculation even with an arbitrary computational graph. For example action 4 indicates that data for a forward restart checkpoint should be stored in the intermediate storage. This occurs in action 5 as the forward advances. The data is transferred to a checkpoint in action 6, and then the intermediate storage is cleared in action 7. Even if calculations in later steps depend on additional forward variables, not recorded in the forward restart checkpoint, these need not appear in the checkpoint. In this example any additional forward variables required to advance over step 3 need not be stored in the checkpoint, as the forward only advances over step 3 once (in action 9). More generally data required for advancement of the forward over later steps may appear in later forward restart checkpoints. Through explicit control of both the intermediate storage and forward advancement via the schedule, it is possible to store only data necessary to advance the forward over a specific sequence of steps.
There is, however, potential inefficiency during the adjoint calculation, as the range of steps over which the forward needs to advance from a forward restart checkpoint may reduce, either as the adjoint advances or as new forward restart checkpoints are created. For example the forward restart checkpoint created in actions 4-7 contains data sufficient to advance over steps 1 and 2. At action 20 the adjoint advances over step 2, and any additional data required to advance the forward over step 2 could then in principle be removed from the checkpoint.
The storage used for checkpoints is indicated by the storage parameter for the Read or Write actions. tlm_adjoint includes an implementation of the mixed memory-disk approach of Stumm and Walther [10], the approach combining periodic disk and binomial checkpointing described in Pringle et al. [21] and in the supporting information for Goldberg et al. [22], and also provides two-level mixed memory-disk schedules by interfacing with the H-Revolve library [13].
## 4 Mixing storage of forward restart and non-linear dependency data
Fully optimal schedules, where mixed storage of forward restart and non-linear dependency data are considered, require detailed knowledge of different costs. Here, instead, assumptions as used in the revolve algorithm are considered, with the additional assumption that forward restart and single-step non-linear dependency data sizes are the same. This demonstrates the potential of using more detailed knowledge of the computational graph for improved checkpointing performance.
\begin{table}
\begin{tabular}{c|c|c|c} index & action(parameters) & forward & adjoint \\ & state & state \\ \hline - & Initialize(4) & - & - \\
0 & Configure(True, False) & - & - \\
1 & Forward(0, 1) & \(0\to 1\) & - \\
2 & Write(0, disk) & - & - \\
3 & Clear(True, True) & - & - \\
4 & Configure(True, False) & - & - \\
5 & Forward(1, 3) & \(1\to 3\) & - \\
6 & Write(1, disk) & - & - \\
7 & Clear(True, True) & - & - \\
8 & Configure(False, True) & - & - \\
9 & Forward(3, 4) & \(3\to 4\) & - \\
10 & EndForward() & - & - \\
11 & Reverse(4, 3) & - & \(4\to 3\) \\
12 & Clear(True, True) & - & - \\
13 & Read(1, disk, False) & \(\to 1\) & - \\
14 & Clear(True, True) & - & - \\
15 & Configure(False, False) & - & - \\
16 & Forward(1, 2) & \(1\to 2\) & - \\
17 & Clear(True, True) & - & - \\
18 & Configure(False, True) & - & - \\
19 & Forward(2, 3) & \(2\to 3\) & - \\
20 & Reverse(3, 2) & - & \(3\to 2\) \\
21 & Clear(True, True) & - & - \\
22 & Read(1, disk, True) & \(\to 1\) & - \\
23 & Clear(True, True) & - & - \\
24 & Configure(False, True) & - & - \\
25 & Forward(1, 2) & \(1\to 2\) & - \\
26 & Reverse(2, 1) & - & \(2\to 1\) \\
27 & Clear(True, True) & - & - \\
28 & Read(0, disk, True) & \(\to 0\) & - \\
29 & Clear(True, True) & - & - \\
30 & Configure(False, True) & - & - \\
31 & Forward(0, 1) & \(0\to 1\) & - \\
32 & Reverse(1, 0) & - & \(1\to 0\) \\
33 & Clear(True, True) & - & - \\
34 & EndReverse(True) & - & - \\ \end{tabular}
\end{table}
Table 4: A checkpointing schedule corresponding to that in Table 1, with explicit control of intermediate storage, with deferred checkpointing, and with deletion of checkpoint data.
### Assumptions
It is assumed that the non-linear dependency data associated with any single step has the same size as the forward restart data associated with the start of any step. Other assumptions are as for the revolve algorithm. It is assumed that the number of forward steps in the original forward calculation is known at the start of the forward calculation. It is assumed that the forward restart data size is always the same, irrespective of the steps over which the forward is subsequently to be advanced (ignoring in particular additional costs associated with "long-range" dependencies). The performance of the schedule is defined in terms of the total number of forward steps taken; this is a measure of runtime performance, excluding the cost of adjoint advancement, provided the time taken to solve each forward step is the same and all runtime costs associated with storage are ignored. The forward and adjoint always advance over full steps, and (as in section 12.3 of Griewank and Walther [6]) we do not permit the calculation of forward data for earlier steps using forward data for later steps. It is not assumed that non-linear dependency data for step \(m\) suffices to restart the forward at the start of step \(m+1\).
Without increasing the total number of forward steps required, we can exclude the case where both forward restart data associated with the start of a step, and non-linear dependency data associated with the same step, are stored in checkpoints at the same time. This can be concluded by observing that we need only store forward restart data for a step \(m\) in a checkpoint, allowing a restart at the start of step \(m\), if we later wish to recompute forward data within that step. Since here the forward always advances over full steps (excluding the possibility of only some forward data within a step being recomputed), rerunning the forward over step \(m\) recomputes the non-linear dependency data for that step. Hence if the forward is later rerun over step \(m\) we need not simultaneously store the non-linear dependency data in a checkpoint, and if the forward is not later rerun over step \(m\) we need not store the forward restart data in a checkpoint. If a schedule results in both being checkpointed at the same time, we can therefore always modify the schedule so that only one is checkpointed at a time, without increasing the total number of forward steps.
As in Zhang and Constantinescu [16] we consider \(s\) checkpointing units, which may be used to store forward restart or non-linear dependency data. Storing forward restart data, or non-linear dependency data for one step, uses one unit of storage in each case; these checkpoints are referred to respectively as forward restart checkpoints and non-linear dependency data checkpoints. It is important to note, however, that while stored non-linear dependency data is sufficient to advance the adjoint over a step, it may not be sufficient to restart and advance the forward calculation.
Given the observation above, we limit consideration to the case where a checkpoint associated with a step is used either to restart the forward at the start of the step, or to store non-linear dependency data for the step, but not both. We assume one additional unit of storage is available to store non-linear dependency data used to advance the adjoint one step, and use this as the intermediate storage.
### Dynamic programming problem
Given \(s\) remaining checkpointing units, we consider the problem of advancing the adjoint over \(n\) steps, that is, advancing the adjoint to the start of step \(n_{0}\), given that the forward is initially at the start of step \(n_{0}\) and the adjoint is initially at the start of step \(n_{0}+n\).
A schedule is constructed by considering three cases.
1. If \(n\leq s+1\) then non-linear dependency data for the first \(n-1\) steps can be stored in checkpoints, and non-linear dependency data for the last step stored in the intermediate storage. \(n\) forward steps are required.
2. If \(s=1\) and \(n>2\) then a forward restart checkpoint is stored at the start of the first step, and the forward is repeatedly advanced to recompute non-linear dependency data. When the adjoint has two steps left to advance over, the restart checkpoint is deleted and replaced with storage of non-linear dependency data for the first step, saving one forward step. \(n\left(n+1\right)/2-1\) forward steps are required.
3. Otherwise data is stored in a checkpoint, and the forward advances, using one of the following approaches.
    (a) Store a forward restart checkpoint associated with the start of the first step and advance \(m\) steps, for some \(m\in\left\{1,\ldots,n-1\right\}\). There are \(s\) checkpointing units to advance the adjoint over the first \(m\) steps, and \(s-1\) checkpointing units to advance the adjoint over the remaining \(n-m\) steps.
    (b) Advance the forward one step, storing non-linear dependency data associated with the step in a checkpoint. There are \(s-1\) checkpointing units to advance the adjoint over the remaining \(n-1\) steps.
Note that it is assumed that a forward restart checkpoint is not initially stored at the start of the first step.
The minimal number of forward steps taken is defined by the dynamic programming problem\({}^{3}\) (defined for positive integer \(n\) and integer \(s\geq\min\left(1,n-1\right)\))
Footnote 3: This notation differs from Griewank and Walther [7] – here the _total_ number of forward steps is considered.
\[p\left(n,s\right)=\begin{cases}n&\text{if }n\leq s+1,\\ \frac{1}{2}n\left(n+1\right)-1&\text{if }s=1\text{ and }n>2,\\ \min\left\{\begin{array}{l}\min\limits_{m\in\left\{2,\ldots,n-1\right\}}\left[m+p\left(m,s\right)+p\left(n-m,s-1\right)\right],\\ 1+p\left(n-1,s-1\right)\end{array}\right\}&\text{otherwise.}\end{cases} \tag{1}\]
This has been simplified slightly in the range of \(m\) considered in the inner minimum, which follows as \(p\left(m,s\right)>0\) - that is, the forward must always advance at least two timesteps after a forward restart checkpoint is stored.
Cases 2 and 3a are similar to the cases that appear in the dynamic programming problem associated with the revolve algorithm, except for the reduction by one step in the former. This can be seen in the similarity of elements of the dynamic programming problem (1) to the dynamic programming problem appearing in Griewank and Walther [7] (their equation (2)). The performance of a schedule satisfying (1) is demonstrated relative to the revolve algorithm in Figure 4.
The solution to the dynamic programming problem (1) is equivalent to the solution of the CAMS-GEN double dynamic programming problem for \(l=1\) stage [16, Lemmas 2 and 3 and Theorem 2], provided appropriate terminating cases corresponding to cases 1 and 2 above are used. However these terminating cases require the ability to later replace a forward restart checkpoint with a non-linear dependency data checkpoint, for case 2 above. Furthermore, a single dynamic programming problem is defined here, in place of the double dynamic programming problem used to define the CAMS-GEN algorithm.
Before constructing a schedule a further problem is first considered, encountered after loading, but not deleting, a forward restart checkpoint. In this case, given \(s\) remaining checkpointing units, the total number of forward steps required to advance the adjoint over \(n\) steps is (defined for non-negative integer \(s\) and integer \(n>s+1\))
\[p_{0}\left(n,s\right)=\begin{cases}\frac{1}{2}n\left(n+1\right)-1&\text{if }s=0,\\ \min_{m\in\left\{1,\ldots,n-1\right\}}\left[m+p\left(m,s+1\right)+p\left(n-m,s\right)\right]&\text{otherwise.}\end{cases} \tag{2}\]
This is similar to the problem (1), but takes account of the additional restart checkpoint already available at the start of the first step. Note that \(n\leq s+1\) is excluded, as then case 1 above would apply. The possibility that a non-linear dependency data checkpoint associated with the first step is stored, before the restart checkpoint associated with the first step is deleted, is excluded, since it has already been asserted that both forward restart and non-linear dependency data for the first step cannot be stored in checkpoints at the same time.
Figure 4: Comparison of revolve versus a schedule satisfying (1). Left: Total number of forward steps for a revolve schedule (black line) and a schedule satisfying (1) (red line). Right: the ratio of the total number of forward steps for the revolve schedule to that for a schedule satisfying (1). 500 forward steps in the original forward calculation are considered.
### Constructing a schedule
For the original forward run a schedule is constructed by solving the dynamic programming problem (1), keeping a record of the cases which lead to optimal solutions. Ties are broken by prioritizing storage of non-linear dependency data over storage of forward restart data, and by maximizing forward advancement when forward restart checkpoints are stored. In tlm_adjoint the dynamic programming problem (1) is solved using tabulation, with the key section of code just-in-time compiled using Numba [27]. Memoization is used if Numba is not available. After the forward run the adjoint can advance one step, using non-linear dependency data stored in intermediate storage.
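For illustration, the dynamic programming problems (1) and (2) can be transcribed directly as memoized Python functions. This is a sketch only; it does not reproduce the tabulated, Numba-compiled implementation in tlm_adjoint, nor the bookkeeping needed to recover the schedule itself.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def p(n, s):
    """Minimal total number of forward steps needed to advance the adjoint
    over n steps with s checkpointing units, following (1)."""
    if n <= s + 1:
        # Case 1: non-linear dependency data for all steps fits in storage
        return n
    if s == 1 and n > 2:
        # Case 2: a single restart checkpoint, repeatedly rerunning the
        # forward; replacing the checkpoint near the end saves one step
        return n * (n + 1) // 2 - 1
    # Case 3a: store a forward restart checkpoint and advance m steps
    best = min(m + p(m, s) + p(n - m, s - 1) for m in range(2, n))
    # Case 3b: store non-linear dependency data for the first step
    return min(best, 1 + p(n - 1, s - 1))


@lru_cache(maxsize=None)
def p0(n, s):
    """As p(n, s), but with a forward restart checkpoint already stored,
    and not deleted, at the start of the first step, following (2)."""
    if s == 0:
        return n * (n + 1) // 2 - 1
    return min(m + p(m, s + 1) + p(n - m, s) for m in range(1, n))


print(p(4, 2))  # 6 forward steps in total, cf. Table 5
print(p(5, 2))  # 8 forward steps in total, cf. Figure 6
```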
When advancing the adjoint (after the first adjoint step, and assuming the original forward calculation consists of more than one step) the last checkpoint is loaded, and then one of the following cases applies, where here the checkpoint contains data for step \(n_{0}\), and the adjoint is at the start of step \(n_{1}\).
1. If the checkpoint contains non-linear dependency data, we have \(n_{0}=n_{1}-1\). The checkpoint can be deleted.
2. If the checkpoint contains forward restart data and, possibly after deleting the checkpoint, there is sufficient storage (in checkpointing units and the intermediate storage) to store all non-linear dependency data for steps \(n_{0}\) to \(n_{1}-1\) inclusive, the checkpoint can be deleted, and rerunning of the forward used to recompute non-linear dependency data for steps \(n_{0}\) to \(n_{1}-1\) inclusive. Non-linear dependency data for steps \(n_{0}\) to \(n_{1}-2\) inclusive is stored in checkpoints. Non-linear dependency data for step \(n_{1}-1\) is stored in the intermediate storage.
3. Otherwise, the checkpoint contains forward restart data, and the forward is advanced from the start of step \(n_{0}\) to the start of step \(n_{1}\) using a solution to (2). Ties are broken as for the schedule for the original forward run.
After each of these cases the adjoint can advance one step using data stored in the intermediate storage, and the process then repeats for the complete adjoint calculation.
A schedule for the case of 4 forward steps and with 2 checkpointing units is shown schematically in Figure 5, and the full schedule listed in Table 5. Note that non-linear dependency data is stored in checkpoints using actions 0-3 and 15-18, and loaded in actions 23 and 26. Note also that a forward restart checkpoint associated with the start of step 1 is deleted in action 13, and the freed checkpointing unit is used to store non-linear dependency data for step 1 using actions 15-18.
A schedule for the case of 5 forward steps and with 2 checkpointing units is shown schematically in Figure 6. This schedule takes 8 forward steps in total. A forward restart checkpoint is stored at the start of the forward calculation, and is later deleted and the storage reused to store non-linear dependency data. A revolve schedule for this case takes 11 forward steps in total.
Figure 5: Schematics of checkpointing schedules for the case of 4 forward steps and 2 checkpointing units. The schedules proceed from top to bottom. In each case the black arrows pointing to the right, at the top, indicate forward advancement. Below this a filled cross indicates a forward restart checkpoint, and a filled line with end bars a non-linear dependency data checkpoint, with checkpoints either stored as part of the indicated forward advancement, or retained from previous forward advancement. Dashed versions of these indicate a checkpoint which is loaded and then deleted. Red arrows pointing to the left indicate adjoint advancement, occurring after loading of checkpoints and forward advancement. Left: A revolve schedule, taking 8 forward steps in total. Right: A schedule satisfying (1), taking 6 forward steps in total.
\begin{table}
\begin{tabular}{c|c|c|c} index & action(parameters) & forward & adjoint \\ & & state & state \\ \hline - & Initialize(4) & - & - \\
0 & Configure(False, True) & - & - \\
1 & Forward(0, 1) & \(0\to 1\) & - \\
2 & Write(0, disk) & - & - \\
3 & Clear(True, True) & - & - \\
4 & Configure(True, False) & - & - \\
5 & Forward(1, 3) & \(1\to 3\) & - \\
6 & Write(1, disk) & - & - \\
7 & Clear(True, True) & - & - \\
8 & Configure(False, True) & - & - \\
9 & Forward(3, 4) & \(3\to 4\) & - \\
10 & EndForward() & - & - \\
11 & Reverse(4, 3) & - & \(4\to 3\) \\
12 & Clear(True, True) & - & - \\
13 & Read(1, disk, True) & \(\to 1\) & - \\
14 & Clear(True, True) & - & - \\
15 & Configure(False, True) & - & - \\
16 & Forward(1, 2) & \(1\to 2\) & - \\
17 & Write(1, disk) & - & - \\
18 & Clear(True, True) & - & - \\
19 & Configure(False, True) & - & - \\
20 & Forward(2, 3) & \(2\to 3\) & - \\
21 & Reverse(3, 2) & - & \(3\to 2\) \\
22 & Clear(True, True) & - & - \\
23 & Read(1, disk, True) & \(\to\star\) & - \\
24 & Reverse(2, 1) & - & \(2\to 1\) \\
25 & Clear(True, True) & - & - \\
26 & Read(0, disk, True) & \(\to\star\) & - \\
27 & Reverse(1, 0) & - & \(1\to 0\) \\
28 & Clear(True, True) & - & - \\
29 & EndReverse(True) & - & - \\ \end{tabular}
\end{table}
Table 5: A checkpointing schedule which yields a solution to the dynamic programming problem (1). The forward consists of 4 steps, and there are 2 checkpointing units. A change in forward state \(\to\ast\) indicates that, in general, the data loaded from a checkpoint is insufficient to restart the forward. The forward advances 6 steps in total.
While it may break the data size assumptions and exceed storage limits, the checkpointing schedule described in this section permits a complete adjoint calculation even with an arbitrary computational graph.
## 5 Conclusions
A high-level algorithmic differentiation approach, particularly when combined with automated code generation, can significantly simplify the development of an adjoint model. However it also provides a high-level, and simplified, view of the model itself. Instead of considering a very large number of relatively elementary operations, one can consider a relatively small number of much more complicated operations, with much of the complexity of the latter handled by the code generator.
The discussion of this article focuses on how the simplified structure - appearing in the form of a computational graph with fewer nodes and edges - can be used when applying checkpointing strategies. In particular it is possible to distinguish between the storage required to restart and advance the forward, and storage required to advance the adjoint. This allows the use of checkpointing strategies which mix storage of forward restart and non-linear dependency data in checkpoints.
An example of such a strategy was demonstrated, where it was assumed that the sizes of forward restart data and single-step non-linear dependency data were the same. In terms of the total number of forward steps the resulting schedule outperforms the revolve algorithm, and is equivalent to the CAMS-GEN algorithm for \(l=1\) stage provided appropriate terminating cases are used in defining the dynamic programming problem. If the size of the single-step non-linear dependency data differs from the size of forward restart data, but the sizes of each are fixed for each step and their relative size is known, then a generalization could be considered. More generally it is natural that one could apply a dynamic approach, using knowledge of the computational graph of the forward, as the forward calculation proceeds, to determine the checkpointing schedule structure. At the end of the original forward calculation further optimization, based on the now complete knowledge of the computational graph, could be applied. However further optimization in terms of the total number of forward steps could easily be offset in more practical cases by, for example, storage performance limits, particularly in large parallel calculations.
Figure 6: Schematic of a checkpointing schedule satisfying (1) for the case of 5 forward steps and 2 checkpointing units. For interpretation see Figure 5. The schedule takes 8 forward steps in total.
If an equation represents the solution of a non-linear problem, for example the solution of a finite element discretized non-linear partial differential equation, and if linear systems involving the adjoint of the equation Jacobian matrix can be solved efficiently, then an adjoint calculation can be particularly efficient. For example if Newton's method is applied to solve an equation, then the forward performs multiple linear solves involving the Jacobian matrix, while the adjoint performs a single linear solve involving the adjoint of the Jacobian matrix. This approach leads to a case of a particularly efficient adjoint calculation in Farrell et al. [1]. For such models the cost of a forward step may be much larger than the cost of an adjoint step, and it may be particularly important to avoid additional forward steps when performing an adjoint calculation. Allowing non-linear dependency data to be stored in checkpoints is one way to avoid expensive additional non-linear forward solves when performing the adjoint calculation.
An optimal schedule is defined only for a given performance model. In practice the relative costs of different elements of the calculation will depend on details of the implementation. Runtime performance, and storage performance and limits, will also depend on the details of the system on which the calculation is performed. In the context of automated code generation these details appear below the level of the domain specific language meaning that, for a separation between application development and implementation optimization to be maintained, the development of higher performance checkpointing approaches itself requires automation.
_Data availability_
tlm_adjoint is available at [https://github.com/jrmaddison/tlm_adjoint](https://github.com/jrmaddison/tlm_adjoint). The version as described in this article is available at [28].
_Acknowledgements_
This work was supported by the Natural Environment Research Council [NE/T001607/1].
This research was funded in whole, or in part, by the Natural Environment Research Council [NE/T001607/1]. For the purpose of open access, the author has applied a creative commons attribution (CC BY) licence to any author accepted manuscript version arising.
JRM acknowledges useful communications with, and code contributions by, David A. Ham.
|
2304.06572 | Star-formation-rate estimates from water emission | (Abridged) The star-formation rate (SFR) quantitatively describes the
star-formation process in galaxies. Current ways to calibrate this rate do not
usually employ observational methods accounting for the low-mass end of stellar
populations as their signatures are too weak. Accessing the bulk of
protostellar activity within galactic star-forming regions can be achieved by
tracing signposts of ongoing star formation. One such signpost is molecular
outflows, which are bright in molecular emission. We propose to utilize the
protostellar outflow emission as a tracer of the SFR. In this work, we
introduce a novel version of the galaxy-in-a-box model, which can be used to
relate molecular emission from star formation in galaxies with the SFR. We
measured the predicted para-H2O emission at 988 GHz and corresponding SFRs for
galaxies with LFIR = $10^8$ - $10^{11}$ L$_\odot$ in a distance-independent
manner, and compared them with expectations from observations. We evaluated the
derived results by varying the star formation efficiency, the free-fall time
scaling factor, and the initial mass function. For the chosen H2O transition,
relying on the current Galactic observations and star formation properties, we
are underestimating the total galactic emission, while overestimating the SFRs,
particularly for more starburst-like configurations. The current version of the
galaxy-in-a-box model accounts for a limited number of processes and
configurations, that is, it focuses on ongoing star formation in massive young
clusters in a spiral galaxy. Therefore, the inferred results, which
underestimate the emission and overestimate the SFR, are not surprising: known
sources of emission are not included in the model. To improve the results, the
next version of the model needs to include a more detailed treatment of the
entire galactic ecosystem and other processes that would contribute to the
emission. | K. M. Dutkowska, L. E. Kristensen | 2023-04-13T14:31:20Z | http://arxiv.org/abs/2304.06572v1 | # Star-formation-rate estimates from water emission
###### Abstract
Context:The star-formation rate (SFR) quantitatively describes the star-formation process in galaxies throughout cosmic history. Current ways to calibrate this rate do not usually employ observational methods accounting for the low-mass end of stellar populations as their signatures are too weak.
Aims:Accessing the bulk of protostellar activity within galactic star-forming regions can be achieved by tracing signposts of ongoing star formation. One such signpost is molecular outflows, which are particularly strong at the earliest stages of star formation. These outflows are bright in molecular emission, which is readily observable. We propose to utilize the protostellar outflow emission as a tracer of the SFR.
Methods:In this work, we introduce a novel version of the galaxy-in-a-box model, which can be used to relate molecular emission from star formation in galaxies with the SFR. We measured the predicted para-water emission at 988 GHz (which is particularly bright in outflows) and corresponding SFRs for galaxies with \(L_{\rm FIR}=10^{8}-10^{11}\rm L_{\odot}\) in a distance-independent manner, and compared them with expectations from observations.
Results:We evaluated the derived results by varying star-forming parameters, namely the star formation efficiency, the free-fall time scaling factor, and the initial mass function. We observe that for the chosen water transition, relying on the current Galactic observations and star formation properties, we are underestimating the total galactic emission, while overestimating the SFRs, particularly for more starburst-like configurations.
Conclusions:The current version of the galaxy-in-a-box model only accounts for a limited number of processes and configurations, that is, it focuses on ongoing star formation in massive young clusters in a spiral galaxy. Therefore, the inferred results, which underestimate the emission and overestimate the SFR, are not surprising: known sources of emission are not included in the model. To improve the results, the next version of the model needs to include a more detailed treatment of the entire galactic ecosystem and other processes that would contribute to the emission. Thus, the galaxy-in-a-box model is a promising step toward unveiling the star-forming properties of galaxies across cosmic time.
## 1 Introduction
Star formation lies at the very center of the baryon cycle and plays a pivotal role in shaping galactic ecosystems. There are different measures of this process, all of which help to understand and characterize its behavior through cosmic history. One example is the star-formation rate (SFR), as it provides a quantitative description of the star-forming properties of a given object by relating the total mass of stars formed in a give time unit, that is, \(M_{*}/\Delta t\). The SFR is used to establish the cosmic star formation history (e.g., Lilly et al., 2013; Madau and Dickinson, 2014), which in turn is used to understand and quantify the evolution of galaxies.
The key epoch of cosmic star formation history, which is when star formation peaked (known as cosmic noon), marks a critical stage during the evolution of today's galaxy population (e.g., Shapley, 2011; Madau and Dickinson, 2014; Forster Schreiber and Wuyts, 2020). Cosmic-noon galaxies, lying at redshifts of 2-3, exhibit extremely different SFRs from those observed in the local Universe, reaching \(>1000\rm\,M_{\odot}\,yr^{-1}\), while the Milky Way is forming stars at a rate of \(\sim 1\rm\,M_{\odot}\,yr^{-1}\)(e.g., Kennicutt and Evans, 2012).
There are various ways of deriving the SFRs in galaxies from nebular line, UV, infrared, radio, and X-ray emission (Madau and Dickinson, 2014). These methods all assume that there is a scaling between the luminosity in a given band and the SFR. However, the observed emission is usually dominated by high-mass stars, which easily outshine low-mass stars due to their energetic output, and so an initial mass function is applied to correct for low-mass stars, which is where most of the mass resides. In the local Universe, the SFR is readily traced and calibrated with H\(\alpha\), H\(\beta\), [O ii], and [O iii] emission (e.g., Kennicutt, 1998; Tresse et al., 2002; Kewley et al., 2004; Salim et al., 2007; Villa-Velez et al., 2021). However, in the past 20 years, advances in astrochemistry have provided additional ways to trace star formation, even in its most embedded stages, and allow us to trace the episodes of current star formation in galaxies (e.g., Herbst and van Dishoeck, 2009; Jorgensen et al., 2020).
Molecular emission from protostars is not yet commonly used as a SFR tracer. Nevertheless, this emission has the potential to trace even low-mass populations directly. At the earliest stages, the forming star itself is deeply embedded in gas and dust and is thus completely obscured. Therefore, the key is to trace signposts of these early stages that are not obscured. One of these signposts is outflows, which are launched from protostars in their main accretion phase when the interaction between the infalling envelope, winds, and jets launched from the protostar are particularly strong (Bally, 2016). These outflows are launched from close to the protostar, but quickly punch their way through
to the surrounding molecular cloud, where they are not obscured (Bachiller et al., 1990). In our Galaxy, one of the best tracers of this protostellar component is water (van Dishoeck et al., 2021), which is predominantly locked up as ice on dust grains, but is released from the grain mantles into the gas phase, causing a jump in the abundance of many orders of magnitude. At the same time, the physical conditions are conducive to water being readily excited into rotational states (e.g., Suutarinen et al., 2014).
Water emission is also observed toward high-redshift galaxies (e.g., Yang et al., 2013, 2016; Jarugula et al., 2019), where it too has been calibrated to serve as an SFR tracer (Jarugula et al., 2019). However, at high redshift, water is thought to trace dusty molecular clouds illuminated by either massive stars or a central galactic nucleus, and therefore the excitation is assumed to be via far-infrared (FIR) pumping (e.g., Gonzalez-Alfonso et al., 2008, 2014). However, toward the Galactic sources, which were extensively observed with the _Herschel_ Space Observatory (e.g., the Water In Star-forming regions with Herschel survey (WISH); van Dishoeck et al., 2011, 2021), water emission is almost uniquely associated with outflows, where its excitation is collisionally dominated, and other processes, such as FIR pumping, have a negligible contribution to the excitation (Mottram et al., 2014; Goicoechea et al., 2015).
With the goal of tracing active star formation in galaxies with molecular emission from protostars, Dutkowska and Kristensen (2022) created a galactic model, the so-called galaxy-in-a-box model, simulating emission from star-forming regions. Using up-to-date knowledge of Galactic star formation and state-of-the-art astrochemical observations of Galactic protostars, the galaxy-in-a-box model simulates emission from young clusters in a chosen galaxy, and at the same time provides insight into the statistics of the star formation process. The default molecular emission is that from water at 988 GHz (\(J_{K_{\mathrm{a}},K_{\mathrm{c}}}=2_{02}-1_{11}\)), which is readily observed even at high redshifts, where its emission is thought to arise in FIR-pumping-dominated regions, as outlined above.
In this work, we present an extension to the galaxy-in-a-box model, which allows us to derive SFRs from simulated galaxies and their individual star-forming clusters, and to put constraints on local and global SFRs. We focus on water emission at 988 GHz, and simulate emission for galaxies with \(L_{\mathrm{FIR}}=10^{8}-10^{11}\mathrm{L_{\odot}}\) for varying star-formation parameters.
This paper is organized as follows. Section 2 describes all of the changes introduced to the galaxy-in-a-box model. Subsequently, in Section 3 we present the results of this study and test them against observations and the literature; we then discuss these comparisons in Section 4. Finally, we present our conclusions in Section 5.
## 2 Methods
In this study, we explore the relation between the SFR, water luminosity (\(L_{\mathrm{H,0}}\)), and far-infrared luminosity (\(L_{\mathrm{FIR}}\)) using the galaxy-in-a-box model (Dutkowska and Kristensen, 2022, for an overview of the model see Appendix A). This is a novel, state-of-the-art astrophysical modeling tool that simulates emission from young clusters in a galaxy and provides detailed insights into the constituents of the star-formation process and derived parameters. The model relies on relatively few input parameters, giving the user great flexibility to define global and local galactic parameters.
For deriving the SFR and relating \(L_{\mathrm{FIR}}\) to the virial mass of galaxies, we implemented a number of upgrades to the model, which we describe in Sect. 2.1. In Sect. 2.2 we describe the choice of parameters for the simulated galaxies.
### Changes to the galaxy-in-a-box model
For the purposes of this study, we introduced the SFR as an input and output parameter in the galaxy-in-a-box model. We only used the output SFRs. However, in the following, we describe the full extent of the new SFR feature. The SFR tells us how much material is turned into stars per unit of time. With that in mind, we defined the SFR for a cloud in a galaxy as
\[\mathrm{SFR_{cloud}}=N_{*}\left(\frac{\langle M_{*}\rangle}{\mathrm{M_{\odot}}}\right)\left(\frac{t_{\mathrm{cloud}}}{\mathrm{yr}}\right)^{-1}=N_{*}\left(\frac{\langle M_{*}\rangle}{\mathrm{M_{\odot}}}\right)\left(\frac{t_{\mathrm{ff}}^{\mathrm{sc}}\,t_{\mathrm{ff}}}{\mathrm{yr}}\right)^{-1}, \tag{1}\]
where \(N_{*}\) is the number of formed protostars, \(\langle M_{*}\rangle\) is the average protostellar mass, \(t_{\mathrm{cloud}}\) is the age of the cloud, \(t_{\mathrm{ff}}^{\mathrm{sc}}\) is the unitless free-fall scaling factor (Dutkowska and Kristensen, 2022), and \(t_{\mathrm{ff}}\) is the free-fall time of the cloud. In the case of the galaxy-in-a-box model, the age is randomized, that is, it randomly scales the ages such that they range from newly formed to completely collapsed. The global SFR of the entire galaxy is therefore the sum of the individual rates for each cloud.
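As a minimal sketch (function names and the cloud representation here are illustrative, not those of the galaxy-in-a-box code), Eq. (1) and the resulting global rate can be written as:

```python
def sfr_cloud(n_stars, mean_stellar_mass, t_ff, t_ff_scaling):
    """SFR of a single cloud, Eq. (1): the total stellar mass formed divided
    by the cloud age t_cloud = t_ff_scaling * t_ff.

    Masses are in M_sun and times in yr, so the rate is in M_sun/yr.
    """
    return n_stars * mean_stellar_mass / (t_ff_scaling * t_ff)


def sfr_galaxy(clouds):
    """Global SFR of the galaxy: the sum of the individual cloud rates.

    `clouds` is an iterable of (n_stars, mean_stellar_mass, t_ff,
    t_ff_scaling) tuples, a simplified stand-in for the model's clouds.
    """
    return sum(sfr_cloud(*cloud) for cloud in clouds)


# Two illustrative clouds forming 500 and 2000 stars of mean mass 0.5 M_sun
print(sfr_galaxy([(500, 0.5, 1.0e6, 1), (2000, 0.5, 2.0e6, 5)]))
```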
In the model, we assume that each cloud goes on to form one cluster; in nature, clouds may go on to form several generations of clusters, but for the purposes of this study, where we consider global star formation, this is not relevant. With this implementation of the SFR, we introduce the possibility to also constrain the SFR at the cloud or cluster level. The cluster module can now be run with a fixed SFR, where the age of the cluster is adjusted through the free-fall time scaling factor, which can be easily derived from Eq. (1):
\[\tau_{\mathrm{ff}}^{\mathrm{sc}}=N_{*}\left(\frac{\langle M_{*}\rangle}{\mathrm{M_{\odot}}}\right)\left(\frac{t_{\mathrm{ff,random}}}{\mathrm{Myr}}\right)^{-1}\left(\frac{\mathrm{SFR_{cloud}}}{\mathrm{M_{\odot}}\,\mathrm{Myr}^{-1}}\right)^{-1}. \tag{2}\]
In this equation, \(t_{\mathrm{ff}}\) is already randomized (\(t_{\mathrm{ff,random}}\)) to avoid poor SFR adjustment due to age assignment that takes place later in the model. However, Eq. (2) is not used in this study.
On a global scale, that is, when introducing constraints on the total galactic SFR, the new version of the galaxy-in-a-box model monitors the total SFR of the given galaxy and computations stop when the specified SFR is reached. The allowed deviation from the specified SFR is \(\pm 10\%\). There may be situations where the galaxy-in-a-box model will not converge: these situations are unphysical, and an example would be a very low-mass galaxy with a very high SFR. There needs to be enough gas that can be turned into stars at the desired rate.
One of the changes to the galaxy-in-a-box model that was essential for this study was to set the number of clusters as limited by the total molecular gas reservoir, rather than setting it as a fixed number in the input file. We infer the number of molecular clouds from which star forming clusters form by putting an upper limit on the total mass of the molecular clouds, which are randomly generated using the molecular cloud mass distribution. When the limit is reached, the clouds are no longer passed to the cluster part of the calculations (see Fig. 1 of Dutkowska and Kristensen, 2022). This way we ensure that the mass of clouds does not exceed the available molecular reservoir.
Lastly, the mass properties of the galaxy can now be set by defining the \(L_{\mathrm{FIR}}\) of the galaxy. Following Scoville and Good
(1989), the model derives the mass of the molecular reservoir through the observed \(M_{\rm vir}-L_{\rm IR}\) relation (see Fig. 1). The virial mass of the galaxy can be expressed as:
\[\frac{M_{\rm vir}}{\rm M_{\odot}}=10^{0.5\pm 0.6}\left(\frac{L_{\rm IR}}{\rm L_{ \odot}}\right)^{0.81\pm 0.08}, \tag{3}\]
where \(L_{\rm IR}\) is the total far-infrared luminosity of the cloud. With Eq. (3), we can simulate \(L_{\rm H_{2}O}\) for galaxies with different \(L_{\rm IR}\), including typical galaxy types observed with H\({}_{2}\)O emission, that is, subluminous infrared galaxies (subLIRGs; \(L_{\rm IR}<10^{11}\) L\({}_{\odot}\)), LIRGs (\(10^{11}\) L\({}_{\odot}\leq L_{\rm IR}<10^{12}\) L\({}_{\odot}\)), ultraluminous infrared galaxies (ULIRGs; \(10^{12}\) L\({}_{\odot}\leq L_{\rm IR}<10^{13}\) L\({}_{\odot}\)), and hyperluminous infrared galaxies (HyLIRGs; \(L_{\rm IR}\geq 10^{13}\) L\({}_{\odot}\)). In this study, we are interested in relative values for derived luminosities and SFRs, and therefore we assume that \(L_{\rm FIR}\) is a proxy for \(L_{\rm IR}\), and use these luminosities interchangeably.
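As an illustration of how the molecular reservoir limits the number of clouds, the sketch below converts \(L_{\rm FIR}\) to a virial mass through Eq. (3) (central coefficient values only) and draws cloud masses until the reservoir is exhausted. The power-law cloud mass function, its limits, and the slope used here are assumptions made for this example only and are not the galaxy-in-a-box defaults.

```python
import numpy as np


def virial_mass(l_fir):
    """Molecular reservoir mass (M_sun) from L_FIR (L_sun), following
    Eq. (3) with the central values of the fitted coefficients."""
    return 10**0.5 * l_fir**0.81


def draw_clouds(l_fir, m_min=1.0e4, m_max=1.0e6, alpha=1.8, seed=0):
    """Draw cloud masses from an assumed power law dN/dM ~ M**(-alpha)
    until the total would exceed the molecular reservoir."""
    rng = np.random.default_rng(seed)
    reservoir = virial_mass(l_fir)
    clouds, total = [], 0.0
    while True:
        # Inverse-transform sampling of the truncated power law
        u = rng.random()
        m = (m_min**(1 - alpha)
             + u * (m_max**(1 - alpha) - m_min**(1 - alpha)))**(1.0 / (1 - alpha))
        if total + m > reservoir:
            break
        clouds.append(m)
        total += m
    return clouds


print(len(draw_clouds(1.0e9)), "clouds drawn for L_FIR = 1e9 L_sun")
```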
### Considered parameters
The goal of this study is to explore the SFRs derived with the galaxy-in-a-box model and how they relate to derived luminosities. To achieve this goal, we decided to use the template galaxy from Dutkowska and Kristensen (2022) with emission from the para-H\({}_{2}\)O \(2_{02}-1_{11}\) line at 987.927 GHz, and tweak the star formation efficiency, the free-fall-time scaling factor, and the initial mass function. The exact ranges of the probed parameters are described in Table 1.
For the galactic masses, or in this case luminosities, we decided to probe galaxies with \(L_{\rm FIR}=10^{8}-10^{11}\) L\({}_{\odot}\), where for the range \(10^{8}-10^{10}\) L\({}_{\odot}\) we continued with an increment corresponding to the given order of magnitude (i.e., \(10^{8},2\times 10^{8},3\times 10^{8}\), etc.), and we stopped at \(10^{11}\) L\({}_{\odot}\). We made this choice because we wanted to probe the chosen regime in a relatively uniform way. Moreover, the lower limit was dictated by low galactic mass (\(10^{8}\) L\({}_{\odot}\) corresponds to \(\sim 10^{7}\) M\({}_{\odot}\)), while the upper one was dictated by the limitations of the computational power. As shown below, the inferred SFRs can readily be extrapolated to even higher luminosities.
In the model, we use the relation between mass and H\({}_{2}\)O line luminosity obtained only from Galactic sources to estimate the amount of emission generated by protostars. As a sanity check, we can include the high-\(z\) observations in this correlation, as shown in Fig. 2. Including the high-\(z\) measurements shifts the correlation slightly, such that low-mass protostars are assigned less emission, and vice versa for high-mass protostars. Therefore, if these high-\(z\) sources are included, this has implications for the assumed IMF. In order to obtain luminosity distances for high-\(z\) objects, we used a Planck 2018 flat \(\Lambda\)CDM cosmology with \(H_{0}=67.7\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{M}=0.310\), as implemented in the Astropy package (The Astropy Collaboration et al. 2018).
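For reference, the luminosity distances of the high-\(z\) sources can be reproduced with a few lines of Astropy (a sketch using the cosmological parameters quoted above):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Planck 2018 flat LambdaCDM cosmology as specified above
cosmo = FlatLambdaCDM(H0=67.7 * u.km / u.s / u.Mpc, Om0=0.310)

# e.g. the most distant source in the extragalactic sample, at z = 6.337
print(cosmo.luminosity_distance(6.337).to(u.Mpc))  # ~6.3e4 Mpc
```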
## 3 Results
By extracting SFRs together with \(L_{\rm H_{2}O}\), while at the same time defining galaxies according to their luminosity rather than their mass directly, we are able to confront expectations based on the literature about the star formation process as seen in the Milky Way while simultaneously testing the galaxy-in-a-box model. Therefore, in this proof-of-concept study, we ran a number of simulations spanning a range of parameters representing different galactic and star formation properties (see Table 1).
As mentioned in Sect. 2.1, we used two different mass-line luminosity correlations: in the first, we only use the Galactic data points, and in the second we include the high-\(z\) data points. We excluded certain parameters from the high-\(z\) test, because they were either computationally heavy or unnecessary for testing the impact of the high-\(z\) extrapolation (for further discussion, see Sect. 4). For galaxies with \(L_{\rm FIR}=10^{8}-9\times 10^{8}\) L\({}_{\odot}\), we ran 40 simulations for each setup, while for other luminosity ranges we ran 20 simulations per setup. The increased number of simulations for this specific galactic type was dictated by higher SFR variations, as the molecular reservoir is relatively low, which is reflected in larger variations in the number of formed stars. We also excluded calculations for galaxies with \(L_{\rm FIR}=10^{10}-9\times 10^{10}\) L\({}_{\odot}\), which would have \(\varepsilon_{\rm SF}=30\%\), because they were the most computationally heavy, and including them would not affect any conclusions of this study. In total, we ran 15200 simulations, including 12240 main runs and 2960 runs for the high-\(z\) test.
\begin{table}
\begin{tabular}{c c|c c c c} \hline \hline & & \multicolumn{4}{c}{Galactic type (log(\(L_{\rm FIR}/L_{\odot}\)))} \\ \cline{3-6} & & \(8-8.9\) & \(9-9.9\) & \(10-10.9\) & \(11\) \\ \hline \multirow{3}{*}{\(\varepsilon_{\rm SF}\)} & 1\% & x & x & x & x \\ & 10\% & x & x & x & x \\ & 30\% & x & x & & x \\ \hline \multirow{2}{*}{\(\tau_{\rm ff}^{\rm sc}\)} & 1 & x & x & x & x \\ & 5 & x & x & x & x \\ \hline \multirow{3}{*}{IMF} & \(s\) & x & x & x & x \\ & \(t\)-\(h\) & x & x & x & x \\ & \(b\)-\(h\) & x & x & x & x \\ \hline \end{tabular}
\end{table}
Table 1: Parameters considered in this study
Figure 1: Correlation between log \(M_{\rm vir}\) and log \(L_{\rm FIR}\) following Scoville and Good (1989). The solid straight line represents the best-fit power law to the data points, the darker shaded region corresponds to the 95% confidence region of the correlation, and the lighter shaded region represents the region that contains 95% of the measurements.
Uncertainties on the simulation results are calculated as the standard deviation about the mean value, derived over all runs with the same set of parameters. The best fits were obtained using linear regression while accounting for the spread in the y-direction. Where the spread is not shown, it is smaller than the marker or line size. When converting fluxes to luminosities, we propagate the uncertainties accordingly.
We describe the literature sample chosen for this study in Sect. 3.1. We then present the results through derived \(L_{\rm FIR}-L_{\rm H_{2}O}\) (Sect. 3.3), \(L_{\rm FIR}-\rm SFR\) (Sect. 3.4), and \(L_{\rm H_{2}O}-\rm SFR\) (Sect. 3.5) relations, which we compare to those provided in the literature.
### Literature sample
As a default source of Galactic observations, we use data from the Water Emission Database (Dutkowska and Kristensen, 2022) for the para-H\({}_{2}\)O \(2_{02}-1_{11}\) line at 987.927 GHz, which consists of Galactic low-, intermediate-, and high-mass protostars observed as part of WISH (van Dishoeck et al., 2011) and the William Herschel Line Legacy Survey (WILL; Mottram et al., 2017).
The sample of extragalactic sources used in the high-\(z\) test was taken directly from van der Werf et al. (2011), Combes et al. (2012), Omont et al. (2013), Riechers et al. (2013), Yang et al. (2013), Yang et al. (2016), Apostolovski et al. (2019), and Jarugula et al. (2019). This sample includes nearby sub-LIRGs, LIRGs, and quasars, as well as high-\(z\) quasars, ULIRGs, and HyLIRGs, with the farthest one being the HyLIRG HFLS3, at \(z=6.337\) (\(D_{\rm L}=62834.75\) Mpc; for more details see Riechers et al., 2013). A detailed description of the sample and exact values used in this study can be found in Kristensen et al. (2022).
### Total stellar mass versus SFR
We evaluated the derived SFRs by exploring their relation with the total stellar mass of the corresponding galaxies. From Fig. 3, we see that we are overestimating the SFRs when compared with the relations derived by, for example, Salmon et al. (2015) for main sequence galaxies and Rinaldi et al. (2022) for starbursts.
With the chosen set of properties, galaxies with \(M_{*}<10^{6.5}\) M\({}_{\odot}\) seem to lie close to the main sequence estimates from Salmon et al. (2015), at least in their lower limits. However, going to cases where the combination of considered parameters resulted in an increase in SFRs, especially galaxies with \(M_{*}>10^{6.5}\) M\({}_{\odot}\), we start overestimating SFRs by at least one order of magnitude when compared to the literature (Rinaldi et al., 2022).
We also observe two distinct populations that appear to be dictated by the value of the free-fall-time scaling factor. For \(\tau_{\rm ff}^{\rm sc}=1\), we let the efficiency of the free-fall time depend only on the density of the progenitor molecular cloud, while by introducing \(\tau_{\rm ff}^{\rm sc}=5\) we prolong the time required to form most of the stellar population, resulting in a more diverse range of protostellar ages. From Fig. 3, we see how a decrease in the free-fall-time scaling factor influences the derived SFR. Considering the relatively low efficiency of the star formation process, the lower-SFR branch is likely to be more consistent with the nature of the star formation process. We discuss this topic further in Sect. 4.4.
### \(L_{\rm FIR}-L_{\rm H_{2}O}\) correlation
To compare the predicted fluxes with observations, we first converted them to luminosities using the following expression:
\[\frac{L_{\rm line}}{\rm L_{\odot}}=99.04\left(\frac{I}{\rm Jy\;km\;s^{-1}} \right)\left(\frac{\lambda_{0}}{\rm\mu m}\right)^{-1}\left(\frac{D_{\rm L}}{ \rm Mpc}\right)^{2}, \tag{4}\]
where \(I\) is the total intensity in Jy km s\({}^{-1}\), \(\lambda_{0}\) the wavelength in microns (303.4557 \(\mu\)m for the para-H\({}_{2}\)O \(2_{02}-1_{11}\) line), and \(D_{\rm L}\) the luminosity distance of the source in megaparsecs. By converting fluxes, we can quantitatively compare our results with observations, as they are no longer distance dependent.
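A direct transcription of Eq. (4), with the rest wavelength of the para-H\({}_{2}\)O \(2_{02}-1_{11}\) line as the default, might look as follows (a sketch, not part of the model code):

```python
def line_luminosity(intensity, d_l, wavelength=303.4557):
    """Line luminosity in L_sun from the velocity-integrated intensity
    (Jy km/s), the luminosity distance (Mpc), and the rest wavelength
    (micron), following Eq. (4)."""
    return 99.04 * intensity / wavelength * d_l**2


# e.g. 10 Jy km/s observed toward a source at 100 Mpc
print(line_luminosity(10.0, 100.0))
```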
Using linear regression, we derived best-fit lines to the following expression:
\[\log_{10}\left(L_{\rm H_{2}O}/\rm L_{\odot}\right)=a\times\log_{10}\left(L_{ \rm FIR}/\rm L_{\odot}\right)+b. \tag{5}\]
Table 2 provides all of the derived slopes and intercepts. In the following, we focus on the two setups exhibiting the highest and lowest water emission. These are the models with \(\varepsilon_{\rm SF}\)=30%, IMF = top-heavy, \(\tau_{\rm ff}^{\rm sc}\)=1, and \(\varepsilon_{\rm SF}\)=1%, IMF = bottom-heavy, \(\tau_{\rm ff}^{\rm sc}\)=5, respectively. For the least emitting case, we derive \(a=0.809\pm 0.003\) and \(b=-7.269\pm 0.029\), while for the most emitting case we derive \(a=0.809\pm 0.001\) and \(b=-5.135\pm 0.012\). In both cases, \(R^{2}=99.9\)%. For all of the simulations, the slope stays roughly constant with \(a\approx 0.81\), and therefore the span of luminosities is described by the intercept falling in the range of \(-7.269\) to \(-5.135\). We can derive the general relation for water-line luminosity depending on the intercept value:
\[L_{\rm H_{2}O}/\rm L_{\odot}=10^{b}\left(L_{\rm FIR}/\rm L_{\odot}\right)^{0.81}. \tag{6}\]
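For a given \(L_{\rm FIR}\), Eq. (6) with the two intercepts quoted above brackets the simulated water luminosities (a sketch):

```python
def water_luminosity(l_fir, intercept):
    """L_H2O (L_sun) from L_FIR (L_sun) via Eq. (6)."""
    return 10**intercept * l_fir**0.81


l_fir = 1.0e11
print(water_luminosity(l_fir, -7.269))  # least emitting setup
print(water_luminosity(l_fir, -5.135))  # most emitting setup
```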
From Fig. 4, we see that we deviate from extragalactic observations by between a factor of a few and about two orders of magnitude. We observe that the expectations built on the extragalactic sample taken from Jarugula et al. (2019), where \(L_{\rm H_{2}O}/L_{\rm FIR}=1.69^{+0.79}_{-0.54}\times 10^{-5}\) (we explore this more extensively in Sect. 4.1), are especially far from our expectations for the brightest high-\(z\) galaxies. We discuss this further in Sect. 3.3. Also, in Sect. 4.2, we explore the possible impact of the inclusion of high-\(z\) starbursts on the correlation between the envelope mass and intensity (\(M_{\rm em}-I\) relation), which is the basis of emission assignment in the galaxy-in-a-box model, and whether it could explain the observed differences.
Figure 2: Two types of correlations and observational samples used in this study. The blue solid line corresponds to the best fit to the Galactic sample, with the data points taken from the Water Emission Database (Dutkowska and Kristensen, 2022), while the red solid line represents the best fit that also includes the extragalactic sample consisting of nearby subLIRGs, LIRGs, and quasars, as well as high-\(z\) quasars, ULIRGs, and HyLIRGs (for details see Sect. 3.1). Markers correspond to the observations from each sample. Shading follows that from Fig. 1.
### \(L_{\rm FIR}\) - SFR correlation
To further evaluate derived SFRs, we explored their relation with corresponding far-infrared luminosities (Fig. 5). We clearly see that the derived SFRs create different populations depending on the star formation efficiency and the free-fall-time scaling factor. Again, we are clearly overestimating the SFRs. However, relations in the literature, for example those of Kennicutt & Evans (2012) and Casey et al. (2014), fall into our lower prediction regime, meaning that at least for the star forming galaxies with lower star formation activity (with respect to the standard setup in the galaxy-in-a-box model), we are roughly recovering the expected star formation process.
The span of the SFRs derived in this study depends strongly on the efficiency of the process. The discrepancy between the literature values and our simulations can be as high as two orders of magnitude. We focused on and derived relations analogous to Eq. (5) for the setups with the lowest and highest emission, as well as the standard model setup from the galaxy-in-a-box model. We provide all of the derived relations in Table 2. Here, we do not derive almost identical slopes, as we did for \(L_{\rm FIR}\) - \(L_{\rm H_{2}O}\). For the most extreme cases of the \(L_{\rm FIR}\)-SFR relation, we derive slopes of \(0.94\pm 0.04\) and \(0.90\pm 0.03\), which agree within the uncertainties, while the derived intercepts (here, the intercept refers to the term \(b\) in Eq. (5), which is further used as shown in Eq. (6)) are equal to \(-8.50\pm 0.35\) and \(-5.75\pm 0.33\), respectively. We further discuss the apparent excess in SFR in Sect. 5.
Figure 3: SFR as a function of stellar mass of each galaxy. Full color markers represent results where the free-fall-time scaling factor was set to 1, while markers in the same but lighter colors correspond to \(\tau_{\rm ff}^{\rm sc}\) of 5. Circles represent setups with the standard IMF (Chabrier 2003), while triangles pointing upwards and downwards represent setups with its top-heavy and bottom-heavy versions, respectively. Different colors of the markers refer to different star formation efficiencies, where green, orange, and red mean an \(\varepsilon_{\rm SF}\) of 1%, 10%, and 30%, respectively. Solid lines represent best-fit lines from Rinaldi et al. (2022) to their starburst (SB) population, while dashed lines represent best fits to the main sequence galaxies from Salmon et al. (2015).
### \(L_{\rm H_{2}O}-\) SFR correlation
The last explored dependence was that of \(L_{\rm H_{2}O}\) and the corresponding SFRs. We see from Fig. 6 that all of the derived SFRs fall into the same population, which is expected considering the fact that the greater the luminosity, the more actively star-forming and massive the corresponding galaxy. By fitting all of the derived points to Eq. (5), we get a slope of \(1.11\pm 0.01\) and an intercept of \(-0.083\pm 0.018\), indicating a near-proportionality between the SFR and \(L_{\rm H_{2}O}\).
However, Fig. 6 suggests that we are systematically overestimating SFRs by approximately four orders of magnitude with respect to the findings of Jarugula et al. (2019), where \(\rm SFR\left(M_{\odot}yr^{-1}\right)=7.35^{+5.74}_{-3.22}\times 10^{-6}L_{\rm H _{2}O}\left(L_{\odot}\right)\). Conversely, extrapolating their relation to Galactic star-forming regions would underestimate SFRs by orders of magnitude (Kristensen et al., 2022). We discuss this discrepancy in Sect. 4.5.
## 4 Discussion
In the following, we discuss derived SFRs and water luminosities. We also evaluate how the star-formation parameters considered here could affect the results and compare our results with the literature. Moreover, we discuss what other physical processes not considered in this study could impact the derived values and explore other possible influences.
### Insights from \(L_{\rm H_{2}O}/L_{\rm FIR}\) ratios
The ratio of \(L_{\rm H_{2}O}\) and corresponding \(L_{\rm FIR}\) could be used to understand the source of the observed water emission (this is shown in Fig. 7). This in turn can help us to understand whether or not water behaves differently in different galactic regions and galactic types. With this in mind, we calculated the ratios derived from the galaxy-in-a-box model and compared them with our Galactic and extragalactic samples.
The derived values \(10^{-8}<L_{\rm H_{2}O}/L_{\rm FIR}<10^{-6}\) fall below those from all objects considered in the extragalactic sample,
Figure 4: Simulated water-line luminosity as a function of \(L_{\rm FIR}\). The dashed black line represents the most emitting galaxy in our simulations (\(\varepsilon_{\rm SF}\)=30%, IMF = top-heavy and \(\tau_{\rm ff}^{\rm sc}\)=1), while the dashed gray line corresponds to the least emitting one (\(\varepsilon_{\rm SF}\)=1%, IMF = bottom-heavy and \(\tau_{\rm ff}^{\rm sc}\)=5). The gray-shaded area between these two lines refers to the probed parameter space, and all possible outcomes considered in this study would fall in that regime. Solid blue and red lines refer to the results derived for setups with the top-heavy IMF form for the Galactic and extragalactic \(M_{\rm env}-I\) relations, respectively. Dotted lines show the results for these two correlations when the standard IMF is applied. In both, i.e., the standard and the top-heavy cases, the free-fall-time scaling factor is set to 1. Circles refer to observational samples (for more details we refer the reader to Sect. 3.1), while the purple line represents the expected relation from Jarugula et al. (2019).
but coincide with the Galactic sample at its high-mass/high-luminosity end (see Fig. 4). We know from Galactic observations (e.g., van Dishoeck et al. 2021) that water emission from young stellar objects predominantly comes from the shocked material in outflows. Therefore, a natural assumption would be that the Galactic sample is consistent in terms of the calculated ratios. Instead, what we see is that low- to intermediate-mass protostars exhibit roughly the same ratios as the extragalactic sample, and we see a clear drop for the most luminous end of the Galactic objects.
Available water observations of Galactic high-mass young stellar objects are limited in both number and sensitivity. One of the most detailed studies was a survey towards the Cygnus X star-forming region (PI: Bontemps; San Jose-Garcia 2015). Cygnus-X is one of the nearest massive star-forming complexes (\(D\sim\) 1.3-1.4 kpc, e.g., Rygl et al. 2012). However, even these observations do not recover the total emission that would come from a high-mass-star-forming complex, because of the spatial resolution and sensitivity limitations of the HIFI instrument on the _Herschel_ Space Observatory. This latter survey, one of the most complete, only consists of single-pointing observations. Therefore, new instruments are needed to fully estimate the amount of H\({}_{2}\)O emission coming from a forming Galactic cluster.
To take another approach, we estimate the amount of H\({}_{2}\)O emission from the nearby W3 high-mass-star-forming region. Its distance is 2 kpc and its age is 2 Myr (Bik et al. 2012). We used a mass of 4\(\times 10^{5}\) M\({}_{\odot}\) for the entire cluster (Rivera-Ingraham et al. 2013), corresponding to a total luminosity of 2\(\times 10^{6}\) L\({}_{\odot}\) using Eq. 3. To estimate the missing emission from all protostars, we ran a model for just one cluster instead of an entire galaxy. The cluster model predicts a total line intensity of 120 K km s\({}^{-1}\), which may be compared to the observed value of 21.9 K km s\({}^{-1}\) for the high-mass protostar W3-IRS5 (van der Tak et al. 2013), which has a luminosity of \(10^{5}\) L\({}_{\odot}\), or 5% of that of the cluster. The simulated value is highly sensitive to the adopted age of the cluster; for example, 1 Myr would result in a predicted intensity of 250 K km s\({}^{-1}\). This implies that for an individual cluster, we need to know the age accurately to within 10%, which is not currently possible. It is therefore plausible that the amount of water emission we are missing corresponds to a factor of between 6 and 12. Without being able to map the entire cluster in water emission, we cannot know exactly how much.
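The factor of 6 to 12 quoted above follows from simple arithmetic on the cluster-model intensities and the single-pointing observation of W3-IRS5; a short sketch of that estimate is given below, using only the numbers stated in this paragraph.

```python
# Predicted total water-line intensity of the W3 cluster model (K km/s) for the
# two adopted cluster ages, and the observed single-pointing intensity of the
# high-mass protostar W3-IRS5 (van der Tak et al. 2013).
predicted = {"2 Myr": 120.0, "1 Myr": 250.0}
observed_irs5 = 21.9

for age, intensity in predicted.items():
    factor = intensity / observed_irs5
    print(f"age {age}: missing-emission factor ~ {factor:.1f}")
# Gives factors of roughly 5.5 and 11.4, i.e., between ~6 and ~12.
```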
### High-\(z\) test
Knowing that the relation between water emission and \(L_{\rm FIR}\) spans over many orders of magnitude starting from the low-mass
Figure 5: SFR as a function of \(L_{\rm FIR}\). Colors and markers as in Fig. 3. Dotted lines refer to the upper prediction band for the setup with the highest SFR and the lower prediction band for the setup with the lowest SFR. Shading of the best-fit lines corresponds to the 95% confidence region of each correlation. Solid lines represent the literature estimates.
Figure 6: SFR as a function of water line luminosity. Colors and markers are as in Fig. 3. The shaded region corresponds to the 95% confidence region of the correlation, while dotted lines indicate where 95% of the measurements should fall. The solid purple line represents the expected relation from Jarugula et al. (2019).
Figure 7: \(L_{\rm H_{2}O}/L_{\rm FIR}\) as a function of \(L_{\rm FIR}\). Blue and red points refer to Galactic and extragalactic observations, respectively. Yellow points refer to our simulations with star formation efficiencies of 10% and 30%.
protostars to high-\(z\) HyLIRGs, we probed the influence of the extragalactic observations on the \(M_{\rm env}-I\) relation, and explored how this extrapolated form of the formula impacts the derived intensities.
In Fig. 2, we see that by including the extragalactic observations, we effectively lower the contribution from the low-mass end of the correlation, while only the high-mass protostars are positively affected. On the other hand, the purely Galactic correlation lowers the emission from the high-mass protostars. Therefore, considering that we are underestimating water emission, we focused only on the standard and top-heavy IMF forms. We did this because the standard IMF is already dominated by the low-mass end of the distribution, and we also know from Dutkowska & Kristensen (2022) that the emission derived for the bottom-heavy IMF is practically indistinguishable from the standard one. At the same time, the top-heavy IMF would increase the emission even for the normal form of the correlation, and the inclusion of the extragalactic sources increases the slope by \(\sim 10\%\) (see Fig. 2).
The results of the test indicate that the inclusion of the extragalactic sources results in lowered emission on average, and that the difference with respect to the purely Galactic correlation starts to diminish for higher galactic masses and higher star-formation efficiencies. This effect is not surprising, as the star-formation process is dominated in both total mass and number by low-mass protostars, while in terms of total bolometric luminosity the high-mass stars completely dominate the picture (e.g., Kroupa, 2002). Therefore, the inclusion of the extragalactic sources, which lowers the emission from the low-mass protostars, naturally lowers the water emission derived from the simulated galaxies, as low-mass protostars are the main star-forming component if we consider Milky Way-like star formation. However, for the high-\(z\) starbursts with high star-formation efficiencies and seemingly top- or even extremely top-heavy IMFs, this extrapolation could make a difference when simulating star formation and its emission. Nevertheless, we do not investigate this further, as this is beyond the scope of this paper.
### SFR estimates
From the results derived in this study, we are consistently overestimating SFRs for the given galactic types. However, when considering the assumptions behind the model, in particular that the current version simulates ongoing star formation without correcting for star formation histories or pre-existing stellar populations, the overestimation is less striking.
The galaxy-in-a-box model was created as a tool for simulating emission from active and current star formation in galaxies. Therefore, even though the model accounts for dynamical differentiation of (proto)stellar ages, the model does not account for already existing, older stellar populations that normally would contribute to observations from which the rates are calculated. Moreover, as seen in Figs. 3 and 5, the results lie close to the literature estimates, if we assume low star formation activity. A calibration of the SFRs of galaxies depends on their current star formation activity. If the bulk of galaxies are observed during a period of low star formation, we would naturally fall on the lower SFR side. Also, there are many factors influencing star formation activity in galaxies that are not taken into account in the current version of the galaxy-in-a-box model.
Another important aspect is that when calibrating SFRs from \(L_{\rm FIR}\), one has to make assumptions about parameters such as the IMF and star formation history, which are sources of additional uncertainty in the final estimation of the SFR. Moreover, \(L_{\rm FIR}\) is likely to underestimate the SFR in young clusters (Gutermuth et al., 2011) by up to an order of magnitude, and these are the main objects of interest in this study. If this is the case, our SFR estimates are roughly consistent with expectations.
Lastly, the galaxy-in-a-box model accounts for all stellar products, from brown dwarfs to high-mass stars. Therefore, it is not subject to observational limitations, and the apparent overestimation could be an effect of accounting for all objects, including those that are normally unobservable, as illustrated in the W3 example above. The scenario we are considering more closely resembles the high-\(z\) situation, where galaxies are filled with active star-forming regions and are described as 'full of Orions' (Rybak et al., 2020). In this case, having relatively young star-forming regions, we trace only active and current star formation without accounting for a broader differentiation of ages and stellar populations.
### Impact of the star-formation parameters
In this study, we explored simulations for different galaxy types, and as such explored a broad parameter space (see Sect. 2.2 and Table 1). As in the first galaxy-in-a-box study (Dutkowska & Kristensen, 2022), we observe no strong effect of the IMF, even though we included nearby subLIRGs, LIRGs, and quasars, as well as high-\(z\) quasars, ULIRGs, and HyLIRGs in the correlation that is used to assign molecular emission to protostars. This is expected, as the extrapolation to the high-\(z\) regime changes the slope of the correlation by only \(\sim\)10%.
We observe a strong impact of the star-formation efficiency and the free-fall-time scaling factor, both for the derived emission and SFRs. This is of no surprise as both parameters impact the stellar population of each cluster. The free-fall-time scaling factor will effectively lower the ages of the clouds and thus increase the emission, while the star-formation efficiency regulates how much of the molecular reservoir will be turned into stars, hence increasing the number of stars.
One of the new input parameters in the galaxy-in-a-box model is the mass of the galaxy, as derived from Eq. (3). Clearly, the more massive the galaxy, the more emission we derive from the model. However, this parameter has its own uncertainty, which would be especially important when considering the predicted water emission. The relation between the mass and luminosity was also derived for young stellar objects by Pitts et al. (2022), where:
\[\log\left(M_{\rm env}/{\rm M}_{\odot}\right)=0.30^{+0.07}_{-0.06}+0.79^{+0.01}_ {-0.02}\log\left(L_{\rm bol}/{\rm L}_{\odot}\right). \tag{7}\]
Although this expression was inferred for individual protostellar envelopes, it clearly agrees with Eq. 3 within the uncertainty. Here, we make the assumption that \({\rm L}_{\rm bol}\) represents \({\rm L}_{\rm FIR}\), as young protostars are deeply embedded in gas and dust and \({\rm L}_{\rm bol}\) will be dominated by the contribution from \({\rm L}_{\rm FIR}\). Hence, if the relation between mass and luminosity is indeed this universal, under- or overestimating the luminosity will correspondingly under- or overestimate the available molecular reservoir.
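As an illustration of the sensitivity to these coefficients, the minimal sketch below evaluates Eq. (7) for an arbitrary bolometric luminosity and brackets the result by evaluating the relation at the extremes of the quoted error bars (a crude bracketing rather than a formal error propagation).

```python
import math

def envelope_mass(L_bol, a=0.30, b=0.79):
    """Envelope mass (M_sun) from Eq. (7): log(M_env) = a + b * log(L_bol)."""
    return 10.0**(a + b * math.log10(L_bol))

L_bol = 1e5  # illustrative bolometric luminosity (L_sun)
central = envelope_mass(L_bol)
low = envelope_mass(L_bol, a=0.30 - 0.06, b=0.79 - 0.02)
high = envelope_mass(L_bol, a=0.30 + 0.07, b=0.79 + 0.01)
print(f"M_env ~ {central:.0f} M_sun (range {low:.0f}-{high:.0f} M_sun)")
```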
### Comparison with observations
When comparing the derived values with observations, we clearly see that we are underestimating the water emission by at least one to two orders of magnitude (see Fig. 4) and overestimating the SFRs by a factor of a few up to two orders of magnitude (see Figs. 3 and 5). We discuss the possible explanations for the difference in SFR extensively in Sect. 4.3, and here
we focus solely on the difference between our estimate and that of Jarugula et al. (2019). The SFR calibration of Jarugula et al. (2019) utilizes the \(\rm L_{FIR}-SFR\) relation from Kennicutt & Evans (2012):
\[SFR\ (\rm M_{\odot}\,yr^{-1})=1.47\times 10^{-10}L_{\rm IR}\ (\rm L_{\odot}), \tag{8}\]
which, as mentioned in Sect. 4.3, is subject to various uncertainties. This is especially important when considering the IMF in the high-\(z\) ULIRGs and HyLIRGs, as found in many studies (e.g., Zhang et al. 2018), adding uncertainty to the calibration. Moreover, if we were to apply the calibration from Jarugula et al. (2019), we would heavily underestimate SFRs towards well-studied, resolved Galactic clouds, where the relation inferred for water emission and luminosity is \(\approx 3000\) times higher than that of Jarugula et al. (2019) (further discussion in Kristensen et al. 2022).
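For reference, the short sketch below applies the \(L_{\rm FIR}\)-based calibration of Eq. (8) and the water-based calibration of Jarugula et al. (2019) to an illustrative source; the luminosity chosen is arbitrary, and the coefficients are those quoted above. By construction, the two calibrations give similar values when the extragalactic \(L_{\rm H_{2}O}/L_{\rm FIR}\) ratio is assumed; the factor of \(\approx 3000\) quoted above for Galactic clouds is what breaks this agreement.

```python
def sfr_from_LFIR(L_FIR):
    """SFR (M_sun/yr) from Eq. (8), the Kennicutt & Evans (2012) calibration."""
    return 1.47e-10 * L_FIR

def sfr_from_LH2O(L_H2O):
    """SFR (M_sun/yr) from the water-based calibration of Jarugula et al. (2019)."""
    return 7.35e-6 * L_H2O

L_FIR = 1e11                 # illustrative far-infrared luminosity (L_sun)
L_H2O = 1.69e-5 * L_FIR      # water luminosity implied by the extragalactic ratio
print("SFR from L_FIR (Eq. 8):", sfr_from_LFIR(L_FIR), "M_sun/yr")
print("SFR from L_H2O        :", sfr_from_LH2O(L_H2O), "M_sun/yr")
```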
Focusing on the water emission, there are a few factors that could contribute to the observed difference, some of which we discussed in Sect. 4.1. Additionally, one of the reasons for not recovering the emission is that we do not convert 100% of the galactic mass to an emitting source. There are a number of parameters standing in the way, with the star formation efficiency being the most obvious one. Moreover, we currently consider emission only from Class 0 and Class I protostars. Therefore, when considering emitting components that constitute only a small percentage of a whole galaxy, we are naturally going to lose a certain amount of emission.
In galaxies there are more emitting components than simply protostars. These include photodissociation regions, galactic outflows, and supernovae. Even though their contribution is likely to be lower than that from star formation, their inclusion in calculations is essential in order to fully reproduce the emission, and as such, is a part of planned future improvements.
## 5 Conclusions
We extended the galaxy-in-a-box model to relate the predicted molecular emission from forming stars with SFRs. In this paper, we demonstrate the introduced extension and evaluate the derived results for galaxies with \(L_{\rm FIR}=10^{8}-10^{11}\rm L_{\odot}\) and various levels of star formation activity. We complemented the SFR study by extracting predicted emission for the para-\(\rm H_{2}O\ 2_{02}-1_{11}\) line at 987.927 GHz. Our main results are as follows:
* The star formation efficiency and the free-fall-time scaling factor have a strong impact on the SFR and emission, whereas the IMF has only a weak effect.
* For the most extreme star-forming cases, the galaxy-in-a-box model overestimates the SFRs by up to two orders of magnitude. However, this difference could be lowered depending on the extent to which the current calibrations using \(\rm L_{FIR}\) as a star formation tracer underestimate the actual SFR values.
* The model underestimates the water emission by up to two orders of magnitude, and especially for the high-\(z\) quasars, ULIRGs, and HyLIRGs.
* For the moment, the model does not account for additional sources of emission, including supernovae, photodissociation regions, and galactic outflows. Moreover, we need to revisit the derived water emission for Galactic high-mass-star-forming regions, as we might miss the bulk of emission.
Our estimates deviate from observations and the literature. However, the apparent differences are consistent with expectations, in the sense that known sources of emission are not yet included in the model, and therefore the galaxy-in-a-box model is a promising step toward shedding light on the star-forming properties of galaxies across cosmic time. In the near future, we plan to introduce a number of extensions that will account for other sources and processes that could contribute to the emission. The planned extensions include accounting for galactic outflows (both AGN- and starburst-driven), shocks from supernovae, and emission from photodissociation regions. Moreover, we are introducing \(\rm H_{2}\) and high-\(J\) CO emission, which is going to be especially important in the JWST era.
To properly account for water emission in our own Galaxy in the future, we will need a new far-infrared probe with the sensitivity of JWST. Such a probe is the planned PRIMA\({}^{1}\) mission. Only then will we be able to fully recover the emission from star-forming clusters in the Galaxy, and properly estimate the contribution from protostars in all stellar mass ranges.
Footnote 1: [https://prima.ipac.caltech.edu](https://prima.ipac.caltech.edu)
###### Acknowledgements.
The research of KMD and LEK is supported by a research grant (19127) from VILLUM FONDEN.
|
2306.08326 | Early Detection of Late Blight Tomato Disease using Histogram Oriented
Gradient based Support Vector Machine | The tomato is one of the most important fruits on earth. It plays an
important and useful role in the agricultural production of any country. This
research propose a novel smart technique for early detection of late blight
diseases in tomatoes. This work improve the dataset with an increase in images
from the field (the Plant Village dataset) and proposed a hybrid algorithm
composed of support vector machines (SVM) and histogram-oriented gradients
(HOG) for real-time detection of late blight tomato disease. To propose a
HOG-based SVM model for early detection of late blight tomato leaf disease. To
check the performance of the proposed model in terms of MSE, accuracy,
precision, and recall as compared to Decision Tree and KNN. The integration of
advanced technology in agriculture has the potential to revolutionize the
industry, making it more efficient, sustainable, and profitable. This research
work on the early detection of tomato diseases contributes to the growing
importance of smart farming, the need for climate-smart agriculture, the rising
need to more efficiently utilize natural resources, and the demand for higher
crop yields. The proposed hybrid algorithm of SVM and HOG has significant
potential for the early detection of late blight disease in tomato plants. The
performance of the proposed model against decision tree and KNN algorithms and
the results may assist in selecting the best algorithm for future applications.
The research work can help farmers make data-driven decisions to optimize crop
yield and quality while also reducing the environmental impact of farming
practices. | Yousef Alhwaiti, Muhammad Ishaq, Muhammad Hameed Siddiqi, Muhammad Waqas, Madallah Alruwaili, Saad Alanazi, Asfandyar Khan, Faheem Khan | 2023-06-14T07:58:14Z | http://arxiv.org/abs/2306.08326v3 | Early Detection of Late Blight Tomato Disease using Histogram Oriented Gradient based Support Vector Machine
###### Abstract
_The tomato is one of the most important fruits on earth. It is rich in nutrients, has an excellent taste, and offers other health benefits. It plays an important and useful role in the agricultural production of any country. An increase in tomato diseases causes an increase in tomato imports, which affects the economy of the country. This research improves the dataset with an increase in images from the field (added to the Plant Village dataset) and proposes a hybrid algorithm composed of support vector machines (SVM) and histogram-oriented gradients (HOG) for real-time detection of late blight tomato disease. The use of the proposed hybrid algorithm in the smart agriculture field for the detection of late blight on tomatoes would help increase and protect production. The objectives are to enhance the image dataset through the inclusion of early-affected and late-blight tomato leaves, to propose a HOG-based SVM model for early detection of late blight tomato leaf disease, and to check the performance of the proposed model in terms of MSE, accuracy, precision, and recall as compared to Decision Tree and KNN. The integration of advanced technology in agriculture has the potential to revolutionize the industry, making it more efficient, sustainable, and profitable. This research work on the early detection of tomato diseases contributes to the growing importance of smart farming, the need for climate-smart agriculture, the rising need to more efficiently utilize natural resources, and the demand for higher crop yields. The proposed hybrid algorithm of SVM and HOG has significant potential for the early detection of late blight disease in tomato plants. The performance of the proposed model is compared against decision tree and KNN algorithms, and the results may assist in selecting the best algorithm for future applications. The research work can help farmers make data-driven decisions to optimize crop yield and quality while also reducing the environmental impact of farming practices._
_HOG, KNN, MSE, SVM_
## I Introduction
Tomatoes are among the most widely cultivated crops in the world. The annual estimated production in recent years has been over 100 million metric tons. For production to increase, we need smart agricultural techniques. Five diseases are particularly common in tomatoes, and they have a severe effect on tomato production. Late blight, caused by a fungus, is one of the common diseases that reduce production. Early detection of such dangerous microbes enables us to spray the required pesticides and ensure crop protection. Due to climate change, the numbers of pollinators are decreasing (Bir et al., 2020). Some insects that help in pollination may also attack leafy parts of the plants and destroy crops.
In smart agriculture, there are improved computational methods that can help us detect pathogens and parasites in their early stages (Hasan et al., 2019). Smart farming is the concept that provides the farming industry with the infrastructure it requires to utilize modern technology, such as big data, the cloud, and the Internet of Things (IoT), for automating, tracking, and analyzing activities. Smart farming, also referred to as precision agriculture, is software-managed and sensor-monitored (Elhassouny & Smarandache, 2019). The need for climate-smart agriculture, the rising need to more efficiently utilize natural resources, the demand for higher crop yields, the growing use and sophistication of information and communication technology, and the expanding global population all contribute to the growing importance of smart farming.
Machine learning is a branch of artificial intelligence that enables computer systems to learn from data and apply that learning to predict. It has proven to be a valuable tool in various domains, including image recognition, natural language processing, robotics, and data analytics. The fundamental principle behind machine learning is to enable the computer system to learn from past experiences and apply that learning to future situations. The process of machine learning involves feeding large amounts of data into an algorithm, which then uses statistical methods to identify patterns and relationships within the data.
The research uses the Support Vector Machine (SVM) and the Histogram of Oriented Gradients (HOG), two well-established computational techniques, to detect pathogens causing tomato leaf disease (Zhang et al., 2018). Dalal & Triggs (2005) originally used HOG for human detection, and similar techniques are now used to identify different diseases in tomato leaves. Smart agriculture refers to the use of the Internet of Things, sensors, navigational aids, and artificial intelligence on the farm. The ultimate purpose is to reduce the reliance on manual labor while raising crop yield and quality (Kumar & Vani, 2019).
The histogram of oriented gradients (HOG) technique detects edges and corners in an image by analyzing the gradient of the image intensity (Tyagi, 2021). HOG generates a histogram of the directions of the gradients, which is used as a feature vector (Tyagi, 2021).
The pathogen Phytophthora infestans is the source of the fungal disease known as late blight that affects tomato plants. It can result in severe crop losses and is a critical issue in warm, humid settings. Gray spots on the leaves, stems, fruit, and flowers constitute the symptoms. A red ring may appear around the white center of certain lesions. As the disease spreads quickly in warm, moist environments, it is crucial to take precautions to lower the risk of infection: infected plants should be removed, crops rotated, and fungicides applied. This research uses a Support Vector Machine (SVM) classifier and a Histogram of Oriented Gradients (HOG) feature descriptor to identify late blight tomato disease. The dataset used includes 1900 images of late blight disease in tomato leaves obtained from the field and the Kaggle Plant Village dataset.
The research methodology includes dataset preparation, feature extraction using HOG descriptors, and training of an SVM classifier. The model's performance is evaluated using accuracy, precision, recall, and mean square error (MSE). Data augmentation techniques such as flipping and rotation are applied to enhance the dataset. The proposed HOG-based SVM algorithm is implemented in Jupyter Notebook using Python. The dataset is split into training (70%) and testing
(30%) subsets. This research can be helpful for real-time detection of late blight tomato disease and for crop protection and productivity in tomato farming.
### _Research Background_
Tomatoes are a widely cultivated crop, with an annual production of over 100 million metric tons. However, tomato production is severely impacted by five common diseases, including the fungus-caused late blight. Early detection of these diseases is crucial for crop protection and increased production. Climate change and reduced pollinator populations can also harm tomato crops. Smart agriculture techniques and improved computational methods can aid in the early detection of pathogens and parasites. Smart farming, which utilizes cutting-edge technology for automating, tracking, and analyzing activities, is becoming increasingly important due to the need for climate-smart agriculture, efficient resource utilization, higher crop yields, and the expanding global population.
The pathogen Phytophthora infestans is the source of the fungal disease known as late blight that affects tomato plants. It can result in severe crop losses and is a critical issue in warm, humid settings. Gray spots on the leaves, stems, fruit, and flowers constitute the symptoms. A red ring may appear around the white center of certain lesions. As the disease spreads quickly in warm, moist environments, it is crucial to take precautions to lower the risk of infection. Infected plants should be removed, crops rotated, and fungicides should be used. Figure 1 shows the symptoms of late blight tomato disease. They usually begin with dark green to brownish-black water-soaked areas on the leaves, which may later become yellow or brown.
In the field of smart agriculture, using machine learning algorithms for early tomato disease detection is becoming more vital. The use of these technologies can help farmers detect diseases at an early stage and take the necessary actions to prevent crop loss. In this study, support vector machine (SVM) and histogram-oriented gradient (HOG) algorithms are used to detect late blight disease in tomato plants. The concept of smart farming has gained traction in recent years, as it provides farmers with the ability to automate and monitor their agricultural activities using sensors, the cloud, and the Internet of Things (IOT). This research can help farmers make data-driven decisions to optimize crop yield and quality while also reducing the environmental impact of farming practices. The impact of climate change has also made it necessary to adopt climate-smart agriculture practices, which promote sustainable farming methods and reduce the carbon footprint of farming. The integration of advanced technology in agriculture has the potential to revolutionize the industry, making it more efficient, sustainable, and profitable.
### _Motivation_
Coming from an agricultural country, we are well aware of the damage caused by late blight disease in tomatoes. This disease affects the balance of supply and demand in the local marketplace. Early detection through smart techniques may help farmers and the relevant agricultural pathology authority take preemptive measures. The productivity of the tomato crop will naturally increase with the early detection of late blight disease through the proposed hybrid strategy. This study used HOG and SVM as a hybrid strategy to detect or identify affected leaves in the early stages. HOG extracts features from tomato leaf images, and SVM determines whether a tomato leaf is healthy or infected with the late blight disease.
### _Research Significance_
This study highlights the importance of smart agriculture and the need for early detection of tomato diseases, specifically the late blight disease caused by the pathogen Phytophthora infestans, which results in severe crop losses. The annual production of tomatoes is over 100 million metric tons worldwide, and with the increasing global population and demand for higher crop yields, there is a growing need for smart farming techniques. The development of dark, sunken, and greasy spots on infected fruit is a common symptom of late blight in tomatoes, as shown in figure 2.
This study presented a hybrid approach composed of support vector machines (SVM) and histogram-oriented gradients (HOG) with the aim of detecting the late blight disease in tomato plants in real time. The research objectives are to improve the dataset by including early-affected late blight tomato leaves and to achieve early detection of the disease using the proposed hybrid strategy. Early detection of the disease through this hybrid algorithm could help farmers and relevant authorities take preemptive measures to protect the tomato crop, which could result in an increase in productivity and protect the country's economy.
## II Paper Organization
This paper is structured into seven sections, beginning with an introductory section that discusses the research topic of late blight disease detection and classification in tomato plants using machine learning techniques. It covers the research background and the significance of the study.
The third and fourth sections provide an extensive literature review on late blight disease in tomato plants, covering its management techniques, machine learning algorithms, feature extraction methods such as HOG, and classification algorithms such as SVM, Decision Tree, and KNN. These sections also include a comparison of related work.
Figure 1: Late Blight Tomato Leaves
Figure 2: Late Blight Tomato
Section five outlines the research methodology, including data preprocessing, the research flow chart, the simulation environment, dataset details, data splitting, and the proposed HOG-based SVM algorithm.
Section six presents the results and discussions of the research study, including an introduction, preliminary study, performance evaluation, and a summary of the findings. The results demonstrate that the proposed HOG-based SVM algorithm outperforms other classification algorithms, achieving an accuracy rate of 82%.
The final section offers a conclusion to the research study, discussing the research objectives, findings, and contributions in the field of machine learning in plant disease detection and classification. It also discusses future directions, highlighting the importance of improving algorithm accuracy and expanding the research scope to encompass other plant diseases.
## III Literature Review
Arakeri et al. (2015) introduced a technique for the detection of tomato leaf diseases based on image processing. They worked on a particular disease caused by fungi called late blight, whose early signs are a non-uniform leaf shape and water-soaked lesions. Their proposed model detected and analyzed the disease using computer vision with a thresholding algorithm and the K-means clustering algorithm to identify whether a leaf is affected or healthy, achieving an accuracy of 80% to 85%.
Durmus et al. (2017) worked on the detection of tomato leaf diseases in a tomato greenhouse. The team investigated whether different diseases of tomato leaves could be detected from close-up photographs of the leaves taken with sensors. Their main challenge was the selection of a deep learning architecture, so two approaches, AlexNet and SqueezeNet, were trained, tested, and validated on an NVIDIA Jetson TX1. They used the Plant Village tomato leaf dataset and tested their trained network on random online images.
Zhang et al. (2018) used reinforcement learning to identify diseases affecting tomato leaves. They employed a deep convolutional neural network (CNN) that successfully detects tomato leaf disease; the architectures considered were AlexNet, GoogLeNet, and ResNet. ResNet trained with stochastic gradient descent performed best, reaching a precision of 97.28% for finding tomato leaf disease.
Elhassouny and Smarandache (2019) developed a smartphone application for tomato leaf disease detection. The application was based on deep learning, using a convolutional neural network inspired by MobileNet, which is fast and has a small architecture. The dataset contained more than 7100 tomato leaf images.
Hasan et al. (2019) used transfer learning to detect tomato leaf disease, working on deep-learning-based precision farming with CNNs. To precisely identify highly affected areas, they introduced a drone-based farming system. Their dataset consisted of 500 images of affected leaves from a farm and 2100 images from the internet. They used transfer learning to train the CNN, reaching 99% accuracy after increasing the training portion of the dataset to 85%.
Kumar and Vani (2019) proposed a CNN model for the detection of tomato leaf diseases by performing experiments with convolutional neural networks. The deep CNN was trained on the Plant Village dataset of 4900 images, including healthy and affected leaves, with an accuracy of 99%.
Bir et al. (2020) introduced a mobile application to detect tomato leaf diseases based on transfer learning. A convolutional neural network was used for the classification and detection of healthy and affected leaves. This approach of smartphone-based classification and detection is now widely used and has found many applications in robotics, healthcare, and agriculture.
Zaki et al. (2020) classified tomato leaf diseases using MobileNetV2, which has more than 50 layers; its small size allows the network to run much faster. The model was able to detect three types of tomato leaf diseases and was trained on more than 4500 leaf images from the Plant Village dataset. The algorithm was tested on the same dataset and achieved an accuracy of 92%.
Ashok et al. (2020) diagnosed tomato leaf diseases using a deep learning methodology supported by intensive field work. Most of their work was based on image processing techniques using image segmentation, clustering, and open-source algorithms. This work helped in identifying different tomato leaf diseases caused by fungi and viruses. The disease diagnosis method was very reliable and accurate, especially for tomato leaf diseases.
Gadade and Kirange (2021) detected tomato leaf diseases at different phases of development using machine learning. The effectiveness of several classification approaches, such as SVM, KNN, Naive Bayes, Decision Tree, and LDA, was evaluated. The study showed that the model offers a useful method for classifying the severity of tomato leaf spot. They used a machine learning approach, but the accuracy is not mentioned.
Sharmila et al. (2021) carried out work on the classification of pests that cause diseases in tomato leaves. The classification was based on seven models, where the proposed CNN model showed an average accuracy of 93% and was more accurate than the other conventional machine learning models.
Nanni et al. (2022) tried different learning models, such as ResNet, MobileNet, and GoogLeNet, with different datasets. The data they used were the Deng (small) and IP102 datasets, which are mostly used for pest recognition. They modified their models with the Adam optimizer, achieving 96% accuracy for the small dataset and 78% accuracy for the IP102 dataset.
## IV Related Work
Several techniques have been proposed for identifying and categorizing tomato leaf diseases using machine learning algorithms. Arakeri et al. (2015) used image processing to detect late blight disease with 80-85% accuracy. Durmus et al. (2017) employed deep learning methods, specifically the AlexNet and SqueezeNet architectures, on the Plant Village dataset and achieved high precision. Zhang et al. (2018) utilized a deep CNN model with reinforcement learning to attain 97.28% accuracy in detecting tomato leaf disease.
Similarly, Hasan et al. (2019) achieved 99% accuracy on their dataset using transfer learning and CNNs. Kumar and Vani (2019) reported similar accuracy rates of 99% using the Plant Village dataset with a CNN model. Bir et al. (2020) created a mobile application for tomato leaf disease detection by employing transfer learning and CNNs. Elhassouny and Smarandache (2019) developed a smartphone application using a MobileNet-based CNN architecture that performed with high accuracy on a dataset containing more than 7100 tomato leaves. Nanni et al. (2022) compared the accuracy of various CNN models on different datasets, achieving 96% and 78% for the small and IP102 datasets, respectively. Sharmila et al. (2021) employed a CNN model for the classification of pests that cause diseases in tomato leaves with an average accuracy of 93%. Zaki et al. (2020) reported 92% accuracy using MobileNetV2 for tomato leaf disease classification. Gadade and Kirange (2021) evaluated the efficacy of several classification methods, such as SVM, KNN, Naive Bayes, Decision Tree, and LDA, at different stages of tomato leaf disease development, while Ashok et al. (2020) applied deep learning techniques to identify various tomato leaf diseases caused by fungi and viruses reliably and accurately.
The utilization of machine learning algorithms, such as SVM with HOG, can be a cost-effective alternative to neural networks for tomato leaf disease classification.
## V ResearchMethodology
### _Study and Problem_
The research methodology employed in this study primarily revolves around evaluating the efficacy of the HOG-based SVM algorithm for detecting late blight tomato disease. The main objective of the research is to address the issue of early detection and management of the disease, with the aim of improving tomato production outcomes.
### _Data Collection_
In the second phase of the research, the focus is on assembling the dataset for late blight tomato disease. This is achieved by gathering a collection of 1200 images from the Plant Village dataset obtained from Kaggle, as shown in figure 4, along with an additional 100 images captured directly from the tomato field, as shown in figure 5. To enrich the custom dataset, data augmentation techniques, including horizontal/vertical flipping and rotation, are applied to the 100 field images. Consequently, this field subset expands to a total of 700 images. The resulting dataset serves as a foundation for subsequent analysis and experimentation in the study.
### _Data Preprocessing_
In the research methodology, the third stage involves data preprocessing aimed at augmenting the custom dataset. This
Figure 4: Dataset from Kaggle
Figure 3: Proposed Methodology
Figure 6: Field of Dataset Collection
Figure 7: QR-Code to Map of Tomato Field
is accomplished by applying transformations such as horizontal/vertical flip and rotation at specific angles (45, 135, 225, and 315 degrees). These preprocessing techniques are implemented to enhance the dataset's diversity and facilitate improved performance in subsequent stages of the study.
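A minimal sketch of this augmentation step is given below; it assumes the Pillow library, and the directory layout and file names are hypothetical. Each original image yields six transformed copies (two flips and four rotations), which is consistent with the 100 field images expanding to 700.

```python
from pathlib import Path
from PIL import Image, ImageOps

ANGLES = (45, 135, 225, 315)  # rotation angles used in this study

def augment(image_path: Path, out_dir: Path) -> None:
    """Write flipped and rotated copies of one leaf image to out_dir."""
    img = Image.open(image_path).convert("RGB")
    stem = image_path.stem
    ImageOps.mirror(img).save(out_dir / f"{stem}_hflip.jpg")  # horizontal flip
    ImageOps.flip(img).save(out_dir / f"{stem}_vflip.jpg")    # vertical flip
    for angle in ANGLES:
        img.rotate(angle, expand=True).save(out_dir / f"{stem}_rot{angle}.jpg")

# Example usage with hypothetical paths:
# for path in Path("field_images").glob("*.jpg"):
#     augment(path, Path("augmented_images"))
```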
### _Data Partitioning_
The dataset is partitioned into two subsets: training and testing. The training subset comprises 70% of the total dataset, while the remaining 30% is allocated for testing purposes. It is important to note that the dataset consists of a total of 1900 images. This partitioning strategy ensures a suitable distribution of data for training the model and evaluating its performance accurately during testing.
### _HOG based SVM_
The proposed HOG-based SVM algorithm is utilized to enable early detection of late blight tomato disease. Implemented in Jupyter Notebook and Python, the algorithm undergoes training using a labeled dataset comprising 1330 images. The HOG technique acts as a feature descriptor, partitioning the image into smaller cells and calculating gradient orientation and magnitude within each cell. This extraction process captures pertinent image information. The resulting histograms of gradient orientations are combined to form a feature vector representing the image. Subsequently, the SVM classifier is trained using these feature vectors to classify new images as infected or healthy.
To facilitate training the SVM classifier for late blight tomato disease classification, a labeled dataset containing images of both infected and healthy tomato plants is employed. The HOG feature descriptor is applied to extract distinctive features from these labeled images, enabling the subsequent training of the SVM classifier as shown in figure 9.
To enhance the model's performance, the hyperparameters of the SVM classifier, including the regularization parameter and the choice of kernel function, can be adjusted. Additionally, incorporating image processing techniques like image segmentation can further improve system accuracy. These additional techniques complement the HOG-based SVM approach, refining the classification process.
The graphical representation of the proposed HOG-based SVM model for the classification of late blight tomato leaf disease is shown in figure 10. The proposed model's pseudo code is provided below for a more detailed overview:
#### Pseudo code: HOG based SVM for Classification of Late Blight Tomato Leaf Disease
1. Start
2. Input: Late blight tomato leaf dataset
3. Output: Detection of late blight tomato disease
4. Input: Image of tomato leaf
5. Initialize: HOG descriptor for feature extraction and SVC for detection
\begin{table}
\begin{tabular}{|c|c|} \hline
**Dataset** & **No. of Images** \\ \hline
**Training** & 1330 \\ \hline
**Testing** & 570 \\ \hline
**Total** & 1900 \\ \hline \end{tabular}
\end{table}
Table 1: Data Partition
Figure 8: Data Distribution
Figure 10: Graphical sketch of the proposed model
Figure 9: Flow diagram
**6**: **HOG:** Extract features from the images and convert them into vector format
**7**: **SVM:** Detect late blight disease in tomato leaf image
**8**: **Evaluate:** MSE, accuracy, precision, and recall
**9**: **Output:** Prediction of late blight disease for tomatoleaf
**10**: **End**
The given pseudo code provides an overview of the steps involved in applying HOG-based SVM to classify late blight tomato leaf disease. The process begins by initializing the HOG descriptor and SVM classifier using a dataset of late blight tomato leaf images. Then, a tomato leaf image is processed using the HOG technique to extract its features, which are transformed into a vector format. The SVM classifier is employed to determine whether the image exhibits late blight disease. Performance evaluation is conducted using metrics such as MSE, accuracy, precision, and recall. Finally, the model predicts the presence or absence of the late blight disease in the examined tomato leaf.
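As a concrete, simplified illustration of the pipeline described by the pseudo code, the sketch below uses scikit-image's HOG descriptor and scikit-learn's SVC. The directory layout, image size, and HOG parameters (orientations, cell and block sizes) are illustrative assumptions rather than the exact settings used in this study.

```python
import numpy as np
from pathlib import Path
from PIL import Image
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, mean_squared_error

def hog_features(image_path, size=(128, 128)):
    """Resize a leaf image, convert it to grayscale, and return its HOG feature vector."""
    img = Image.open(image_path).convert("L").resize(size)
    return hog(np.asarray(img), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical layout: dataset/healthy/*.jpg and dataset/late_blight/*.jpg
X, y = [], []
for label, folder in enumerate(["healthy", "late_blight"]):
    for path in Path("dataset", folder).glob("*.jpg"):
        X.append(hog_features(path))
        y.append(label)
X, y = np.array(X), np.array(y)

# 70/30 train/test split, as used in this study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("MSE      :", mean_squared_error(y_test, pred))
```

The kernel and regularization parameter shown here are defaults; as noted above, these hyperparameters can be tuned to improve performance.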
### _Evaluation Parameters_
Performance evaluation parameters, including the confusion matrix, accuracy, precision, recall, and Mean Square Error (MSE), are essential for assessing model effectiveness and guiding improvements.
#### V-F1 Accuracy
Accuracy refers to how closely the measured or predicted value aligns with the reference or actual value. In the context of classification, it is the proportion of correctly classified samples among all samples, computed from the confusion matrix as follows:
Accuracy\(=\frac{(TP+TN)}{(TP+TN+FP+FN)}\)(Eq. 1)
#### V-F2 Precision
Precision is a metric used to assess the accuracy of positive predictions by evaluating the ratio of true positive instances to the total number of instances that were predicted as positive. It quantifies the proportion of correctly identified positive cases among all the instances classified as positive.
Precision\(=\frac{TP}{(TP+FP)}\)(Eq. 2)
#### V-F3 Recall
Recall measures the proportion of true positive instances correctly predicted among all the actual positive instances present in the dataset. It quantifies the ability of a model to capture and identify positive cases accurately.
Recall\(=\frac{TP}{(TP+FN)}\)(Eq. 3)
#### V-F4 Mean Squared Error (MSE)
The Mean Squared Error (MSE) of an estimator is the average of the squared differences between the predicted and expected values. It offers a convenient quantity for gradient-based optimization, and because large errors are penalized quadratically, it emphasizes how closely the fitted model aligns with the data. The MSE is computed as follows:
Mean Square Error (MSE) \(=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-y_{i})^{2}\)(Eq. 4)
#### V-F5 Confusion Matrix
The confusion matrix provides valuable insights into the classifier's accuracy and misclassifications (Maheswaran et al., 2022; Ting et al., 2010). It is a two-dimensional matrix that represents the classification performance of a classifier, indexed by the true class of an object and the class assigned by the classifier. For this study, the matrix is divided into four cells: true positive, false negative, false positive, and true negative. True positive indicates the number of positive samples correctly identified as positive by the model. False negative represents the instances where a positive sample was incorrectly classified as negative. False positive represents the cases where a negative sample was wrongly classified as positive. True negative indicates the number of negative samples correctly identified as negative by the model.
## VI Experimental Results
### _Accuracy and MSE_
The research examined the accuracy and MSE values for three models: the proposed HOG-based SVM, HOG with Decision Tree (DT), and HOG with K-nearest neighbors (KNN). The HOG + SVM model achieves the highest accuracy of 0.82 and the lowest MSE of 0.177193, indicating its superior performance in detecting late blight tomato disease. In comparison, the HOG + DT model has a higher MSE of 0.340351 and a lower accuracy of 0.66. The HOG + KNN model performs even worse, with an accuracy of about 0.56 and an MSE of 0.442105. The numbers clearly show that the HOG-based SVM model outperforms the other models in terms of accuracy and MSE, as shown in Table 2.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Model** & **Accuracy** & **MSE** \\ \hline
HOG + SVM & 0.82 & 0.177193 \\ \hline
HOG + DT & 0.66 & 0.340351 \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy and MSE of the HOG-based models
Figure 11: Confusion Matrix
### _Classification Report_
The research presents the classification report of the HOG based SVM model, including precision, recall, and F1-score values for two classes. Class 0 has a precision of 0.804636, recall of 0.852632, and F1-score of 0.827939, while class 1 has a precision of 0.843284, recall of 0.792982, and F1-score of 0.81736. The model achieves an overall accuracy of 0.82396, with macro-averaged precision, recall, and F1-score of 0.804636, 0.852632, and 0.827939, respectively as shown in table 3.
The classification report of the HOG-based DT model shows class 0 precision, recall, and F1-score of 0.64918, 0.694737, and 0.671186, and class 1 precision, recall, and F1-score of 0.671698, 0.624561, and 0.647273. The model's accuracy is 0.660439, and its macro-averaged precision, recall, and F1-score are 0.64918, 0.694737, and 0.671186, respectively, as shown in table 4.
Table 5 provides the classification report of the HOG-based KNN model, indicating class 0 precision, recall, and F1-score of 0.537079, 0.838596, and 0.654795, and class 1 precision, recall, and F1-score of 0.632, 0.277193, and 0.385366. The model achieves an accuracy of 0.584539, and its macro-averaged precision, recall, and F1-score are 0.537079, 0.838596, and 0.654795, respectively.
### _Confusion Matrix_
The figure 12 reveals that the HOG with SVM model for late blight tomato accurately predicted 243 instances of healthy tomatoes as true negatives (TN) and 226 instances of late blight tomatoes as true positives (TP). However, there were 42 instances of healthy tomatoes that were falsely predicted as late blight (false positives, FP) and 59 instances of late blight tomatoes that were falsely predicted as healthy (false negatives, FN).
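These counts are sufficient to reproduce the headline metrics reported for the HOG with SVM model; the short check below uses the values quoted above (TP = 226, TN = 243, FP = 42, FN = 59) together with Eqs. (1)-(3), and notes that for 0/1 predictions the MSE reduces to the misclassification rate.

```python
TP, TN, FP, FN = 226, 243, 42, 59
total = TP + TN + FP + FN          # 570 test images

accuracy = (TP + TN) / total       # Eq. (1)
precision = TP / (TP + FP)         # Eq. (2)
recall = TP / (TP + FN)            # Eq. (3)
mse = (FP + FN) / total            # MSE for binary 0/1 predictions

print(f"accuracy  = {accuracy:.4f}")   # ~0.8228
print(f"precision = {precision:.4f}")  # ~0.8433
print(f"recall    = {recall:.4f}")     # ~0.7930
print(f"mse       = {mse:.4f}")        # ~0.1772
```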
Figure 13 shows the confusion matrix of the HOG with Decision Tree model for late blight tomato. The model accurately classified 198 instances of healthy tomatoes as true negatives and 178 instances of late blight tomatoes as true positives. However, there were 87 false positive predictions, indicating that 87 instances of healthy tomatoes were incorrectly classified as late blight. Additionally, the model had 107 false negative predictions, meaning that 107 instances of late blight tomatoes were wrongly classified as healthy.
\begin{table}
\begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & **Precision** & **Recall** & **F1-Score** \\ \hline
0 & 0.64918 & 0.694737 & 0.671186 \\ \hline
1 & 0.671698 & 0.624561 & 0.647273 \\ \hline
**accuracy** & 0.660439 & 0.659649 & 0.65923 \\ \hline
**macro avg** & \multirow{2}{*}{0.64918} & \multirow{2}{*}{0.694737} & \multirow{2}{*}{0.671186} \\ \cline{1-1} \cline{5-5}
**weighted avg** & & & \\ \hline \end{tabular}
\end{table}
Table 4: Classification Report of HOG based DT
\begin{table}
\begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & **Precision** & **Recall** & **F1-Score** \\ \hline
**0** & 0.804636 & 0.852632 & 0.827939 \\ \hline
**1** & 0.843284 & 0.792982 & 0.81736 \\ \hline
**accuracy** & 0.82396 & 0.822807 & 0.822649 \\ \hline
**macro avg** & \multirow{2}{*}{0.804636} & \multirow{2}{*}{0.852632} & \multirow{2}{*}{0.827939} \\ \cline{1-1} \cline{5-5}
**weighted avg** & & & \\ \hline \end{tabular}
\end{table}
Table 3: Classification Report of HOG based SVM
Figure 12: Confusion Matrix of HOG based SVM
\begin{table}
\begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & **Precision** & **Recall** & **F1-Score** \\ \hline
**0** & 0.537079 & 0.838596 & 0.654795 \\ \hline
**1** & 0.632 & 0.277193 & 0.385366 \\ \hline
**accuracy** & 0.584539 & 0.557895 & 0.52008 \\ \hline
**macro avg** & \multirow{2}{*}{0.537079} & \multirow{2}{*}{0.838596} & \multirow{2}{*}{0.654795} \\ \cline{1-1} \cline{5-5}
**weighted avg** & & & \\ \hline \end{tabular}
\end{table}
Table 5: Classification Report of HOG based KNN
The confusion matrix of HOG-based KNN for late blight tomato reveals that the model accurately classified 239 instances of healthy tomatoes and 79 instances of late blight tomatoes. However, there were 46 false positive predictions, indicating that 46 instances of healthy tomatoes were incorrectly classified as late blight. In addition, there were 206 false negative predictions, meaning that 206 instances of late blight tomatoes were wrongly classified as healthy. The model therefore exhibited a particularly high number of false negatives, frequently misclassifying late blight tomatoes as healthy.
## VII Conclusion
The research study compared the effectiveness of the Histogram of Oriented Gradients (HOG) combined with Support Vector Machine (SVM), Decision Tree, and K-Nearest Neighbors (KNN) models in predicting late blight disease in tomatoes. The experimental results demonstrated that the HOG with SVM model outperformed the other models, achieving a lower Mean Squared Error (MSE) of 0.1772 and a higher accuracy of 0.82. The classification report revealed that the HOG with SVM model exhibited high precision, recall, and F1-score values for both healthy and late blight classes, indicating its effectiveness in predicting the disease. The macro-average precision, recall, and F1-score values were also high, at 0.82, demonstrating the model's performance across all classes. The confusion matrix further supported the findings, showing accurate identification of true positives and true negatives by the HOG with SVM model.
The proposed HOG based SVM model can be utilized to create efficient and automated detection systems, which can greatly reduce crop losses and enhance agricultural productivity. The enriched image dataset resulting from this research can contribute to future studies aiming to improve the accuracy of machine learning models in detecting the disease. It makes valuable contributions to the advancement of effective methods for detecting and preventing late blight tomato leaf disease, thereby benefiting the agricultural industry.
|
2305.07545 | KmerCo: A lightweight K-mer counting technique with a tiny memory
footprint | K-mer counting is a requisite process for DNA assembly because it speeds up
its overall process. The frequency of K-mers is used for estimating the
parameters of DNA assembly, error correction, etc. The process also provides a
list of district K-mers which assist in searching large databases and reducing
the size of de Bruijn graphs. Nonetheless, K-mer counting is a data and
compute-intensive process. Hence, it is crucial to implement a lightweight data
structure that occupies low memory but does fast processing of K-mers. We
proposed a lightweight K-mer counting technique, called KmerCo that implements
a potent counting Bloom Filter variant, called countBF. KmerCo has two phases:
insertion and classification. The insertion phase inserts all K-mers into
countBF and determines distinct K-mers. The classification phase is responsible
for the classification of distinct K-mers into trustworthy and erroneous K-mers
based on a user-provided threshold value. We also proposed a novel benchmark
performance metric. We used the Hadoop MapReduce program to determine the
frequency of K-mers. We have conducted rigorous experiments to prove the
dominance of KmerCo compared to state-of-the-art K-mer counting techniques. The
experiments are conducted using DNA sequences of four organisms. The datasets
are pruned to generate four different size datasets. KmerCo is compared with
Squeakr, BFCounter, and Jellyfish. KmerCo took the lowest memory, highest
number of insertions per second, and a positive trustworthy rate as compared
with the three above-mentioned methods. | Sabuzima Nayak, Ripon Patgiri | 2023-04-28T10:14:01Z | http://arxiv.org/abs/2305.07545v1 | # KmerCo: A lightweight K-mer counting technique with a tiny memory footprint
###### Abstract.
K-mer counting is a requisite process for DNA assembly because it speeds up its overall process. The frequency of K-mers is used for estimating the parameters of DNA assembly, error correction, etc. The process also provides a list of distinct K-mers which assist in searching large databases and reducing the size of de Bruijn graphs. Nonetheless, K-mer counting is a data and compute-intensive process. Hence, it is crucial to implement a lightweight data structure that occupies low memory but does fast processing of K-mers. We proposed a lightweight K-mer counting technique, called KmerCo that implements a potent counting Bloom Filter variant, called countBF. KmerCo has two phases: insertion and classification. The insertion phase inserts all K-mers into countBF and determines distinct K-mers. The classification phase is responsible for the classification of distinct K-mers into trustworthy and erroneous K-mers based on a user-provided threshold value. We also proposed a novel benchmark performance metric. We used the Hadoop MapReduce program to determine the frequency of K-mers. We have conducted rigorous experiments to prove the dominance of KmerCo compared to state-of-the-art K-mer counting techniques. The experiments are conducted using DNA sequences of four organisms. The datasets are pruned to generate four different size datasets. KmerCo is compared with Squeakr, BFCounter, and Jellyfish. KmerCo took the lowest memory, highest number of insertions per second, and a positive trustworthy rate as compared with the three above-mentioned methods.
**PVLDB Reference Format:**
Sabuzima Nayak and Ripon Patgiri. KmerCo: A lightweight K-mer counting technique with a tiny memory footprint. PVLDB, 14(1): XXX-XXX, 2020. doi:XX.XX/XXXXXX
**PVLDB Artifact Availability:**
The source code, data, and/or other artifacts have been made available at [https://github.com/patgiri/KmerCo-Main](https://github.com/patgiri/KmerCo-Main).
## 1. Introduction
Gregor Mendel discovered the genes in peas (Majaj et al., 2017) whereas rules of genes were discovered in red bread mold (Brandrandt, 2017). DNA was discovered in salmon (Brandt, 2017) and some information regarding the encapsulation of DNA was known from tardigrades (Brandt, 2017). Chromosomes were first noticed in mealworms, likewise, sex chromosomes were discovered in beetles (Brandt, 2017) whereas its function and replication were explored in platypus and fish (Sabuzima et al., 2017). This illuminates the importance of DNA sequencing of organisms. Genome sequencing enhances our understanding regarding the complexity of the evolution of life, its functioning, and the protection of our biodiversity. The DNA sequencing of all organisms is essential because it helps to compare the DNA sequences and some unique features of the organisms. Although this may be important, DNA sequencing is a complex process. The next generation sequencing is efficiently generating genomic data by reading short reads. A _read_ is a DNA subsequence of length 20-30000 bases. The _read_s have many overlapping regions with other _read_s. Moreover, errors are introduced in the _read_s during the electrical and chemical processing. DNA assembler removes errors and arranges the _read_s to obtain the DNA sequence. Hence, the foremost tasks of DNA assembler are error removal and identification of distinct K-mers. A single process, i.e., K-mer counting completes both tasks.
### K-mer counting
The "mer" is a Greek word that means "part". K-mer means a sub-string of length K. K-mers of a DNA sequence are all possible consecutive K-mers of length K. Figure 1 illuminates an example for better understanding. Suppose **GGTCTCTAT** is a DNA sequence. The left side of the figure represents the method to consider the consecutive 3-mers and the right side of the figure lists the 3-mers. The number of K-mers in a DNA sequence is sequence length-K+1 where sequence length is the number of nucleotide present in the DNA sequence. K-mer counting is a process of counting the frequency of the K-mers in a DNA sequence (Kumar et al., 2017). This process takes DNA files as input, extracts K-mers, and counts their frequency. As output, it generates distinct K-mers and acts as a classifier to eliminate the low-frequency K-mers.
**Why do we count the K-mers?** The answer is as follows- (a) Speedup DNA Assembly: Some DNA assembly techniques speed up the overall process using K-mer counting. For instance, overlap
layout consensus (OLC) searches the read overlaps which is a slow process. K-mer counting speeds up this process (Kamer, 2016). (b) Calculating DNA Assembly parameters: The count of distinct and trustworthy K-mers helps in determining the parameters required in the DNA Assembly process. (c) Error correction: K-mers are highly repetitive even in a small fragment of a DNA sequence (Kamer, 2016). For this reason, a few times occurring K-mer is an error. An error occurs during genomic data collection due to incorrect reading of a few bases, or the addition or removal of a DNA fragment. (d) Metagenomics: It identifies the K-mers present in the DNA sequence (Kamer, 2016), for example, verifying the presence of a protein in a sample DNA sequence. (e) Searching in large datasets: The distinct K-mers generated by the K-mer counting techniques are used to search in a DNA bank to identify the DNA sequence having those K-mers. (f) Small size de Bruijn graph: K-mer counting provides the distinct K-mers and their frequency which helps in quick construction and size reduction of the de Bruijn graph.
Figure 1. Graphical explanation of K-mer.
### Bloom Filter
Bloom Filter (Bloom, 1977; Bloom, 1978) is a probabilistic simple bit array data structure used for determining the membership of an item. Bloom Filter does not store the original data; rather the input item is mapped to Bloom Filter bits which helps to store many items using a small-sized bit array. Figure 2 illustrates the architecture and operation of the standard Bloom Filter. Bloom Filter is a bit array where each bit is set to either 0 or 1. Initially, all slots are set to 0. Bloom Filter performs two operations: insertion and query. An input item is hashed by the hash function(s), say \(k_{h}\). The hashed value determines the slot location which is set to 1. The \(k_{h}\) slots are set to 1, as illustrated by inserting items X and Y in Figure 2. The query operation follows the same procedure as the insertion operation to obtain the bit locations. If all slots are 1, then the item is present; if at least one slot is 0, then the item is absent. In Figure 2, item X is present whereas item Z is absent. The time complexity of insertion and query operation is \(O(k_{h})\approx O(1)\). Consider the query of item U in the figure, U is not inserted but during the query operation, all slots are 1. This situation is created by the insertion of X and Y. The slots obtained by the hashed values of U are colliding with the slots of X and Y. The true response returned by Bloom Filter in such a query operation is called a false positive. Therefore, the main aim while proposing a new Bloom Filter variant is to reduce the false positive probability (FPP). A variant of Bloom Filter is Counting Bloom Filter (CBF) (Kamer, 2016) which is proposed to reduce FPP. Each slot is partitioned into a bit and a counter of a few bits. Initially, all slots are set to 0. It follows the same procedure to obtain the slots, the bit and counter are set to 1. Only the counter is incremented if a new item is hashed to the same slot. The counter keeps the frequency of the items. However, CBF has a counter overflow issue.
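The behaviour described above can be sketched in a few lines of Python; this is an illustrative toy in which the salted built-in hash stands in for \(k_{h}\) independent hash functions and the filter size of 64 bits is arbitrary:

```python
class BloomFilter:
    """Minimal standard Bloom Filter: a bit array plus k_h salted hash functions."""

    def __init__(self, m: int, k_h: int):
        self.m, self.k_h = m, k_h
        self.bits = [0] * m

    def _positions(self, item: str):
        # Salting the built-in hash stands in for k_h independent hash functions.
        return [hash((salt, item)) % self.m for salt in range(self.k_h)]

    def insert(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item: str) -> bool:
        # True may be a false positive; False is always correct.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(m=64, k_h=3)
bf.insert("X"); bf.insert("Y")
print(bf.query("X"), bf.query("Z"))   # True, and (almost certainly) False
```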
### Challenges
K-mer counting is a data-intensive and compute-intensive task. K-mer counting takes half of the total computation time in DNA assembly techniques (Bloom, 1977). It is data-intensive because a genomic file has millions or billions of K-mers. Each K-mer needs to be processed to verify whether it is encountered for the first time to include in the list of distinct K-mers; otherwise, increment the frequency of the K-mer. The processing of such a huge volume of data is compute-intensive. The hashtable-based techniques such as Jellyfish (Jellyfish, 1978) require a high memory footprint. Moreover, disk-based techniques such as KMC2 (Kamer, 2016) are inefficient as it takes huge time to process as compared to lightweight ones. There is a requirement for a data structure that is faster, has a low memory footprint, and is efficient. One such data structure is Bloom Filter which is a solution to all these issues.
There are many Bloom Filter-based K-mer counting techniques. However, the Bloom Filter is not responsible for counting K-mers. The Bloom Filter is used for membership checking or filtering of the first encounter K-mers. The techniques maintain a hashtable to keep the count of the K-mers. Hence, the techniques require more memory because they maintain two data structures. However, CBF and its variants can be implemented to keep the count of the K-mers. These Bloom Filters do not provide exact K-mer count; however, the main focus of the K-mer counting technique is the identification of distinct K-mers, and the classification of K-mers into trustworthy and erroneous K-mers rather than the exact count of K-mers. The count of K-mer merely helps in classification. Thus, a CBF or its variant is more efficient for a K-mer counting technique because a single data structure is capable of both storing K-mers and keeping the count of K-mers.
There is a lack of experimental benchmark that evaluates the performance of the K-mer counting techniques. The state-of-the-art research articles depict the experimental results in tabular form that does not adduce performance. The listing of distinct, trustworthy, and erroneous K-mers in a table does not convey any information regarding the deviation of the presented techniques from the correct values. Notably, only the RAM usage and insertion time are the measurements of performance. Regardless, accuracy is an important performance metric that is neglected due to the lack of an experimental benchmark.
### Contributions
We proposed a fast, efficient, and lightweight K-mer counting technique, called KmerCo, which implements a fast CBF variant called countBF (Kamer, 2016). Furthermore, we have proposed a novel benchmark performance metric for the K-mer counting technique. KmerCo quickly processes the K-mers while maintaining a low memory footprint. It processes millions of K-mers within a few seconds. KmerCo classifies the K-mers based on the user input threshold value. It provides countBF and three files, i.e., distinct, trustworthy,
and erroneous K-mers as output where countBF can be used for querying K-mers and their frequency. The distinct file contains the list of all distinct K-mers present in the input DNA file. The trustworthy file contains the list of all K-mers having a frequency more than the user input threshold value. The erroneous file contains the list of all K-mers having a frequency less than or equal to the user input threshold value.
Figure 2. Architecture of Standard Bloom Filter using three hash functions.
We have conducted extensive experiments on KmerCo using four real datasets of different organisms to measure its various performance parameters. We have trimmed the datasets to have different-sized datasets to observe the change in KmerCo performance with the change in dataset size. We have considered two different K lengths: 28 and 55 to notice the efficiency of KmerCo with different K-mer lengths. We have compared KmerCo with three K-mer counting techniques: Squeakr (a Bloom Filter-based technique), BFCounter (implements both Bloom Filter and hashtable), and Jellyfish (a hashtable-based technique). Our proposed K-mer counting benchmark performance metric is the counting of the K-mers using the Hadoop MapReduce program. The Hadoop provides zero error K-mer frequency counts and a list of distinct and trustworthy K-mers. These values help to determine the deviation of distinct and trustworthy K-mers generated by KmerCo and other state-of-the-art techniques. The performance of KmerCo was compared with other techniques based on data structure memory size, insertion time, number of insertions, inserted-to-ignored K-mer ratio, number of insertions per second, and trustworthy rate. KmerCo requires the lowest memory which is 7.08\(\times\), 115.25\(\times\), and 8889.08\(\times\) less memory compared to Squeakr, BFCounter, and Jellyfish, respectively, for the 28-mers Balaenoptera dataset. KmerCo took the highest insertion time in the case of 55-mers because it inserted all K-mers whereas other techniques inserted fewer K-mers. It is important to notice that KmerCo has a zero inserted-to-ignored ratio whereas other techniques have a non-zero ratio with a negative ratio in a few cases. Moreover, KmerCo achieves the second highest number of insertions per second after Jellyfish, which is due to Jellyfish's lowest insertion time. Notably, Jellyfish requires the highest memory footprint, i.e., it consumes a minimum of 2368 MB memory footprint in our experiment. Apart from all, KmerCo has a positive trustworthy rate whereas others have a negative one, which indicates that other techniques are classifying many trustworthy K-mers as erroneous.
Finally, summarising the contributions of this paper as follows:
* Proposed technique, KmerCo, is a fast, efficacious, and lightweight K-mer counting method.
* KmerCo implements a counting Bloom Filter which has a low memory footprint and false positive probability, called countBF.
* KmerCo provides a list of distinct K-mers using countBF.
* KmerCo classifies the K-mers based on a user-provided threshold value.
* KmerCo gives countBF and three files, i.e., distinct, trustworthy, and erroneous K-mers as output.
* Proposed a novel benchmark performance metric for K-mer counting techniques.
* KmerCo has high performance compared to other state-of-the-art K-mer counting techniques.
* KmerCo requires 7.08, 115.25, and 8889.08 times less memory compared to Squeakr, BFCounter, and Jellyfish for inserting the 28-mer Balaenoptera dataset. KmerCo has a zero inserted-to-ignored K-mer ratio. Moreover, KmerCo has a positive trustworthy rate in all datasets whereas other techniques have a negative trustworthy rate in the majority of datasets.
**Why do we use a Bloom Filter-based technique instead of a Hadoop-based technique for K-mer counting?**
The answer is as follows:
* Not an in-memory program: A Hadoop program is a heavyweight program that requires many resources such as CPUs, memory, HDD, etc. In contrast, Bloom Filter is designed for a low memory footprint. It is a lightweight data structure. Therefore, Bloom Filter is a suitable technique for K-mer counting.
* Processing overhead: Hadoop requires network communication between the \(Map\) tasks and \(Reduce\) tasks, which adds extra time to the entire process due to network latency. Moreover, Hadoop MapReduce is a distributed computing platform, and therefore it requires many computing resources in cluster mode.
## 2. Background
### countBF
The countBF (Kumar et al., 2017) is a CBF variant based on 2DBF (Squeakr et al., 2017) which is a two-dimensional integer array. Each slot of countBF is \(\beta\)-bit length which is partitioned into \(\eta\) number of counters of \(\alpha\)-bit length. The user can define the counter length. Figure 3 presents the architecture of countBF with an 8-bit counter. As shown in figure, \(C_{1}\), \(C_{2}\), \(C_{3}\), etc are counters. Based on the counter length some bits are unused. The countBF performs two operations: insertion and query. Algorithm 3 depicts the insertion operation where K-mer is the input item. Initially, all slots were set to 0. Let \(\mathcal{G}\) be an item
inserted into countBF as presented in Figure 3. The \(\mathcal{G}\) is hashed by \(k_{h}\) hash functions. Line 8 of the Algorithm 3 is used to obtain the slot and counter location. The countBF has two predefined masks: extract mask and reset mask. The extract mask extracts corresponding counter values using bit operations. The reset mask helps to reset the corresponding counter value to zero. Let \(\mathcal{M}_{l}^{e}\) be the extract mask and \(\mathcal{M}_{l}^{r}\) be the reset mask where \(e\) indicates the extract mask, \(r\) indicates the reset mask and \(l\) is the counter number. The extract mask for the \(1^{st}\) counter for an 8-bit counter (\(\mathcal{M}_{1}^{e}\)) is \(0x00000000000000FF\) and the reset mask for the \(1^{st}\) counter (\(\mathcal{M}_{1}^{r}\)) is \(0xFFFFFFFFFFFFFF00\). In Line 9 of the Algorithm 3, the AND operation is performed between the corresponding slot and the corresponding extract mask to obtain the corresponding counter value with the rest of the bits zero. The counter value is right-shifted to obtain only the corresponding counter value. Then the counter value is incremented, and to avoid overflow into the adjacent counter the new counter value is checked against the maximum value. In case the counter value reaches the maximum value, the insert operation is terminated. To avoid the overflow issue, applications having high frequency should consider longer counters in countBF. Line 16 of the Algorithm 3 indicates the left shifting of the new counter value to the required location. Then, an AND operation between the corresponding slot and reset mask removes the old counter value. The new counter value is reflected in the slot by performing an OR operation between the slot and the new counter value. This procedure is followed for \(k_{h}\) times. Algorithm 5 outlines the countBF query operation which returns the frequency of the queried item. The counter value is obtained following a similar procedure as in the insertion operation. If the counter value is zero, then the item is absent; otherwise, return the minimum counter value among the \(k_{h}\) slots.
Figure 3. Architecture of countBF with 8-bit counters
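The extract/reset-mask arithmetic described above can be illustrated with a small self-contained sketch. The Python snippet below is an assumption-laden illustration (not the countBF source): it packs eight 8-bit counters into one 64-bit cell and increments a chosen counter using exactly the AND, shift, and OR steps just described, terminating when the counter is saturated:

```python
ALPHA = 8                      # counter length alpha in bits
MAX_COUNT = (1 << ALPHA) - 1   # 255: largest value an 8-bit counter can hold

def increment_counter(cell: int, l: int) -> int:
    """Increment the l-th 8-bit counter packed inside a 64-bit cell (l = 0 is the lowest counter)."""
    extract_mask = 0xFF << (ALPHA * l)                 # e.g. 0x00000000000000FF for l = 0
    reset_mask = ~extract_mask & 0xFFFFFFFFFFFFFFFF    # e.g. 0xFFFFFFFFFFFFFF00 for l = 0
    counter = (cell & extract_mask) >> (ALPHA * l)     # isolate the current counter value
    if counter == MAX_COUNT:                           # saturate instead of overflowing a neighbour
        return cell
    return (cell & reset_mask) | ((counter + 1) << (ALPHA * l))

cell = 0
cell = increment_counter(cell, 0)   # counter 0 becomes 1
cell = increment_counter(cell, 3)   # counter 3 becomes 1
print(hex(cell))                    # 0x1000001
```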
Let \(X\) and \(Y\) be countBF dimensions which are prime numbers, \(n\) be the total number of input items, and FPP is the false positive probability.
The standard Bloom Filter size \(m=\frac{-n\ln(FPP)}{(\ln 2)^{2}}\)
However, \(m\) is a large size for countBF
\(\therefore\)\(v=\sqrt{\frac{m}{28}}\)
\(X\) is the closest prime number more than \(v\)
\(Y\) is the third consecutive prime number from \(X\) if all prime numbers are written sequentially.
Thus,
\[m_{countBF}=X\times Y\times\beta\ bits \tag{1}\]
The countBF size depends on the number of input items, i.e., \(n\).
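A sketch of this sizing calculation is given below (illustrative Python; the constant 28 is taken verbatim from the text, "closest prime above \(v\)" and "third consecutive prime from \(X\)" are interpreted loosely, and the example input is the 28-mer Loxodonta total from Table 3 with FPP \(=0.001\) as used in the experiments):

```python
import math

def next_prime(start: int) -> int:
    """Smallest prime strictly greater than start (simple trial division suffices here)."""
    candidate = start + 1
    while True:
        if candidate > 1 and all(candidate % d for d in range(2, math.isqrt(candidate) + 1)):
            return candidate
        candidate += 1

def countbf_size_bits(n: int, fpp: float, beta: int = 64) -> int:
    """Follow the sizing steps above to obtain m_countBF = X * Y * beta bits."""
    m = -n * math.log(fpp) / (math.log(2) ** 2)   # standard Bloom Filter size in bits
    v = math.sqrt(m / 28)                         # constant 28 taken verbatim from the text
    x = next_prime(math.ceil(v))                  # a prime just above v
    y = next_prime(next_prime(next_prime(x)))     # "third consecutive prime from X" (one reading)
    return x * y * beta

print(countbf_size_bits(n=41013058, fpp=0.001))   # 28-mer Loxodonta total from Table 3
```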
### Reverse complement
The complement of a DNA sequence is obtained by replacing each nucleotide with its complement nucleotide. The nucleotides A, C, G, and T are replaced with T, G, C, and A, respectively. The DNA sequence has another symbol, i.e., N. The N symbol is used to indicate any nucleotide but not a gap. The N remains the same in the complement sequence. The complement sequence is a 3' to 5' representation, but DNA sequences are represented from 5' to 3'. Hence, the complement sequence is reversed, which is called the reverse complement of the forward/original DNA sequence (Song et al., 2015). Example: consider GGCTCTAT as the original DNA sequence; its complement sequence is CCGAGATA and the reverse complement is ATAGAGCC.
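The complement/reverse-complement rule can be expressed in a couple of lines; the following is a plain Python sketch, independent of any particular K-mer counting implementation:

```python
COMPLEMENT = str.maketrans("ACGTN", "TGCAN")

def reverse_complement(seq: str) -> str:
    """Complement every base (N maps to itself) and reverse, turning 3'->5' into 5'->3'."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("GGCTCTAT"))   # ATAGAGCC, matching the example above
```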
### Canonical K-mer
It is the canonical representative chosen between the original K-mer (\(K\)-\(mer\)) and its reverse complement (\(K\)-\(mer_{RC}\)) (Brandt et al., 2007); conventionally the lexicographically smaller of the two, whereas KmerCo selects the one with the smaller hash value:
\[canonical\ K\text{-}mer=\begin{cases}K\text{-}mer&\text{if }h(K\text{-}mer)<h(K \text{-}mer_{RC})\\ K\text{-}mer_{RC}&\text{Otherwise}\end{cases}\]
Where \(h()\) is a hash function
**Why is canonical K-mer considered in K-mer counting?**
During sequencing of a double-stranded DNA molecule, the two strands are first separated and one of them is randomly selected to be read by the machine. In other words, a DNA sequence contains both original and reverse-complement K-mers at different locations. Both represent the same genomic information but differ in their nucleotide representation. Treating them as different K-mers induces errors in the frequency calculation of the K-mer counting process. Thus, K-mer counting techniques use the canonical K-mer.
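Combining the reverse complement with the hash-based rule above, canonical K-mer selection can be sketched as follows (Python's built-in hash is used as a stand-in for the murmur hash employed by KmerCo, and the reverse-complement helper is repeated here so the snippet is self-contained):

```python
def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

def canonical_kmer(kmer: str) -> str:
    """Return the representative with the smaller hash value, mirroring the rule above."""
    rc = reverse_complement(kmer)
    return kmer if hash(kmer) < hash(rc) else rc

print(canonical_kmer("GGCTCTAT"))   # prints either the K-mer or ATAGAGCC, depending on the hashes
```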
## 3. Methodology
KmerCo is a fast approximate K-mer counting technique capable of processing billions of K-mers quickly using a small-sized Bloom Filter. It implements a CBF variant called countBF (Karp and Karp, 1999). The countBF is a two-dimensional CBF where each cell consists of many counters. The countBF performs a few arithmetic operations for fast and high performance. KmerCo takes a DNA sequence as input and produces three files as output. All the K-mers present in the DNA sequence are inserted into the countBF, hence, it can be used to determine the presence of a K-mer in the DNA sequence. The three files contain distinct, trustworthy, and erroneous K-mers. The distinct file contains the list of all distinct K-mers present in the input DNA sequence. The trustworthy file contains the list of all distinct K-mers having a frequency of more than a threshold value, say \(\tau\). Whereas the erroneous file contains the list of all distinct K-mers having a frequency less than or equal to \(\tau\). This section explains the working of KmerCo in detail. Table 1 lists the term and notation used in the article for better understanding.
\begin{table}
\begin{tabular}{|c|p{142.3pt}|} \hline Terms/Notations & Description \\ \hline \hline \(k_{h}\) & Number of hash functions \\ \hline K-mer & _Read_ of a DNA sequence having length K \\ \hline \(K\) & Length of _read_ \\ \hline \(\tau\) & Threshold value for determining the trustworthy K-mers \\ \hline Distinct K-mer & Distinct K-mer present in the DNA sequence \\ \hline Trustworthy K-mer & K-mer having frequency more than \(\tau\) \\ \hline Erroneous K-mer & K-mer having frequency less than or equal to \(\tau\) \\ \hline \end{tabular}
\end{table}
Table 1. Term/Notations used in the article and its description
Figure 4 illustrates the working of KmerCo. KmerCo has two phases: insertion and classification. The responsibilities of the insertion phase are extraction of \(read\) from the DNA sequence file, insertion of K-mers into countBF, and distinct file. The responsibility of the classification phase is the classification of distinct K-mers into trustworthy and erroneous K-mers. In the insertion phase, first, the \(Read\) of K length is extracted from the DNA sequence and obtains its reverse complement. KmerCo is a canonical K-mer counting technique which means it considers the canonical K-mers. The \(Read\) and its reverse complement are hashed by a hash function, we have considered the murmur hash function [(1)]. The canonical K-mer is the \(Read\) or its reverse complement which has the smallest hash value. The canonical K-mer is queried to the countBF. If returns zero it means the K-mer is absent in countBF, then the K-mer is inserted into countBF. Moreover, the K-mer is encountered for the first time, hence, it is also inserted into the distinct file. If countBF returns a non-zero value then increment the counter of the K-mer. The insertion phase completes after processing all K-mers of the DNA sequence. The output of this phase is countBF and the distinct file which are forwarded to the classification phase. In the classification phase, the K-mers are read from the distinct file and queried to the countBF. The countBF returns the frequency of the K-mer. If the frequency is more than \(\tau\), then the K-mer is classified as a trustworthy K-mer and written to the trustworthy file. Otherwise, the K-mer is an erroneous K-mer and is written to the erroneous file. The trustworthy and erroneous file can be replaced with Bloom Filter, for instance, robustBF [(30)] for fast determination of the status of the K-mer, i.e., trustworthy or erroneous.
Algorithm 1 presents the insertion phase of KmerCo. It requires the \(DNAfile\) which is the DNA sequence file, K, and \(k_{h}\). The \(Read\) of K length is extracted from the \(DNAfile\). Both \(Read\) and its reverse complement are passed as an argument to the QcountBF-I() to determine whether it is present in the countBF or not. The value of the Result parameter of the QcountBF-I() indicates the canonical K-mer. If both frequency and result are zero, then \(Read\) is inserted into countBF and written to the distinct file. In case, the frequency is zero and the Result parameter is 1, then the canonical K-mer is the reverse complement of the \(Read\), and it is inserted into countBF and written to the distinct file. In case, the frequency is non-zero, then increment the counter of the canonical K-mer which is determined by the Result parameter.
```
1:Input
2:\(DNAfile\): A DNA sequence file
3:\(\mathbb{C}_{x,y}\): countBF
4:\(K\): Length of K-mer
5:\(k_{h}\): Number of hash functions
6:Output
7:Distinct file: A file containing all the distinct K-mers present in \(DNAfile\)
8:procedureInsertKmerCo(\(DNAfile\), \(\mathbb{C}_{x,y}\), \(K\), \(k_{h}\))
9:while\(Read\neq EOF\)do\(\triangleright\)\(EOF\): End of file
10:\(Read\leftarrow\) K-mer of length \(K\)
11:\(Read_{RC}\leftarrow\) Reverse complement of \(Read\)
12:ifQcountBF-I(\(\mathbb{C}_{x,y}\), \(K\), \(Read\), \(Read_{RC}\), \(k_{h}\), \(Result\)) = 0then
13:ifResult=0 then
14: Insert \(Read\) into Distinct file
15:\(\textsc{IscountBF}(\mathbb{C}_{x,y}\), \(Read\), \(k_{h}\))
16:else
17: Insert \(Read_{RC}\) into Distinct file
18:\(\textsc{IscountBF}(\mathbb{C}_{x,y}\), \(Read_{RC}\), \(k_{h}\))
19:endif
20:else
21:ifResult=0 then
22:\(\textsc{IscountBF}(\mathbb{C}_{x,y}\), \(Read\), \(k_{h}\))
23:else
24:\(\textsc{IscountBF}(\mathbb{C}_{x,y}\), \(Read_{RC}\), \(k_{h}\))
25:endif
26:endif
27:endwhile
28:endprocedure
```
**Algorithm 1** Insertion phase of KmerCo.
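Read alongside Algorithm 1, the following Python sketch mirrors the control flow of the insertion phase under simplifying stand-ins: an exact Counter replaces countBF, the built-in hash replaces the murmur hash, and a set replaces the distinct file.

```python
from collections import Counter

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

def insertion_phase(sequence: str, k: int):
    """Insert every canonical K-mer and record first occurrences as the 'distinct' set."""
    count_bf = Counter()   # stand-in for countBF (exact here, approximate in the real filter)
    distinct = set()       # stand-in for the distinct file
    for i in range(len(sequence) - k + 1):
        read = sequence[i:i + k]
        rc = reverse_complement(read)
        canonical = read if hash(read) < hash(rc) else rc
        if count_bf[canonical] == 0:
            distinct.add(canonical)      # first occurrence: a new distinct K-mer
        count_bf[canonical] += 1
    return count_bf, distinct

counts, distinct = insertion_phase("GGTCTCTATGGTCT", 3)
print(len(distinct), counts.most_common(3))
```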
Algorithm 2 illustrates the classification phase of KmerCo. Its inputs are \(\mathbb{C}_{x,y}\) the countBF containing all distinct K-mers present in the DNA sequence inserted into it, \(k_{h}\), and the distinct file containing the list of all distinct K-mers. The K-mers are read from the distinct file and queried to QcountBF-II(). The QcountBF-II() returns the frequency of the K-mer. If the K-mer has a frequency of more than \(\tau\) then it is a trustworthy K-mer; otherwise, erroneous K-mer. Based on the classification, the K-mer is written to the trustworthy or erroneous file.
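The classification phase itself reduces to one threshold comparison per distinct K-mer; a sketch under the same stand-in assumptions as the previous snippet (counts is any mapping from K-mer to frequency and tau is the user-provided threshold):

```python
def classify_kmers(counts, distinct, tau: int):
    """Split distinct K-mers into trustworthy (frequency > tau) and erroneous (frequency <= tau)."""
    trustworthy = {kmer for kmer in distinct if counts[kmer] > tau}
    erroneous = distinct - trustworthy
    return trustworthy, erroneous

counts = {"GGT": 7, "TCT": 2, "CTA": 1}
trustworthy, erroneous = classify_kmers(counts, set(counts), tau=5)
print(trustworthy, erroneous)   # trustworthy: {'GGT'}; erroneous: {'TCT', 'CTA'}
```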
Algorithm 3 demonstrates the insertion operation of countBF. The K-mer is hashed by a hash function. The modulo operation of hash value with the dimension of \(\mathbb{C}_{x,y}\) and the number of counters per cell provides the required cell and counter location. Then the whole cell value is extracted by performing the \(AND\) operation with the predefined extract mask, \(\mathcal{M}_{l}^{e}\). The right shift operation is performed to obtain only the required counter value. The counter value is incremented and verified if the value is the MAX value permitted in the counter. If yes, then the counter overflows; hence, the operation is terminated. Otherwise, the incremented value is left shifted and performs \(AND\) operation with the predefined reset mask to obtain the new counter value with respect to the cell location.
Then the new cell value is inserted into the countBF using \(OR\) operation with the old cell value. This whole procedure is repeated for \(k_{h}\) times.
Figure 4. Working of KmerCo
Notably, the insertion operation can cause a counter overflow in some cases. However, this does not affect the classification of K-mers into trustworthy and erroneous K-mers. It could affect the classification only if \(\tau=2^{\alpha}\), but we always choose \(\tau\ll 2^{\alpha}\).
We propose two variants of query operations for KmerCo to optimize the execution time: QcountBF-I() and QcountBF-II(). Algorithm 4 presents QcountBF-I(), which takes the countBF \(\mathbb{C}_{x,y}\), \(Read\), \(Read_{RC}\) (the reverse complement of \(Read\)), and \(k_{h}\) as inputs. First, it hashes both \(Read\) and \(Read_{RC}\) to determine the canonical K-mer. Among the two hash values, the K-mer having the lowest hash value is the canonical K-mer, which is queried to the countBF. If \(Read\) is selected then the Result parameter is set to zero; otherwise one. The \(\textsc{IscountBF}()\) operation uses the value of the Result parameter and directly inserts \(Read\) or \(Read_{RC}\) without determining the canonical K-mer again. A similar procedure as in \(\textsc{IscountBF}()\) is followed to obtain the counter value. If the counter value is zero, then it returns zero. Otherwise, the counter value is saved in an array. This procedure is followed \(k_{h}\) times. Then, QcountBF-I() returns the minimum value among the counter values. Algorithm 5 presents QcountBF-II(), which takes a single K-mer as input. It returns the counter value of the K-mer. In case \(k_{h}\) is greater than 1, QcountBF-II() returns the minimum among the counter values.
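The "minimum over \(k_{h}\) counters" rule shared by both query variants can be sketched as follows. This is an illustrative Python toy built on the same packed-cell layout as the earlier increment example; the tiny dimensions, the salted built-in hash, and the way the cell row, column, and counter indices are derived are simplifying assumptions rather than countBF's actual choices:

```python
ALPHA, ETA = 8, 8        # 8-bit counters, 8 counters per 64-bit cell
X, Y, K_H = 5, 7, 2      # toy dimensions and number of hash functions

def locate(kmer: str, salt: int):
    h = hash((salt, kmer))
    return h % X, (h // X) % Y, h % ETA          # cell row, cell column, counter index

def insert(cells, kmer: str) -> None:
    for salt in range(K_H):
        i, j, l = locate(kmer, salt)
        counter = (cells[i][j] >> (ALPHA * l)) & 0xFF
        cells[i][j] = (cells[i][j] & ~(0xFF << (ALPHA * l))) | ((counter + 1) << (ALPHA * l))

def query(cells, kmer: str) -> int:
    """Frequency estimate: the minimum counter over the K_H hashed locations (0 means absent)."""
    return min((cells[i][j] >> (ALPHA * l)) & 0xFF
               for i, j, l in (locate(kmer, salt) for salt in range(K_H)))

cells = [[0] * Y for _ in range(X)]
insert(cells, "ACG")
print(query(cells, "ACG"), query(cells, "TTT"))  # typically 1 0 (TTT may be a false positive)
```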
## 4. Experiments
We have conducted rigorous experiments to prove the supremacy of our proposed technique compared to other state-of-the-art K-mer counting techniques. KmerCo is a fast K-mer counting technique with the best performance. KmerCo takes less time for the construction of its countBF which is a classifier to classify the K-mers into trustworthy and erroneous K-mers. We have used four real datasets of different organisms of different sizes for the experiments. We have trimmed some DNA sequences from the real dataset to construct different size datasets. It helps in determining the performance of the KmerCo with big-sized datasets. We have considered two K values for experimentation: 28 and 55. This helps in showcasing the best performance of KmerCo in varying scenarios of different K length _Reads_. Other K-mer counting techniques evaluate their performance using the number of identified distinct and trustworthy K-mers which is merely a tabulation of information without any benchmark for comparison. In this paper, we have proposed a new benchmark for comparison to determine the accuracy and performance of the K-mer counting techniques. We used the Hadoop MapReduce program to determine the exact number of distinct, trustworthy, and erroneous K-mers of the datasets. We have compared KmerCo with Squeakr, BFCounter, and Jellyfish K-mer counting techniques. We have measured the performance of the techniques using data structure size, insertion time, number of insertions, inserted-to-ignored K-mer ratio, number of insertions/second, and trustworthy rate. This section provides detailed information regarding the dataset and experimentation. We have also conducted some experiments on the countBF of KmerCo to determine the counter length per cell and the number of input items for the construction of the Bloom Filter. This information is presented in the supplementary document. We have conducted the experiments on a low-cost Ubuntu desktop computer with 4GB RAM and a Core-i7 processor.
### Dataset Description
We have used four real datasets of different organisms, specifically mammals, in our experimentation. The organisms are Loxodonta cyclotis (common name: Elephant, downloaded from (Kumar et al., 2017)), Galeopterus variegatus (common name: Sunda flying lemur, downloaded from (Kumar et al., 2017)), Microcebus murinus (common name: grey mouse lemur, downloaded from (Kumar et al., 2017)), and Balaenoptera acutorostrata (common name: minke whale, downloaded from (Kumar et al., 2017)). Table 2 provides other details regarding the datasets. We have trimmed the real dataset to have four different size datasets. The aim is to observe the performance of KmerCo and other K-mer counting techniques in the case of different-size datasets. We have used the first word of the organism's scientific name in the rest of the article.
### Frequency counting using Hadoop MapReduce
The other K-mer counting techniques present only the number of distinct and trustworthy K-mers. However, these numbers do not provide any comparison of performance between the techniques. Hence, we have used the Hadoop MapReduce program for determining the exact number of distinct, trustworthy, and erroneous K-mers in the datasets. We generated K-mers of lengths 28 and 55 of the four real datasets separately in different files. These files are input into the Hadoop MapReduce program (Kumar et al., 2017). After execution of the program, the output file gives the list of distinct K-mers along with their frequency. Using the frequency, we determined the trustworthy and erroneous K-mers. This program gives no errors, thus, we can confidently use this information for comparing the performance between KmerCo and other techniques. Table 3 and Table 4 exhibit the total, distinct, and trustworthy 28-mers and 55-mers, respectively. The erroneous K-mers is the difference between the distinct and trustworthy K-mers, hence, it is excluded from the tables.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Species & Download link & SRA Accession & \#Sequences & File Size \\ \hline \hline Loxodonta cyclotis & (Kumar et al., 2017) & SRR12606482 & 550262 & 100 MB \\ \hline Galeopterus variegatus & (Kumar et al., 2017) & SRR3683902 & 1622500 & 200 MB \\ \hline Microcebus murinus & (Kumar et al., 2017) & SRR20563527 & 866250 & 300.1 MB \\ \hline Balaenoptera acutorostrata & (Kumar et al., 2017) & SRR17322416 & 748250 & 400.4 MB \\ \hline \end{tabular}
\end{table}
Table 2. Dataset Details. #Sequences: Number of sequences present initially in the downloaded file. File Size is in megabytes (MB) after trimming the dataset.
### Dataset Analysis
This section provides the analysis of the datasets based on the ratio of the number of distinct, trustworthy, and erroneous K-mers to total K-mers as illustrated by Figure 5.
\[Distinct\text{ K-mer }rate=\frac{|Distinct\text{ K-mers }|}{|Total\text{ K-mers }|}\]
\[Trustworthy\text{ K-mer }rate=\frac{|Trustworthy\text{ K-mers }|}{|Total\text{ K-mers }|}\]
\[Erroneous\text{ K-mer }rate=\frac{|Erroneous\text{ K-mers}|}{|Total\text{ K-mers }|}\]
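As a quick worked example using the Loxodonta 28-mer figures from Table 3 (41013058 total, 32512928 distinct, and 185770 trustworthy K-mers):

\[Distinct\text{ rate}\approx\frac{32512928}{41013058}\approx 0.793,\qquad Trustworthy\text{ rate}\approx\frac{185770}{41013058}\approx 0.0045,\qquad Erroneous\text{ rate}\approx\frac{32512928-185770}{41013058}\approx 0.788\]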
Figure 5a presents the Distinct 28-mer rate where the rate decreases with an increase in dataset size. The Loxodonta dataset has the highest rate whereas the Balaenoptera dataset has the least rate which is obvious as the dataset being the lowest and highest size, respectively. Figure 5b highlights the Trustworthy 28-mer rate where the rate increases with an increase in dataset size with the exception of the Microebus dataset. The Microebus has the highest ratio whereas Loxodonta has the least ratio. Obviously, the contrary pattern will be followed in the case of an Erroneous 28-mer rate as shown in Figure 5c. The Loxodonta dataset has the highest rate whereas the Microebus dataset has the least rate. Figure 5d, Figure 5e, and Figure 5f illustrate the Distinct, Trustworthy, and Erroneous 55-mer rate, respectively. The 55-mer dataset follows the same pattern as observed in the case of the 28-mer dataset. Overall, it is observed that the Loxodonta dataset has the highest distinct and erroneous K-mers with the least trustworthy K-mers. The Balaenoptera dataset has the lowest distinct K-mers in spite of having the highest dataset size. The Microebus dataset has the highest trustworthy and least erroneous K-mers.
### Experimental Results
This section provides details regarding the experimentation performed on KmerCo. The KmerCo is compared with other K-mer counting techniques: Squeakr, BFCounter, and Jellyfish. The Squeakr (Squeakr, 2017) (code downloaded from (Squeakr, 2017)) is a Bloom Filter-based technique, specifically, it implements Counting Quotient Filter (CQF) (Squeakr, 2017). The BFCounter (BFCounter, 2017) (code downloaded from (BFCounter, 2017)) implements both standard Bloom Filter and hashtable. Jellyfish2 (Squeakr, 2017) (code downloaded from (Squeakr, 2017)) is a hashtable-based K-mer counting technique.
The data structure size of KmerCo is the size of countBF. Using Equation 1, we get \(m_{countBF}=X\times Y\times\beta\) where \(\beta=64\) as each cell of countBF is unsigned long int, FPP=0.001 and \(n\)=|total K-mers|. In the case of Squeakr, it provides the log of estimated CQF size (say s) as output. Squeakr uses two CQFs: global and local. Therefore, \(m_{Squeakr}=2\times 2^{s}\). The memory size of Squeakr also depends on the total K-mers present in the dataset. It gives an option to provide the memory size, however, instead of using the provided value Squeakr calculates the memory size from the given file. Hence, it does not provide the freedom to use a large CQF to reduce the FPP. Furthermore, providing a higher memory size than the estimated value reduces the performance of Squeakr. It reduces the number of distinct, total, and trustworthy K-mers with increasing memory size which is observed during experimentation. On the contrary, if less memory size is provided than the estimated value then it causes segmentation faults. Overall, Squeakr gives optimal performance only in the case of the estimated memory size. BFCounter uses two data structures: standard Bloom Filter and hashtable. The Bloom Filter size is the total K-mers multiplied by the number of bits per K-mer. The default value of the number of bits per K-mer is 4 but the value can be provided by the user. We have provided 8 because the countBF counter length is 8 bits. The Bloom Filter size is \(8\times|\text{total K-mers}|\) bits. The size of the hashtable is the number of slots multiplied by the counter length per hashtable slot. The number of slots is provided as output. Thus, \(m_{BFCounter}=(|total\text{ K-mers}|)+(8\times|slots|)\) bytes. In the case of Jellyfish, one hashtable entry size is \(2K-d+r+1\) bits. The number of entries is \(2^{d}\) where \(d=[\lceil(\sqrt{|total\text{ K-mers}|})\rceil]\) and \(r\) is calculated from reprobe (for detail refer (Squeakr, 2017)). Each entry has a counter whose length is provided by the user, we have considered 8 bytes. Therefore, \(m_{Jellyfish}=2^{d}(2K-d+r+1)\) bytes.
Figure 6 depicts the comparison of KmerCo with other techniques based on data structure memory size for 28-mers (Figure 6a) and 55-mers (Figure 6b) using various datasets. The memory increases with an increase in dataset size as all techniques depend on the number of K-mers for the construction of their data structure. Jellyfish have the highest memory. KmerCo has more memory compared to Squeakr only in the case of 28-mer and 55-mer Loxodonta and Galeopterus datasets. Otherwise, it has less memory compared to other techniques for other datasets. In the case of the 28-mer Loxodonta dataset, KmerCo has 10 times more memory compared to Squeakr, and 28.92 and 2350 times less memory compared to BFCounter and Jellyfish, respectively. In the case of the 28-mer Balaenoptera dataset, KmerCo has 57.08, 115.25, and 8889.08 times less memory compared to Squeakr, BFCounter, and Jellyfish, respectively. Similarly, KmerCo is 18 times more compared to Squeakr,
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Dataset** & \#**55-mers** & \#**Distinct** & \#**Trustworthy** \\ \hline \hline Loxodonta & 41013031 & 40198219 & 24629 \\ \hline Galeopterus & 74824945 & 62985971 & 334397 \\ \hline Microecbus & 130803695 & 65779817 & 4633902 \\ \hline Balaenoptera & 163872444 & 79374926 & 2030213 \\ \hline \end{tabular}
\end{table}
Table 4. Details of 55-mers determined by the Hadoop MapReduce program. #55-mers: Total number of 55-mers, and #**Trustworthy**: Number of trustworthy 55-mers having frequency more than \(\tau=5\).
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Dataset** & \#**28-mers** & \#**Distinct** & \#**Trustworthy** \\ \hline \hline Loxodonta & 41013058 & 32512928 & 185770 \\ \hline Galeopterus & 74824972 & 42294113 & 581671 \\ \hline Microecbus & 130803722 & 43261507 & 6205318 \\ \hline Balaenoptera & 163872472 & 38775701 & 2701406 \\ \hline \end{tabular}
\end{table}
Table 3. Details of 28-mers determined by the Hadoop MapReduce program. #28-mers: Total number of 28-mers, #Distinct: Number of distinct 28-mers, and #Trustworthy: Number of trustworthy 28-mers having frequency more than \(\tau=5\).
and 23.79 and 2350 times less memory compared to BFCounter and Jellyfish, respectively for the 55-mer Loxodonta dataset. In the case of the 55-mer Balaenoptera dataset, KmerCo has 57.08, 105.92, and 8889.08 times less memory compared to Squeakr, BFCounter, and Jellyfish, respectively.
Figure 7 elucidates the comparison of KmerCo with other techniques using insertion time for 28-mers (Figure 7a) and 55-mers (Figure 7b) using various datasets. The insertion time of KmerCo excludes file writing time. In the case of 28-mers, BFCounter took the highest time whereas KmerCo took the second highest time. Whereas, KmerCo took higher time than other techniques in 55-mer datasets. For the 28-mer Loxodonta dataset, KmerCo took 6.518 sec and 1.96 sec less time compared to Squeakr and BFCounter, respectively, but 0.2 sec more than Jellyfish. For the 28-mer Balaenoptera dataset, KmerCo took 10.87 sec and 16.03 sec more time compared to Squeakr and Jellyfish, respectively, but 21.83 sec less than BFCounter. In the case of the 55-mer Loxodonta dataset, KmerCo took 4.49 sec, 7.67 sec, and 7.79 sec more than Squeakr, BFCounter, and Jellyfish, respectively; for the 55-mer Balaenoptera dataset, KmerCo took 40.6 sec, 16.26 sec, and 36.43 sec more than Squeakr, BFCounter, and Jellyfish, respectively. Overall, BFCounter and KmerCo took
the highest time in the case of 28-mer and 55-mer datasets, respectively. On the contrary, Jellyfish took the lowest time in all datasets. The reason for KmerCo taking higher time is that it inserts all K-mers whereas other techniques insert fewer K-mers as shown in Figure 8. Another reason is that others are compromising and occupying large memory to have low insertion time as adduced by Figure 6.
Figure 5. Analysis of various datasets based on (a) Distinct 28-mer rate, (b) Trustworthy 28-mer rate, (c) Erroneous 28-mer rate, (d) Distinct 55-mer rate, (e) Trustworthy 55-mer rate, and (f) Erroneous 55-mer rate.
Figure 6. Comparison of the memory footprint of the data structure in megabytes among KmerCo, Squeakr, BFCounter, and Jellyfish using (a) 28-mers and (b) 55-mers of various datasets. Lower is better.
Figure 7. Comparison of insertion time in seconds among KmerCo, Squeakr, BFCounter, and Jellyfish using (a) 28-mers and (b) 55-mers of various datasets. Lower is better.
Figure 8 interprets the number of insertions of 28-mers (Figure 7(a)) and 55-mers (Figure 7(b)) of the datasets by the techniques. The Hadoop (black bar) represents the total number of K-mers in the dataset as determined by the Hadoop MapReduce program. Hence, a bar closer to the Hadoop bar is better. KmerCo is the same as Hadoop in all datasets; whereas other techniques are less than Hadoop. Squeakr, BFCounter, and Jellyfish insert approximately 35 million, 15 million, and 15 million, respectively, fewer 28-mers in the Loxodonta dataset. As BFCounter, and Jellyfish are both hashtable-based K-mer counting techniques they insert the same number of K-mers. Similarly, Squeakr, BFCounter, and Jellyfish insert approximately 54 million, 44 million, and 44 million, respectively, fewer 28-mers in the Balaenoptera dataset. The Squeakr, BFCounter, and Jellyfish insert approximately 40 million, 30 million, and 30 million fewer 55-mers of the Loxodonta dataset. In the case of the Balaenoptera dataset, Squeakr, BFCounter, and Jellyfish insert 94 million, 88 million, and 88 million fewer 55-mers, respectively.
Figure 9 evidences the comparison of KmerCo with other techniques based on the inserted-to-ignored K-mer ratio of 28-mers (Figure 9a) and 55-mers (Figure 9b) of the datasets. The ratio is \(\frac{|\text{ignored K-mers}|}{|\text{inserted K-mers}|}\). A value greater than 1 means more K-mers are ignored than inserted. KmerCo inserts all K-mers in all datasets, hence, the ratio is 0 in all datasets. Squeakr ignored the highest number of 28-mers and 55-mers of the Loxodonta dataset compared to others. Squeakr has ignored more 28-mers of the Loxodonta and Galeopterus datasets and ignored more 55-mers of all datasets compared to the inserted K-mers. Considering BFCounter and Jellyfish, they have more ignored 55-mers in all datasets.
Figure 10 represents the number of insertions per second for 28-mers (Figure 9(a)) and 55-mers (Figure 9(b)) of the datasets. As presented in Figure 7 although KmerCo took more insertion time, it has more insertions per second than Squeakr and BFCounter. KmerCo has the highest number of insertions per second in the case of both 28-mer and 55-mer of the Loxodonta and Galeopterus dataset; while Squeakr has the lowest. Jellyfish have the highest number of insertions per second in both 28-mer and 55-mer of Microecbus and 55-mer of Balaenoptera dataset; albeit KmerCo is the second highest in these datasets. Overall, KmerCo has a good performance with respect to the number of insertions per second.
Figure 8. Comparison of the number of insertions among KmerCo, Squeakr, BFCounter, and Jellyfish using (a) 28-mers and (b) 55-mers of various datasets. Closer to the Hadoop bar (black bar) is better.
Figure 10. Comparison of the number of (a) 28-mers and (b) 55-mers inserted per second among KmerCo, Squeakr, BFCounter, and Jellyfish using various datasets. Higher is better. #Insertion: Number of insertions.
Figure 9. Comparison of inserted-to-ignored K-mer ratio among Squeakr, BFCounter, and Jellyfish using (a) 28-mers and (b) 55-mers of various datasets. KmerCo has zero ratios in all datasets. Positive and close to zero is better.
Figure 11 adduces the performance of KmerCo compared with other techniques based on the trustworthy rate of 28-mers (Figure 11a) and 55-mers (Figure 11b) of the datasets.
\[Trustworthy\ rate=\frac{|Trustworthy\text{ K-mers}|-|Hadoop\ Trustworthy\text{ K-mers}|}{|Total\text{ K-mers}|}\]
where \(|Trustworthy\ \text{K-mers}|\) is the trustworthy K-mers generated by the respective technique and \(|Hadoop\ Trustworthy\ \text{K-mers}|\) is the trustworthy K-mers generated by the Hadoop MapReduce program. In the figure, close to zero means the technique correctly identifies trustworthy K-mers. More than zero, i.e., a positive trustworthy rate means the technique identifies some erroneous K-mers as trustworthy K-mer. On the contrary, less than zero, i.e., a negative trustworthy rate means some trustworthy K-mers are identified as erroneous K-mers. A positive trustworthy rate is better because the DNA assembly process ignores erroneous K-mers, hence, identifying some trustworthy K-mers as erroneous leads to loss of information. BFCounter and Jellyfish have the same trustworthy rate because they have the same number of inserted and trustworthy K-mers. In all datasets, KmerCo has a positive trustworthy rate; whereas none of the other techniques has a positive trustworthy rate in any dataset. Squeakr, BFCounter, and Jellyfish have very near to zero trustworthy rates for both 28-mer and 55-mer Loxodonta datasets. Whereas KmerCo is very close to zero trustworthy rates for both 28-mer and 55-mer Balaenoptera datasets. KmerCo has the more positive trustworthy rate in the 55-mer dataset compared to 28-mer datasets.
## 5. Related Work
The K-mer counting techniques can be broadly classified into shared and distributed memory based. The shared memory tools are further classified into hashtable-based, disk-based, and Bloom Filter-based techniques. The hashtable-based techniques use hashtable(s) to keep the counts of K-mers, for example, Jellyfish (Kmer et al., 2017). The disk-based techniques have a low memory footprint but perform huge data processing using disk partitioning techniques, for example, KMC2 (KMC2, 2017), MSPKmerCounter (KMC2, 2018), and DSK (KMC2, 2018). The Bloom Filter-based technique uses Bloom Filter as the data structure for filtering or counting K-mers, for example, KCOSS (KMC2, 2018), Squeakr (KMC2, 2018), and SWAPCounter (KMC2, 2018). The distributed memory tools use many systems for distributed computing, for example, Kmerind (KMC2, 2018), and Bloomfish (KMC2, 2018). Our proposed KmerCo is a Bloom Filter-based K-mer counting technique. Hence, this section provides a review of only Bloom Filter-based K-mer counting techniques.
Jellyfish (KMC2, 2018) is a lightweight, multi-threaded lock-free hashtable-based K-mer counting technique. The hashtable keeps the count of the K-mers. The lock-free scheme enables parallel processing of K-mers. Each entry of the hashtable has two values: K-mer and its frequency. When the hashtable becomes saturated the data is written to disk in the form of K-mer and frequency record instead of increasing the hashtable size.
BFCounter (KMC, 2018) is both a Bloom Filter and hashtable-based K-mer counting technique. The hashtable keeps the count of the K-mers. It implements a standard Bloom Filter. The DNA sequence is traversed twice. In the first traversal, the K-mers are queried to the Bloom Filter, if absent it is inserted into Bloom Filter; otherwise into the hashtable. If K-mer is absent in the hashtable, insert it; otherwise, increment the counter. The second traversal is used to determine the exact frequency of the K-mers. Finally, delete all unique K-mers. The second traversal takes half the time of the first traversal as the hashtable lookup operation is faster than the insertion operation. Overall, BFCounter is slow for large datasets and the twice traversal increases the processing time.
Mcvicar _et. al._(Mcvicar et al., 2018) proposed a field programmable gate array (FPGA) and Bloom Filter-based K-mer counting technique. Bloom Filter generates small operations ideal for execution by FPGA. The CBF keeps the count of the K-mers. The technique uses 4 FPGAs and each has a Bloom Filter. The Bloom Filter implemented is CBF. The _Reads_ are parsed to generate K-mers which are saved in small-size blocks and saved in a queue. From the queue, the K-mers of the blocks are hashed by the Shift-And-Xor (SAX) (Mcvicar et al., 2018) hash function. The hash value is forwarded to a selector which selects the FPGA that processes the K-mers. Each FPGA also has a queue which stores the K-mers which are forwarded for processing by CBF. All CBF work in parallel. The performance of the technique is independent of K. In case one hybrid memory cube (HMC) receives many operations the performance reduces due to the lack of parallel processing. This technique is best applicable in a small DNA sequence.
Squeakr (KMC2, 2018) is an in-memory Bloom Filter-based K-mer counting technique which implements the CQF (KMC, 2018). The CQF keep the frequency of K-mers. It is a thread-based technique having two types of CQF: global and local. There is a single global CQF and each thread has a local CQF. The threads try to acquire a lock on the global CQF. The thread having the lock inserts the K-mer directly into the global CQF and others insert into the local CQF. When a local CQF becomes saturated, it is written to the global CQF. It performs a lock-free queue and implements thread-safe CQF to parallelise file parsing which enhances its ability to scale more threads. The CQF does not scale efficiently in the case of large highly skewed datasets because such datasets contain high-frequency hot-spots,
i.e., regions in the sequence having many repetitive K-mers, which cause excess lock contention among threads.
Figure 11. Comparison of trustworthy rate among KmerCo, Squeakr, BFCounter, and Jellyfish using (a) 28-mers and (b) 55-mers of various datasets. Positive and close to zero is better.
SWAPCounter (Srivastava et al., 2017) is a distributed Bloom Filter-based K-mer counting technique. A hashtable keeps the count of the K-mers. It implements CBF having each slot counter length \(log(\theta)\) where \(\theta\) is the maximum frequency among the K-mers. CBF performs the counting of K-mers. It has four components: parallel sequence I/O, K-mer extraction and distribution, K-mer filtering, and counting and statistics. The first three components are the most time-intensive tasks and the time is reduced by implementing pipelines. The first component, i.e., parallel sequence I/O, partitions the DNA sequence. In the K-mer extraction step, the DNA sequence is parsed to generate K-mers which are packed in a block. In K-mer distribution, first, the K-mer is hashed twice to determine a process and a memory location, respectively. The process is responsible for the processing of the K-mer and the K-mer is stored in the memory location. The process then queries the K-mer into the CBF. The trustworthy K-mers are stored in a K-mer container. After filtering, a hashtable is constructed and inserts trustworthy K-mers. The message-passing interface I/O module does caching and data pooling to achieve maximum I/O performance. The technique utilises non-blocking all-to-all communication for overlapping computation and communication to enhance performance and efficacy. The computation of K-mer extraction and distribution are performed in parallel. SWAPCounter reduces the computation time by performing data compression and instruction-level optimisation to the K-mer extraction phase. Maintaining many CBFs decreases memory footprint efficiency.
KCOSS (Kumar et al., 2017) is a Bloom Filter-based K-mer counting technique which implements a segmented standard Bloom. The counting of K-mer is performed by a hashtable. The segmented standard Bloom Filter is constructed by partitioning a single array into multiple Bloom Filters. The number of hash functions in KCOSS is \(k_{h}+1\) because the first hash function determines the corresponding Bloom Filter where the K-mer is inserted or queried. KCOSS uses a shared hashtable or Bloom Filter based on K. In case \(0<K\leq 14\), then KCOSS uses a shared hashtable; otherwise, i.e., \(K>14\) uses Bloom Filter. Along with Bloom Filter KCOSS implements two hashtables: fixed hashtable and elastic hashtable. The fixed hashtable is large in size whereas the elastic hashtable is small. The elastic hashtable is a cuckoo hashtable. The DNA sequence is partitioned into blocks and inserted into a lock-free queue. From the queue, K-mers are extracted and converted into binary format. In case \(k>14\), the K-mer is queried to Bloom Filter. If the K-mer is the first occurrence, then it is stored in an overlapping sequence set. If a non-first occurrence K-mer, then it is stored in a hashtable and elastic cuckoo hashtable. When Bloom Filter returns true, then K-mer is checked in the fixed hashtable, if present then increment counter. Otherwise, check the K-mer in the cuckoo hashtable. Finally, all distinct K-mers, i.e., overlapping sequence set and the hashtables write the K-mer with its frequency to a file. The FPP of a segmented Bloom Filter is the same as a standard Bloom Filter. KCOSS maintains many data structures; which increases the overall memory footprint. The size of the overlapping sequence set depends on the DNA sequence size, number of distinct K-mers, and number of unique K-mers. KCOSS implements shared hashtable or Bloom Filter based on K value because it believes a lower K value has fewer K-mers; however, the contrary is true, with the increase in K value the number of K-mers decreases.
## 6. Conclusion
In this paper, we proposed KmerCo, a new efficacious and potent K-mer counting technique. It implements a low-memory-footprint counting Bloom Filter, called countBF, for low FPP and high efficiency. KmerCo has two phases: insertion and classification. In the insertion phase, countBF is constructed, i.e., K-mers are inserted into countBF and distinct K-mers are recognized. In the classification phase, the distinct K-mers are queried against countBF to classify them as trustworthy or erroneous, based on a user-provided threshold value. The output of KmerCo is countBF with the inserted K-mers, along with three files: distinct, trustworthy, and erroneous K-mers.
We conducted a myriad of experiments to demonstrate the dominance of KmerCo over other state-of-the-art techniques in terms of performance. The experiments were conducted using DNA sequence datasets of four different organisms, specifically mammals. Each dataset was cropped to construct four datasets of different sizes to showcase performance as the dataset size increases. KmerCo was compared with Squeakr, BFCounter, and Jellyfish. KmerCo has the smallest memory footprint because a single counting Bloom Filter, i.e., countBF, is sufficient for better and faster operation, whereas the other techniques implement multiple data structures, except for Jellyfish; moreover, Jellyfish requires a large hashtable for good performance. The state-of-the-art techniques sacrifice memory to lessen the insertion time; on the contrary, KmerCo maintains both the lowest memory and low insertion time. KmerCo requires 57.08, 105.92, and 8889.08 times less memory compared to Squeakr, BFCounter, and Jellyfish, respectively, in the 55-mers Balaenoptera dataset, whereas KmerCo took 40.6 sec, 16.26 sec, and 36.43 sec more than Squeakr, BFCounter, and Jellyfish, respectively. Another contributing factor to KmerCo's higher insertion time is that it inserts all K-mers while the others insert fewer K-mers: in the 55-mers Balaenoptera dataset, Squeakr, BFCounter, and Jellyfish inserted approximately 94 million, 88 million, and 88 million fewer 55-mers. This leads to another comparison based on the ratio of the number of K-mers ignored to the number of K-mers inserted: KmerCo has a zero ratio, whereas the others ignored more 55-mers in all datasets. KmerCo is second in the number of insertions per second after Jellyfish, as Jellyfish has the lowest insertion time but requires the largest memory compared to the others. Another comparison parameter is the trustworthy rate, which indicates the deviation of the trustworthy K-mers identified by the state-of-the-art techniques from the trustworthy K-mers recognized by Hadoop. KmerCo has a positive trustworthy rate whereas the others have a negative rate in all datasets. A positive trustworthy rate indicates that KmerCo classifies some erroneous K-mers as trustworthy, while a negative trustworthy rate indicates that the K-mer counting technique fails to recognize some K-mers as trustworthy. The latter is an issue because only trustworthy K-mers are considered in further processing by DNA assembly. Overall, the rigorous experiments and experimental analysis prove the dominance of KmerCo over the other state-of-the-art techniques. |
2304.10310 | LA3: Efficient Label-Aware AutoAugment | Automated augmentation is an emerging and effective technique to search for
data augmentation policies to improve generalizability of deep neural network
training. Most existing work focuses on constructing a unified policy
applicable to all data samples in a given dataset, without considering sample
or class variations. In this paper, we propose a novel two-stage data
augmentation algorithm, named Label-Aware AutoAugment (LA3), which takes
advantage of the label information, and learns augmentation policies separately
for samples of different labels. LA3 consists of two learning stages, where in
the first stage, individual augmentation methods are evaluated and ranked for
each label via Bayesian Optimization aided by a neural predictor, which allows
us to identify effective augmentation techniques for each label under a low
search cost. And in the second stage, a composite augmentation policy is
constructed out of a selection of effective as well as complementary
augmentations, which produces significant performance boost and can be easily
deployed in typical model training. Extensive experiments demonstrate that LA3
achieves excellent performance matching or surpassing existing methods on
CIFAR-10 and CIFAR-100, and achieves a new state-of-the-art ImageNet accuracy
of 79.97% on ResNet-50 among auto-augmentation methods, while maintaining a low
computational cost. | Mingjun Zhao, Shan Lu, Zixuan Wang, Xiaoli Wang, Di Niu | 2023-04-20T13:42:18Z | http://arxiv.org/abs/2304.10310v1 | # LA3: Efficient Label-Aware AutoAugment
###### Abstract
Automated augmentation is an emerging and effective technique to search for data augmentation policies to improve generalizability of deep neural network training. Most existing work focuses on constructing a unified policy applicable to all data samples in a given dataset, without considering sample or class variations. In this paper, we propose a novel two-stage data augmentation algorithm, named _Label-Aware AutoAugment (LA3)_, which takes advantage of the label information, and learns augmentation policies separately for samples of different labels. _LA3_ consists of two learning stages, where in the first stage, individual augmentation methods are evaluated and ranked for each label via Bayesian Optimization aided by a neural predictor, which allows us to identify effective augmentation techniques for each label under a low search cost. And in the second stage, a composite augmentation policy is constructed out of a selection of effective as well as complementary augmentations, which produces significant performance boost and can be easily deployed in typical model training. Extensive experiments demonstrate that _LA3_ achieves excellent performance matching or surpassing existing methods on CIFAR-10 and CIFAR-100, and achieves a new state-of-the-art ImageNet accuracy of 79.97% on ResNet-50 among auto-augmentation methods, while maintaining a low computational cost.
## 1 Introduction
Data augmentation has proven to be an effective regularization technique that can improve the generalization of deep neural networks by adding modified copies of existing samples to increase the volume and diversity of data used to train these networks. Traditional ways of applying data augmentation in computer vision include using single augmentation techniques, such as rotation, flipping and cutout [4], adopting randomly selected augmentations [2], and employing a manually crafted augmentation policy consisting of a combination of transformations. However, these methods either do not reach the full potential of data augmentation, or require human expertise in policy design for specific tasks.
Recently, automated learning of augmentation policies has become popular to surpass the limitation of manual design, achieving remarkable advances in both the performance and generalization ability on image classification tasks. Different search algorithms such as reinforcement learning [1], population-based training [9], and Bayesian Optimization [16] have been investigated to search
effective augmentation policies from data to be used to train target networks. Dynamic augmentation strategies, e.g., PBA [9], AdvAA [25], are also proposed to learn non-stationary policies that vary during model training.
However, most existing methods focus on learning a single policy that is applied to all samples in the dataset equally, without considering variations between samples, classes or labels, which may lead to sub-optimal solutions. Figure 1 demonstrates the effects of different augmentation operations on different classes of samples in CIFAR-10, from which we can see that the effectiveness of augmentations is different on each class. For example, when the operation "Posterize" is applied in training, the test accuracy of "dog" class increases by 3.8%, whereas the test accuracy of "cat" drops significantly by 5%. It is possible that a certain augmentation used in training has completely different impacts on different labels. This observation implies the limitation of label or sample-invariant dataset-level augmentation policies. MetaAugment [26] proposes to learn a sample-aware augmentation policy by solving a sample re-weighting problem. It uses an augmentation policy network to take an augmentation operation and the corresponding augmented image as inputs, and outputs a weight to adjust the augmented image loss computed by the task network. Despite the benefit of a fine-grained sample-dependent policy, MetaAugment is time-consuming and couples policy network learning with target model training, which may not be convenient in some production scenarios that require functional decomposition.
In this paper, we propose an efficient data augmentation strategy named _Label-Aware AutoAugment (LA3)_, which produces label-aware augmentation policies to overcome the limitation of sample-invariant augmentation while still being computationally efficient as compared to sample-aware or dynamic augmentation strategies. _LA3_ achieves competitive performance matching or out
Figure 1: The effects of different augmentation operations on each class in CIFAR-10, demonstrated by the test accuracy change in each class after each single augmentation is applied to training WRN-40-2.
performing a wide range of existing static and dynamic auto-augment methods, and attains the highest ImageNet accuracy on ResNet-50 among all existing augmentation methods including dynamic ones. In the meantime, _LA3_ is also a simple scheme which separates augmentation policy search from target network model training, and produces stationary augmentation policies that can easily be applied to enhance deep learning with minimum perturbation to the original target model training routine.
_LA3_ adopts a two-stage design: the first stage explores a search space of combinations of operations and evaluates the effectiveness of promising augmentation operations for each class, while the second stage forms a composite policy to be used in target model training.
In the first stage of _LA3_, a neural predictor is designed to estimate the effectiveness of operation combinations on each class and is trained online through density matching as the exploration process iterates. We use Bayesian Optimization with a predictor-based sampling strategy to guide search into meaningful regions, which greatly improves the efficiency and reduces search cost.
In the second stage, rather than only selecting top augmentation operations, we introduce a policy construction method based on the minimum-redundancy maximum-reward (mRMR) principle [17] to enhance the performance of the composite augmentation policy when applied to the target model. This is in contrast to most prior methods [1], [16], which simply put together best performing augmentations in evaluation, ignoring their complementary effects.
Extensive experiments show that using the same set of augmentation operations, the proposed _LA3_ achieves excellent performance outperforming other low-cost static auto-augmentation strategies, including FastAA and DADA, on CIFAR-10 and CIFAR-100, in terms of the accuracy. On ImageNet, _LA3_, using stationary policies, achieves a new state-of-the-art top-1 accuracy of 79.97% on ResNet-50, which outperforms prior auto-augmentation methods including dynamic strategies such as AdvAA and MetaAug, while being \(2\times\) and \(3\times\) more computationally efficient, respectively.
## 2 Related Work
Data augmentation is a popular technique to alleviate overfitting and improve the generalization of neural network models by enlarging the volume and diversity of training data. Various data augmentation methods have been designed, such as Cutout [4], Mixup [24], CutMix [22], etc. Recently, automated augmentation policy search has become popular, replacing human-crafted policies by learning policies directly from data. AutoAugment [1] adopts a reinforcement learning framework that alternately evaluates a child model and trains an RNN controller to sample child models to find effective augmentation policies. Although AutoAugment significantly improves the performance, its search process can take thousands of GPU hours, which greatly limits its usability.
Multiple strategies are proposed to lower the search cost. Fast AutoAugment [16] proposes a density matching scheme to avoid training and evaluating child
models, and uses Bayesian Optimization as the search algorithm. Weight-sharing AutoAugment [18] adopts weight-sharing settings and harvests rewards by fine-tuning child models on a shared pre-trained target network. Faster AutoAugment [7] further reduces the search time by making the search of policies end-to-end differentiable through gradient approximations and targeting to reduce the distance between the original and augmented image distributions. Similarly, DADA [15] relaxes the discrete policy selection to a differentiable optimization problem via Gumbel-Softmax [12] and introduces an unbiased gradient estimator.
Instead of producing stationary augmentation policies that are consistent during the target network training, PBA [9] learns a non-stationary augmentation schedule, inspired by population based training [11], by modeling the augmentation policy search task as a process of hyperparameter schedule learning. AdvAA [25] adopts an adversarial framework that jointly optimizes target network training and augmentation search to find harder augmentation policies that produce the maximum training loss. However, AdvAA must rely on the batch augment trick, where each training batch is enlarged by multiple times with augmented copies, which significantly increases its computational cost. In general, one concern of these dynamic strategies is that they intervene the standard model training procedure, causing extra deployment overhead and may not be applicable in many production environments.
While most previous studies focus on learning augmentation policies for the entire dataset, MetaAugment [26] proposes to learn sample-aware augmentation policies during model training by formulating the policy search as a sample re-weighting problem, and constructing a policy network to learn the weights of specific augmented images by minimizing the validation loss via meta learning. Despite its benefits, MetaAugment is computationally expensive, requiring three forward and backward passes of the target network in each iteration. LB-Aug [19] is a concurrent work that also searches policies dependent on labels, but focuses on a different task under multi-label scenarios, where each sample has multiple labels rather than a single classification label. LB-Aug uses an actor-critic reinforcement learning framework and a policy gradient approach for policy learning. Despite the benefits from label-based policies, LB-Aug has potential stability issues due to the use of reinforcement learning, which is generally harder and more computationally costly to train. In fact, the search cost of LB-Aug is not reported. In contrast, _LA3_ targets the classical single-label image classification tasks, e.g., on CIFAR-10/100 and ImageNet benchmarks, on which most other auto-augmentation methods are evaluated. It adopts Bayesian Optimization coupled with a neural predictor to sample and search for label-dependent augmentation policies efficiently. In addition, a policy construction stage is proposed to further form a more effective composite policy for target network training.
## 3 Methodology
In this section, we first review the task of conventional augmentation search and introduce the formulation of the proposed label-aware augmentation search
task. Then we describe the two-stage design of _LA3_, and present the algorithm in detail.
### Conventional Augmentation Search
Given an image recognition task with a training dataset \(D^{tr}=\{(x_{i},y_{i})\}_{i=1}^{|D^{tr}|}\), with \(x_{i}\) and \(y_{i}\) representing the image and label respectively, augmented samples \(\mathcal{T}(x_{i})\) are derived by applying augmentation policy \(\mathcal{T}\) to sample \(x_{i}\). Usually, the policy \(\mathcal{T}\) is composed of multiple sub-policies \(\tau\), and each sub-policy is made up of \(K\) augmentation operations \(O\), optionally with their corresponding probabilities and magnitudes, which are adopted in the original design of AutoAugment [1], but not included in some of the recent methods such as Weight-sharing AutoAugment [18] and MetaAugment [26].
Conventional augmentation search methods focus on the task whose goal is to construct the optimal policy \(\mathcal{T}^{*}\) from given augmentations so that the performance \(\mathcal{R}\) of the task network \(\theta_{\mathcal{T}}\) on the validation dataset \(D^{val}\) is maximized:
\[\begin{split}\mathcal{T}^{*}&=\operatorname*{arg \,max}_{\mathcal{T}}\mathcal{R}(\theta_{\mathcal{T}}|D^{val}),\\ \text{where}&\theta_{\mathcal{T}}&= \operatorname*{arg\,min}_{\theta_{\mathcal{T}}}\frac{1}{|D^{tr}|}\sum_{i=1}^{| D^{tr}|}\mathcal{L}_{\theta}(\mathcal{T}(x_{i}),y_{i}),\end{split} \tag{1}\]
and \(\mathcal{L}_{\theta}\) is the loss function of target network \(\theta\).
### Label-Aware Augmentation Search
Though learning a dataset-level policy achieves considerable improvements, it is unlikely the optimal solution due to the lack of consideration of sample variations and utilization of label information.
In this paper, we aim to learn a label-aware data augmentation policy \(\mathcal{T}^{*}=\{\mathcal{T}^{*}_{y_{0}},\cdots,\mathcal{T}^{*}_{y_{n}}\}\), where for samples of each label \(y_{j}\), an individual policy \(\mathcal{T}_{y_{j}}\) is learned by maximizing the label-specific performance \(\mathcal{R}_{y_{j}}\) of label \(y_{j}\):
\[\begin{split}\mathcal{T}^{*}_{y_{j}}&= \operatorname*{arg\,max}_{\mathcal{T}_{y_{j}}}\mathcal{R}_{y_{j}}(\theta_{ \mathcal{T}}|D^{val}),\\ \text{where}&\theta_{\mathcal{T}}&= \operatorname*{arg\,min}_{\theta_{\mathcal{T}}}\frac{1}{|D^{tr}|}\sum_{i=1}^{| D^{tr}|}\mathcal{L}_{\theta}(\mathcal{T}_{y_{i}}(x_{i}),y_{i}).\end{split} \tag{2}\]
Similar to conventional augmentation, in our label-aware setting, we define that each policy for a label is composed of multiple augmentation triples, each consisting of three augmentation operations. The magnitude of each augmentation operation is chosen randomly from ranges defined in AutoAugment [1], and is excluded from the search space in order to introduce randomness and diversity into the policy, and allocate more computational resources to assessing the fitness of operations to different classes of samples.
In this paper, we propose a label-aware augmentation policy search algorithm called _LA3_, composed of two stages as presented in Figure 2. The first augmentation exploration stage aims to search for effective augmentation triples with density matching, and train a neural predictor to provide evaluations on all seen and unseen augmentation triples in the search space. And the goal of the second policy construction stage is to build a composite policy for each label based on the evaluation results from stage 1 by selecting a subset of complementary augmentation triples based on the minimum-redundancy maximum-reward principle.
### Stage 1: Augmentation Exploration
**Density Matching** is an efficient mechanism originally proposed by Fast AutoAugment [16] to simplify the search process for effective augmentations, since
Figure 2: An overview of the proposed _LA3_ method. It contains two stages, where in the first stage, augmentation triples are individually evaluated for each label via Bayesian Optimization with the help of an label-aware neural predictor. In the second stage, the best combination of complementary augmentation triples is selected based on the minimum-redundancy maximum-reward principle.
the problem defined by Equation (1) and Equation (2) is a bi-level optimization problem, and is extremely hard to solve directly. It calculates the reward of each augmentation triple without the need of repeatedly training the target network. Specifically, given a model \(\theta\) pre-trained on the training set \(D^{tr}\) and a validation set \(D^{val}\), the performance of a certain augmentation triple \(\tau\) can be evaluated by approximately measuring the distance between the density of \(D^{tr}\) and density of augmented validation set \(\tau(D^{val})\) with the model performance \(\mathcal{R}(\theta|\tau(D^{val}))\). And the reward \(r\) is measured by the performance difference caused by applying the augmentation triple \(\tau\):
\[r_{\tau}=\mathcal{R}(\theta|\tau(D^{val}))-\mathcal{R}(\theta|D^{val}). \tag{3}\]
Similarly, in our label-aware setting, the reward \(r\) for a certain augmentation triple \(\tau_{y}\) at label \(y\) is given by
\[r_{\tau,y}=\mathcal{R}_{y}(\theta|\tau_{y}(D^{val}))-\mathcal{R}_{y}(\theta|D^ {val}). \tag{4}\]
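As a concrete illustration, the label-specific reward of Equation (4) can be estimated from a pre-trained model as sketched below. The accuracy metric, the tensor interface, and the helper `apply_triple` (which applies a sampled augmentation triple to a batch of images) are assumptions made for the sketch, not part of the released implementation.

```python
import torch


@torch.no_grad()
def label_accuracy(model, images, labels, target_label):
    """Accuracy of `model`, restricted to validation samples of one label."""
    mask = labels == target_label
    if mask.sum() == 0:
        return 0.0
    preds = model(images[mask]).argmax(dim=1)
    return (preds == labels[mask]).float().mean().item()


@torch.no_grad()
def density_matching_reward(model, images, labels, target_label, apply_triple):
    """Equation (4): the reward of an augmentation triple for one label is the
    change in label-specific accuracy after augmenting the validation split."""
    acc_augmented = label_accuracy(model, apply_triple(images), labels, target_label)
    acc_clean = label_accuracy(model, images, labels, target_label)
    return acc_augmented - acc_clean
```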
**Bayesian Optimization with a Neural Predictor** is a widely adopted framework in many applications, such as neural architecture search [21, 20], to find the optimal solution within a search space. In the standard BO setting, over a sequence of iterations, the results from previous iterations are used to model a posterior distribution that guides the candidate selection of the next iteration. A neural predictor is a neural network that is repeatedly trained on the history of evaluated candidates and provides evaluations on unseen candidates, which increases the utilization efficiency of history evaluations and notably accelerates the search process.
In our _LA3_ algorithm, we incorporate a label-aware neural predictor \(f(r|\tau,y)\) which takes in an augmentation triple \(\tau\) and the label \(y\) it is evaluated on, and predicts the reward \(r\). In each iteration, the sampled augmentation triples for different labels are evaluated according to Equation (4), and together with the previous evaluated augmentation triples, are passed to train a new predictor.
Next, we select 100 candidate augmentation triples at the balance of exploration and exploitation, based on the following selection procedure: 1) Generate 10 new candidates by randomly mutating 1 or 2 operations in the chosen augmentation triples of the previous iteration; 2) Randomly sample 50 candidates from all unexplored augmentation triples; 3) Sample 40 candidates from the explored augmentation triples according to their real reward values. Then, for each label \(y\), we choose the augmentation triple \(\tau\) with the highest predicted reward \(\tilde{r}_{\tau,y}\) for evaluation.
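A compact sketch of this candidate-selection step is given below. The mutation details, the positive clipping of rewards used as sampling weights, and the `predictor(triple, label)` interface are illustrative assumptions rather than the exact implementation.

```python
import random


def mutate(triple, operations):
    """Randomly replace 1 or 2 of the three operations in a triple."""
    triple = list(triple)
    for idx in random.sample(range(3), random.choice((1, 2))):
        triple[idx] = random.choice(operations)
    return tuple(triple)


def select_candidates(prev_chosen, explored, reward_of, operations, unexplored):
    """Build the pool of 100 candidates described in the text: 10 mutations of
    previously chosen triples, 50 unexplored triples, and 40 explored triples
    sampled according to their observed rewards (clipped to be positive)."""
    pool = [mutate(random.choice(prev_chosen), operations) for _ in range(10)]
    pool += random.sample(unexplored, min(50, len(unexplored)))
    weights = [max(reward_of[t], 1e-6) for t in explored]
    pool += random.choices(explored, weights=weights, k=40)
    return pool


def choose_for_label(pool, label, predictor):
    """Pick the candidate triple with the highest predicted reward for this label."""
    return max(pool, key=lambda triple: predictor(triple, label))
```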
**Overall workflow** of the first stage is summarized in Algorithm 1. To begin with, a warm-up phase of \(T_{0}\) iterations is incorporated to randomly explore the search space and to retrieve the initial training data for learning a label-aware neural predictor \(f(r|\tau,y)\). Then, for the following \(T-T_{0}\) iterations, the search phase is adopted. In each iteration, we first train a neural predictor from scratch with data collected from previous iterations. Then, for each label, we apply the aforementioned selection procedure to select a set of candidate augmentation
triples, and use the trained predictor to choose the augmentation triple for evaluation. After enough training data is collected, a well-trained label-aware neural predictor can be derived to provide accurate evaluations on all augmentation triples for different labels.
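Putting the pieces together, the first stage can be organized as in the sketch below, which reuses `select_candidates` and `choose_for_label` from the previous sketch; `fit_predictor` and `evaluate_reward` are placeholder callables (assumptions) standing in for neural-predictor training and the density-matching evaluation of Equation (4).

```python
import random


def stage1_search(labels, operations, all_triples, fit_predictor, evaluate_reward,
                  T=500, T0=100):
    """Stage 1 sketch (cf. Algorithm 1): random warm-up for T0 iterations,
    followed by predictor-guided search for the remaining iterations."""
    history = []                                   # (triple, label, reward) tuples
    reward_of = {}                                 # last observed reward per triple
    prev_pick = {y: random.choice(all_triples) for y in labels}

    for t in range(T):
        if t < T0:
            # Warm-up phase: explore the search space uniformly at random.
            picks = {y: random.choice(all_triples) for y in labels}
        else:
            predictor = fit_predictor(history)     # retrained from scratch each iteration
            explored = list(reward_of)
            unexplored = [tr for tr in all_triples if tr not in reward_of]
            picks = {y: choose_for_label(
                select_candidates([prev_pick[y]], explored, reward_of,
                                  operations, unexplored), y, predictor)
                for y in labels}
        for y, triple in picks.items():
            r = evaluate_reward(triple, y)         # density-matching reward, Eq. (4)
            history.append((triple, y, r))
            reward_of[triple] = r
            prev_pick[y] = triple

    return fit_predictor(history)                  # final, well-trained predictor
```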
### Stage 2: Policy Construction
Policy construction is the process of mapping the evaluation results of stage 1 to the final augmentation policy for training target networks. It is needed because augmentation policies are usually searched on light-weight proxy tasks such as density matching, but are evaluated on the complete image classification tasks. Even methods that search on complete tasks, such as AutoAugment [1], still naively concatenate multiple searched policies into a final policy. However, the concatenated policies usually share a large portion of overlapping transformations, resulting in a high degree of redundancy.
In this paper, we propose an effective policy construction method to iteratively select candidate augmentation triples for the final policy, based on the mutual information criteria of minimum-redundancy maximum-relevance (mRMR) [17]. Specifically, in _LA3_, the relevance metric is defined as the predicted reward \(\tilde{r}\) as it provides a direct evaluation on the performance of a certain augmentation triple. And the redundancy of an augmentation triple \(\tau\) is defined as the average number of intersecting operations between it and the already selected augmentation triples \(\mathcal{T}_{s}\). Formally, in each iteration of policy construction, we
define the score \(v(\tau,y)\) of each unselected augmentation triple \(\tau\) at label \(y\) as
\[v(\tau,y)=\tilde{r}_{\tau,y}-\alpha\times\overline{\tau}\times\frac{1}{|\mathcal{ T}_{s}|}\sum_{\tau_{s}\in\mathcal{T}_{s}}|\tau\cap\tau_{s}|, \tag{5}\]
where \(|\tau\cap\tau_{s}|\) refers to the number of overlapped operations between \(\tau\) and \(\tau_{s}\), \(\overline{\tau}\) is the average predicted reward of all augmentation triples in search space and is used to scale the redundancy, and \(\alpha\) is a hyper-parameter adjusting the weight between the reward value and the redundancy value.
Algorithm 2 illustrates the overall process of the policy construction stage where the goal is to find a label-aware policy containing a collection of augmentation triples that maximizes the rewards while keeping a low degree of redundancy. Specifically, for each label \(y_{i}\), we retrieve the predicted reward \(\tilde{r}_{\tau,y_{i}}\) for each augmentation triple \(\tau\) in the search space \(A\). Afterwards, a label-specific policy \(\mathcal{T}_{y_{i}}\) is constructed iteratively by calculating the score \(v(\tau,y_{i})\) of unselected augmentation triples with Equation (5) and add the augmentation triple with the highest score to the policy until the required number of candidates \(N_{\text{cand}}\) is met. Eventually, the label-aware policy \(\mathcal{T}^{*}\) is built with each label \(y_{i}\) corresponding to a label-specific policy \(\mathcal{T}_{y_{i}}\).
```
Input: Well-trained predictor \(f^{T}(r|\tau,y)\), search space \(A\), number of candidates \(N_{\text{cand}}\)
Output: Label-aware policy \(\mathcal{T}^{*}\)
for \(y_{i}=y_{0},\cdots,y_{n}\) do
    for \(\tau\in A\) do
        predict the reward \(\tilde{r}_{\tau,y_{i}}=f^{T}(\tau,y_{i})\)
    initialize label-specific policy \(\mathcal{T}_{y_{i}}\leftarrow\emptyset\)
    for \(k=0,\cdots,N_{\text{cand}}\) do
        for \(\tau\in(A\setminus\mathcal{T}_{y_{i}})\) do
            calculate \(v(\tau,y_{i})\) using Equation (5)
        find the augmentation triple with the highest score \(\tau^{k}=\arg\max_{\tau}(v(\tau,y_{i}))\)
        \(\mathcal{T}_{y_{i}}\leftarrow\mathcal{T}_{y_{i}}\cup\tau^{k}\)
\(\mathcal{T}^{*}=\{\mathcal{T}_{y_{0}},\cdots,\mathcal{T}_{y_{n}}\}\)
```
**Algorithm 2** Stage 2: Policy Construction
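For readers who prefer executable code, a runnable rendering of Algorithm 2 and the score of Equation (5) is sketched below. Representing triples as tuples of operation names, computing the scaling term \(\overline{\tau}\) per label, and the `predictor(triple, label)` interface are assumptions made for illustration.

```python
def op_overlap(t1, t2):
    """Number of overlapping operations between two triples (multiset intersection)."""
    return sum(min(t1.count(op), t2.count(op)) for op in set(t1))


def mrmr_score(triple, label, selected, predictor, mean_reward, alpha=2.5):
    """Equation (5): predicted reward minus a scaled redundancy penalty, where the
    redundancy is the average operation overlap with already-selected triples."""
    reward = predictor(triple, label)
    if not selected:
        return reward
    redundancy = sum(op_overlap(triple, s) for s in selected) / len(selected)
    return reward - alpha * mean_reward * redundancy


def construct_policy(labels, search_space, predictor, n_cand=100, alpha=2.5):
    """Algorithm 2: greedily build a label-specific policy of n_cand triples that
    balances high predicted reward against operation redundancy."""
    policy = {}
    for y in labels:
        mean_reward = sum(predictor(t, y) for t in search_space) / len(search_space)
        selected, remaining = [], set(search_space)
        for _ in range(min(n_cand, len(search_space))):
            best = max(remaining, key=lambda t: mrmr_score(
                t, y, selected, predictor, mean_reward, alpha))
            selected.append(best)
            remaining.remove(best)
        policy[y] = selected
    return policy
```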
## 4 Experiments
In this section, we first describe the details of our experiment settings. Then we evaluate the proposed method, and compare it with previous methods in terms of both performance and search cost. Finally, we perform thorough analysis on the design of different modules in our algorithm. Code and searched policies are released at [https://github.com/Simpleple/LA3-Label-Aware-AutoAugment](https://github.com/Simpleple/LA3-Label-Aware-AutoAugment).
### Datasets, Metrics and Baselines
Following previous work, we evaluate our _LA3_ method on CIFAR-10/100 [14] and ImageNet [3], across different networks including ResNet [8], WideResnet [23], Shake-Shake [5] and PyramidNet [6]. Test accuracy is reported to assess the effectiveness of the discovered policies, while the cost is assessed by the number of GPU hours measured on Nvidia V100 GPUs. For a fair comparison, we list results of stationary policies produced by static strategies, AutoAugment [1], FastAA [16], and DADA [15]. We also include results from dynamic strategies, PBA [9], AdvAA [25], and MetaAug [26], producing non-stationary policies as target model training progresses.
### Implementation Details
**Policy Composition.** For a fair comparison, we use the same 15 augmentation operations as PBA and DADA do, which is also the same set used by AA and FastAA with SamplePairing [10] excluded. Additionally, "Identity" operation that returns the original image is introduced in our search space to prevent images from being excessively transformed. Each label-specific policy consists of \(N_{cand}=100\) augmentation triples, while in evaluation, each sample is augmented by an augmentation triple randomly selected from the policy with random magnitudes.
**Neural Predictor.** The network structure of the neural predictor is composed of two embedding layers of size 100 that map labels and augmentation operations to latent vectors and three fully-connected layers of hidden size 100 with Relu activation function. The representation of an augmentation triple is constructed by combining the three augmentation operation embedding vectors with mean-pooling and concatenating it with the label embedding vector. Then it is passed into the FC layers to derive the predicted reward. The predictor network is trained for 100 epochs with Adam optimizer [13] and a learning rate of 0.01.
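A minimal PyTorch rendering of this predictor is sketched below. The embedding and hidden sizes of 100, the three fully-connected layers with ReLU activations, the mean-pooled triple representation, and the Adam settings follow the text; the vocabulary sizes, the mean-squared-error objective, and the full-batch training loop are assumptions.

```python
import torch
import torch.nn as nn


class LabelAwarePredictor(nn.Module):
    """Predicts the reward of an augmentation triple for a given label: mean-pooled
    operation embeddings are concatenated with a label embedding and passed through
    three fully-connected layers with ReLU activations."""

    def __init__(self, num_ops=16, num_labels=10, dim=100):
        super().__init__()
        self.op_emb = nn.Embedding(num_ops, dim)
        self.label_emb = nn.Embedding(num_labels, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, triple_ids, label_ids):
        # triple_ids: (batch, 3) operation indices; label_ids: (batch,) label indices
        ops = self.op_emb(triple_ids).mean(dim=1)      # mean-pool the three operations
        lab = self.label_emb(label_ids)
        return self.mlp(torch.cat([ops, lab], dim=-1)).squeeze(-1)


def fit_predictor(model, triple_ids, label_ids, rewards, epochs=100, lr=0.01):
    """Fit the predictor on (triple, label, reward) evaluations with Adam,
    using an MSE objective (an assumption made for this sketch)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(triple_ids, label_ids), rewards)
        loss.backward()
        optimizer.step()
    return model
```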
**Search Details.** For CIFAR-10/100, we split the original training set of \(50,000\) samples into a training set \(D^{tr}\) of size \(46,000\) to pre-train the model \(\theta\), and a valid set \(D^{val}\) of \(4,000\) for density matching. We search our policy on WRN-40-2 network and apply the found policy to other networks for evaluation. For ImageNet, we randomly sample 50 examples per class from the original training set, and collect \(50,000\) examples in total to form the valid set, where the remaining examples are used as the training set. In the augmentation exploration stage, the total number of iterations is set to \(T=500\), and the warm-up iterations is set to \(T_{0}=100\). In the policy construction stage, \(\alpha=2.5\) is used to calculate the reward values of augmentation triples.
**Evaluation.** The evaluation is performed by training target networks with the searched policies, and the results are reported as the mean test accuracy and standard deviation over three runs with different random seeds. We do not specifically tune the training hyperparameters and use settings consistent with prior work. We include the details in the supplementary materials.
### Experimental Results
**CIFAR-10/100.** Table 1 summarizes the CIFAR-10 and CIFAR-100 results of different auto-augmentation methods on a wide range of networks. Among all static methods that produce stationary policies, _LA3_ achieves the best performance for all 5 target networks on CIFAR-10 and for 2 out of 4 target networks on CIFAR-100. When extending the comparison to also include dynamic strategies, _LA3_ still achieves the best CIFAR-10 and CIFAR-100 accuracies on WRN-40-2, which is the original network on which policy search was performed. When transferring these augmentation policies found on WRN-40-2 to other target network models for evaluation, _LA3_ also achieves excellent performance comparable to the current best methods. In particular, _LA3_ achieves the highest score for WRN-28-10 on CIFAR-100. These results evidently prove the effectiveness of _LA3_ as an augmentation strategy to improve model performance, and demonstrate the strong transferability of our label-aware policies across different neural networks.
**ImageNet Performance.** In Table 2, we list the top-1 accuracy of different methods evaluated on ResNet-50, as well as their computational cost. For a fair comparison, we also indicate whether the Batch Augment (BA) trick [25], which forms a large batch with multiple copies of transformed samples, is used for each method, with "(BA)" after the method name. We also indicate the number of transformations used in the batch augment. Note that the search cost for dynamic methods is included in the training cost, since they learn a dynamic augmentation policy during the training of the target model. We include the results for _LA3_ both with and without batch augment.
From Table 2 we can observe that among all methods without the batch augment trick, _LA3_ achieves the best ImageNet top-1 accuracy of 78.71%, while the search only took 29.3 GPU hours, which is 15 times faster than FastAA. Although DADA is faster, _LA3_ is substantially better in terms of the ImageNet accuracy achieved.
\begin{table}
\begin{tabular}{l l|c c c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Baseline**} & **AA** & **FastAA** & **DADA** & **LA3** & **PBA** & **AdvAA** & **MetaAug** \\ & & & static & static & static & static & dynamic & dynamic & dynamic \\ \hline CIFAR-10 & WRN-40-2 & 94.7 & 96.3 & 96.4 & 96.4 & \(\mathbf{97.08\pm 0.08}\) & \(-\) & \(-\) & 96.79 \\ & WRN-28-10 & 96.1 & 97.4 & 97.3 & 97.3 & \(\mathbf{97.80\pm 0.15}\) & 97.42 & 98.10 & 97.76 \\ & Shake-Shake (26 \(\pm\)396d) & 97.1 & 98.0 & 98.0 & \(\mathbf{98.07\pm 0.11}\) & 97.97 & 98.15 & 98.29 \\ & Shake-Shake (26 \(\pm\)121d) & 97.2 & 98.1 & 98.1 & 98.0 & \(\mathbf{98.12\pm 0.08}\) & 97.97 & 98.22 & 98.28 \\ & PyramidNet+ShakeDrop & 97.3 & 98.5 & 98.3 & 98.3 & \(\mathbf{98.55\pm 0.02}\) & 98.54 & 98.64 & 98.57 \\ \hline CIFAR-100 & WRN-40-2 & 74.0 & 79.3 & 79.4 & 79.1 & \(\mathbf{81.09\pm 0.28}\) & \(-\) & \(-\) & 80.60 \\ & WRN-28-10 & 81.2 & 82.9 & 82.8 & 82.5 & \(\mathbf{84.54\pm 0.03}\) & 83.27 & 84.51 & 83.79 \\ & Shake-Shake (26 \(\pm\)396d) & 82.9 & \(\mathbf{85.7}\) & 85.4 & 84.7 & \(85.17\pm 0.13\) & 84.69 & 85.90 & 85.97 \\ & PyramidNet+ShakeDrop & 86.0 & \(\mathbf{89.3}\) & 88.3 & 88.8 & \(89.02\pm 0.03\) & 89.06 & 89.58 & 89.46 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-1 test accuracy (%) on CIFAR-10 and CIFAR-100. We mainly compare our method _LA3_ with methods that also produce stationary augmentation policies, including AA, FastAA and DADA. Results of dynamic policies (PBA, AdvAA and MetaAug) are also provided for reference.
Meanwhile, _LA3 (BA)_ achieves a new state-of-the-art ImageNet accuracy of 79.97% surpassing all existing auto-augmentation strategies including dynamic strategies AdvAA and MetaAug, with a total computational cost 2 times and 3 times lower than theirs, respectively. The high cost of these dynamic policies is due to the fact that augmentation policies may vary for each sample or batch and must be learnt together with model training. By generating static policies, _LA3_ is a simpler solution that decouples policy search from model training and evaluation, which is easier to deploy in a production environment, without introducing specialized structures, e.g., the policy networks in AdvAA and MetaAug, into target model training.
### Ablation Study and Analysis
The success of _LA3_ can be attributed to the following design choices in our algorithm.
**Label-Awareness.** One of the main contributions of the paper is to leverage the label information and separately learn policies for samples of different classes, which captures distinct characteristics of data and produces more effective label-aware policies. The results of _LA3_ variant without label-awareness (i.e., searching for label-invariant policies) are shown in the first row of Table 3, which are constantly lower than _LA3_ in all experimental settings. This confirms that label-aware augmentation policies are effective at improving target network accuracy.
Figure 3 gives an overview of the searched label-aware policies on CIFAR-10, CIFAR-100 and ImageNet, where we calculate the occurrences of different operations in each label-specific policy and plot their proportions in different colors. We can see that the derived policies possess a high diversity by having all the operations contributing to the final policy, meanwhile making the individual policies notably different among labels. This observation further proves the need for separately treating samples of different labels in augmentation policy search.
**Neural Predictor.** In addition to using density matching to simplify augmentation assessment during search, we have adopted a label-aware neural predictor to learn the mapping from an augmentation triple to its label-specific
\begin{table}
\begin{tabular}{l|c c c c c|c c c} \hline \hline & **Baseline** & **AA** & **FastAA** & **DADA** & **LA3** & **LA3 (BA)** & **AdvAA (BA)** & **MetaAug (BA)** \\ & & static & static & static & static & static & dynamic & dynamic \\ \hline Batch Augment (BA) & n/a & n/a & n/a & n/a & n/a & \(\times 4\) & \(\times 8\) & \(\times 4\) \\ \hline
**ResNet-50 Acc (\%)** & 76.3 & 77.6 & 77.6 & 77.5 & \(\mathbf{78.71\pm 0.07}\) & \(\mathbf{79.97\pm 0.07}\) & 79.40 & 79.74 \\ \hline \hline
**Search Cost (h)** & \(-\) & \(15,000\) & 450 & 1.3 & 29.3 & 29.3 & \(-\) & \(-\) \\
**Train Cost (h)** & 160 & 160 & 160 & 160 & 160 & 640 & \(1,280\) & \(1,920\) \\ \hline
**Total Cost (h)** & 160 & \(15,160\) & 610 & 161.3 & 189.3 & 669.3 & \(1,280\) & \(1,920\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: ResNet-50 top-1 test accuracy (%) and computational cost on ImageNet. Batch Augment (BA) trick is used in the training of _LA3_ (BA), AdvAA (BA) and MetaAug (BA). The number of transformations used in batch augment is also given in the table.
reward. We now conduct a thorough evaluation to assess the performance of the neural predictor. For each search iteration, the predictor is trained on 80% of the history data and tested on the remaining 20% of the data in terms of both the Spearman's Rank Correlation and Mean Absolute Error (MAE). As shown in Figure 4, as the policy search on ImageNet progresses and more samples are explored, the predictor produces more accurate predictions of rewards, obtaining a 0.78 Spearman Correlation and a decreased MAE when the search ends. This allows the predictor to properly guide the search process and find effective policies.
Furthermore, the use of the predictor better utilizes the search history and improves the sample efficiency during searching. As a result, the search cost of our method is significantly reduced and is 15 times lower than FastAA.
**Policy Construction.** We evaluate the impact of our two-stage design on the CIFAR-10 and CIFAR-100 datasets by showing the performance of model variants with different policy construction methods in rows 2 and 3 of Table 3.
We compare our policy construction method based on mRMR to the commonly used Top-k selection method adopted in AA [1], FastAA [16] and DADA [15]. We use two different \(k\) value settings of \(k=100\) equaling the number of candidates used in _LA3_, and \(k=500\) following the FastAA setting. We can see that the policy that includes 500 augmentation triples per label with top predicted
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \hline & WRN-40-2 WRN-28-10 & WRN-40-2 WRN-28-10 \\ \hline w/o Label-aware & 96.70 & 97.11 & 80.08 & 82.76 \\ w/o Stage 2 (top-100) & 96.53 & 97.49 & 78.57 & 82.76 \\ w/o Stage 2 (top-500) & 96.70 & 97.26 & 79.85 & 84.04 \\ \hline
**LA3** & **97.08** & **97.80** & **81.09** & **84.54** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation analysis results in top-1 test accuracy (%) on CIFAR-10 and CIFAR-100 with different designs removed from the full _LA3_ method.
Figure 3: The proportion of different augmentation operations in policies for different labels in _LA3_ searched label-aware policies on CIFAR-10, CIFAR-100 and ImageNet.
rewards yields better performance than the policy with the top 100 augmentation triples on both CIFAR-10 and CIFAR-100. This can be attributed to the greater diversity, as more possible augmentations are included. However, increasing the k value is not the best way to improve augmentation diversity, as augmentation triples with high rewards tend to have similar compositions and may result in a high degree of redundancy in the final policy. Our _LA3_ instead incorporates a policy construction method that selects high-reward augmentation triples while keeping the redundancy of the final policy low. With this two-stage design, our _LA3_ method beats the top-k variants and produces significant improvements in all settings.
**Limitation.** Unlike dataset-level augmentation policies that can be learned from one dataset and transferred to other datasets [1, 9, 25], _LA3_ learns label-aware policies where labels are specific to a dataset, and hence lacks the transferability across datasets, although _LA3_ demonstrates transferability across networks as shown in Table 1. However, when dealing with a large dataset, _LA3_ can work on a reduced version of the dataset to search for label-dependent policies efficiently, and requires no tuning on training recipes when applying the found policy to the entire dataset.
## 5 Conclusion
In this paper, we propose a label-aware data augmentation search algorithm where label-specific policies are learned based on a two-stage algorithm, including an augmentation exploration stage based on Bayesian Optimization and neural predictors as well as a composite policy construction stage. Compared with existing static and dynamic augmentation algorithms, _LA3_ is computationally efficient and produces stationary policies that can be easily deployed to improve deep learning performance. _LA3_ achieves the state-of-the-art ImageNet accuracy of 79.97% on ResNet-50 among all auto-augmentation methods, at a substantially lower search cost than AdvAA and MetaAugment.
Figure 4: The evaluation of the predictor during the policy search on ImageNet given by the Spearman’s Rank Correlation and Mean Absolute Error over search iterations. |
2302.03784 | Leveraging User-Triggered Supervision in Contextual Bandits | We study contextual bandit (CB) problems, where the user can sometimes
respond with the best action in a given context. Such an interaction arises,
for example, in text prediction or autocompletion settings, where a poor
suggestion is simply ignored and the user enters the desired text instead.
Crucially, this extra feedback is user-triggered on only a subset of the
contexts. We develop a new framework to leverage such signals, while being
robust to their biased nature. We also augment standard CB algorithms to
leverage the signal, and show improved regret guarantees for the resulting
algorithms under a variety of conditions on the helpfulness of and bias
inherent in this feedback. | Alekh Agarwal, Claudio Gentile, Teodor V. Marinov | 2023-02-07T22:42:27Z | http://arxiv.org/abs/2302.03784v1 | # Leveraging User-Triggered Supervision in Contextual Bandits
###### Abstract
We study contextual bandit (CB) problems, where the user can sometimes respond with the best action in a given context. Such an interaction arises, for example, in text prediction or autocompletion settings, where a poor suggestion is simply ignored and the user enters the desired text instead. Crucially, this extra feedback is _user-triggered_ on only a subset of the contexts. We develop a new framework to leverage such signals, while being robust to their biased nature. We also augment standard CB algorithms to leverage the signal, and show improved regret guarantees for the resulting algorithms under a variety of conditions on the helpfulness of and bias inherent in this feedback.
Machine Learning, ICML
## 1 Introduction
Consider a learning agent for predicting the next word as a user composes a text document or an email. Such an agent can be pre-trained on an offline dataset of documents to predict the next word according to a language model, but it is often desirable to further improve the models for the task at hand, based on the data collected upon deployment. Such an improvement from logged data is not amenable to supervised learning, as we only observe whether a user liked the suggestions showed by the model, with no feedback on the quality of other actions. Consequently, a popular paradigm to model such settings is that of Contextual Bandits (CB), where the model is optimized to maximize a notion of reward, such as the likelihood of the predicted word being accepted by the user. The CB approach has in fact been successfully and broadly applied in online recommendation settings, owing to a natural fit of the learning paradigm.
However, in the example of next word prediction above, the standard CB model ignores important additional signals. When the user at hand does not accept the recommended word, they typically enter the desired word, which is akin to a supervised feedback on the best possible word in that scenario. How should we leverage such an extra modality of feedback along with the typical reward signal in CBs? While prior works have developed hybrid models such as learning with feedback graphs (e.g., (Mannor & Shamir, 2011; Caron et al., 2012; Alon et al., 2017)) to capture a continuum between supervised and CB learning, such settings are not a natural fit here. A key challenge in the feedback structure is that the extra supervised signal is only available on a subset of the contexts, which are _chosen by the user_ as some unknown function of the algorithm's recommended action. We term this novel learning setting _CB with User-triggered Supervision_ (CBUS). In this paper, we develop theoretical frameworks and algorithms to address CBUS problems.
In addition to the supervision being user triggered, an additional challenge in the CBUS setting is that, unlike in learning with feedback graphs, the supervised feedback and the reward signal are not naturally available in the same units. For instance, in the next word prediction setting, a natural reward metric might be _time-to-completion_ (TTC), that is, the time a user takes to enter a word (either accepting a recommended word or typing it manually). When the user does not accept the recommended word, they will enter a new word manually, and it is natural to expect that the TTC would be minimized if this new word were recommended instead. Since we do not know the TTC for any other word, this makes it challenging to reconcile the supervised feedback with the CB rewards. To overcome this issue, we develop a constrained optimization framework, where the learner seeks to optimize its CB reward while also trying to do well under the expected supervised learning error. The intuition is to guide the learner to a reasonable family of models using the supervised performance constraint, among which reward optimization can be fine-tuned for the performance metric that we eventually want to maximize.
Our work can be considered as part of the CB literature with constraints which has been extensively studied in several different settings. For a more careful discussion of these settings we refer the reader to Appendix A. Prior work can be roughly split into three categories. First is bandits with knapsacks where the additional constraint is modeled as a knapsack problem and the game ends when the knapsack constraint is exceeded (Badanidiyuru et al., 2018;
Tran-Thanh et al., 2010, 2012; Ding et al., 2013; Xia et al., 2015; Zhu and Nowak, 2022; Agrawal and Devanur, 2014; Wu et al., 2015; Agrawal and Devanur, 2016; Sun et al., 2017; Immorlica et al., 2022; Sivakumar et al., 2022). Second is conservative bandits, where the player has to play a policy which is never much worse than a baseline (Wu et al., 2016; Kazerouni et al., 2017; Garcelon et al., 2020; Lin et al., 2022; Garcelon et al., 2020). Perhaps closest to our work is the setting in which there exist two distributions, one over rewards for actions and one over costs; the goal is to maximize the expected reward while ensuring that the expected cost of the selected action is below a certain threshold (Amani et al., 2019; Moradipari et al., 2021; Pacchiano et al., 2021). Crucially, none of these frameworks allows for observing the constraint only on an uncontrolled subset of the rounds, which is a key challenge of the CBUS setting.
**Our Contributions.** In addition to formalizing the CBUS framework for the learning settings of interest, our paper makes the following key contributions.
1. _Constrained formulation:_ We propose a new constrained optimization approach for solving CBUS problems, where the objective encourages reward maximization and constraints capture fidelity to the supervised feedback. The constraints are enforced across all the rounds, independent of whether we observe the supervised feedback.
2. _Lower bound:_ We show a fundamental tradeoff between the best attainable regret in terms of the bandit rewards and the supervised constraints. Informally, we show that the learner incurs an \(\Omega(T^{2/3})\) regret on at least one of the expected reward or constraint violation, over \(T\) rounds.
3. _Simple and optimal algorithm:_ We develop an explore-first strategy (EFBO) which performs initial exploration to gather a diverse dataset for both the CB rewards and the supervised feedback. We then solve the constrained optimization problem on this dataset using a saddle-point approach, and provide guarantees on the regret and constraint violation of EFBO. The guarantees improve upon those for learning from supervised or CB signals alone, under an alignment condition on the two sources, and scale as \(O(T^{2/3})\), matching the lower bound.
4. _Leveraging favorable distributions:_ We develop an Exp4-based algorithm that can benefit from favorable conditions on the user, such as when feedback from the user is only withheld if the selected action has a small supervised learning error. This algorithm enjoys improved \(O(\sqrt{T})\) regret, both for reward and constraint violation, allowing us to go beyond the lower bound by leveraging problem structure. We also design an active learning strategy to exploit helpful structures in the constraint function.
## 2 Problem Setting and a Lower Bound
In this section, we formally describe the CBUS learning protocol, and also give a lower bound on the fundamental trade-off between the achievable regret on CB rewards and that on user supervision.
### The CBUS Problem Setting
We are given a context space \(\mathcal{X}\) and an action space \([K]\) of size \(K\geq 2\). In the CBUS protocol, the learner observes some context \(x_{t}\in\mathcal{X}\) at time \(t\), and has to choose an action \(a_{t}\in[K]\). Upon choosing \(a_{t}\), one of two things happens:
1. The learner observes the reward \(r_{t}\sim D_{b}(\cdot|x_{t},a_{t})\), \(r_{t}\in[0,1]\), for the chosen action from the conditional reward distribution \(D_{b}(\cdot|x_{t},a_{t})\), given the context \(x_{t}\) and the action \(a_{t}\) at hand, or
2. The learner observes \(r_{t}=0\) together with a special action \(\bar{a}_{t}=\bar{a}(x_{t})\), and has access to a _surrogate loss_ function \(\Delta(a,a^{\prime};x_{t})\) for any \(a\) relative to \(a^{\prime}\), given context \(x_{t}\). The rounds \(t\) on which \(r_{t}=0\) is observed _are not under the learner's_ control ("user triggered"), and we define an indicator \(\xi_{t}=1\) to track these rounds.
Given an input tolerance \(\epsilon>0\), a (finite) policy space \(\Pi\) of functions \(\pi(\cdot)\) mapping contexts to actions, and a distribution \(D\) over \(\mathcal{X}\), we wish to solve the following policy optimization problem:
\[\begin{split}\max_{\pi\in\Pi}\ &\mathbb{E}_{x\sim D}\,\mathbb{E}_{D_{b}}[r|x,\pi(x)]\qquad\text{(Performance)}\\ \text{s.t.}\ &\mathbb{E}[\Delta(\pi(x),\bar{a}(x);x)]\leq\min_{\pi^{\prime}\in\Pi}\mathbb{E}[\Delta(\pi^{\prime}(x),\bar{a}(x);x)]+\epsilon.\qquad\text{(Fidelity)}\end{split} \tag{1}\]
In words, we would like to find a policy \(\pi\in\Pi\) that maximizes the expected reward, subject to the constraint that, on average over the contexts, the surrogate loss between the action selected by \(\pi\) and the special action \(\bar{a}\) exceeds the minimal expected surrogate loss achieved by policies in \(\Pi\) by no more than \(\epsilon\). Note that \(\bar{a}(x)\) can be random, and the expectation in the constraint includes the randomness in both \(x\) and \(\bar{a}(x)\). We call the expected reward our _performance_ criterion, and the expected surrogate loss constraint our _fidelity_ criterion.
We now illustrate how this formulation captures relevant practical scenarios.
**Example 1** (Next word prediction).: _As a first motivating example, consider the next word prediction problem dis
cussed in Section 1. The context \(x_{t}\) consists of the preceding text, as well as any prior information on the user's writing style, demographics, etc. Feasible actions in a context \(x_{t}\) might be plausible next words proposed by some base model, and the reward \(r_{t}\) can be binary, based on the user accepting the suggested word, or more fine-grained such as TTC. The latter might reward the learner more for correct predictions on longer words, for instance, than for common and short stop words. If the recommendation is not accepted (the learner observes \(r_{t}=0\)) the word entered by the user provides \(\bar{a}(x_{t})\), and \(\Delta(a,\bar{a}(x_{t});x_{t})\) can be a contextual measure of word similarity, such as distance in a word embedding space. The objective (1) then incentivizes the maximization of the desired performance metric, while guaranteeing fidelity to the ground-truth signals provided by the user.
**Example 2** (Rich in-session interaction).: _As another example of CBUS, consider a user interacting with a recommendation system through multiple modes, like clicks, conversions, and textual queries. The goal of the recommendation system is to improve user experience by minimizing the time it takes for the user to find the information they are looking for. Each round \(t\) is a user session. The user may start the session by entering some text (say a product they are interested in buying), the system may respond with a list of links to relevant products, then the user may react by either clicking on some product in the list or decide to refine their search by entering new and possibly more specific text. In this case, the context \(x_{t}\) may encode the user's past behavior from previous sessions, as well as the initial query typed in during session \(t\), the set of actions may include content which are relevant to this initial query, the reward \(r_{t}\) may be some function of the value of a click or a conversion on one of the recommended items/products, while the fact that the initial recommendations are not accepted (\(r_{t}=0\)) are witnessed by the extra text the user decides to type in. In this case, \(\bar{a}(x_{t})\) may encode the "correct" product for \(x_{t}\) as evinced by the new and more specific query the user enters. Finally, \(\Delta(a,a^{\prime};x_{t})\) can be a contextual measure of pairwise similarity between items/products._
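Abstracting away from these examples, a single round of the CBUS protocol can be summarized by the following minimal simulation sketch. The environment object and its methods are illustrative assumptions; in particular, how the user-triggered indicator, the special action \(\bar{a}(x_{t})\), and the rewards are generated is left unspecified by the protocol and is only mocked up here.

```python
def cbus_round(env, policy):
    """One round of the CBUS protocol: observe a context, play an action, then
    receive either a bandit reward or the user's special action together with
    access to the surrogate loss against it."""
    x_t = env.sample_context()
    a_t = policy(x_t)
    if env.user_triggers(x_t, a_t):                 # xi_t = 1: the user intervenes
        a_bar = env.special_action(x_t)             # ground-truth action from the user
        surrogate = {a: env.delta(a, a_bar, x_t) for a in env.actions}
        return dict(context=x_t, action=a_t, reward=0.0, xi=1,
                    special_action=a_bar, surrogate=surrogate)
    return dict(context=x_t, action=a_t, reward=env.sample_reward(x_t, a_t), xi=0,
                special_action=None, surrogate=None)
```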
A key challenge here is that the feedback \(\bar{a}(x_{t})\) is only observed on a subset of the rounds which are not controlled by the algorithm. Yet, the fidelity constraint seeks to enforce it in expectation over the full context distribution, and we are unable to correctly estimate this expectation using feedback only from the rounds where we observe \(\bar{a}(x_{t})\). For ease of presentation, we use \(\xi_{t}\) to denote the indicator of whether \(\bar{a}(x_{t})\) was observed at time \(t\), and note that the distribution of \(\xi\) as a random variable depends both on the context \(x\) and the learner's action \(a\). We measure the sub-optimality of any policy \(\pi\) relative to the solution, \(\pi^{*}\), of the problem in (1) by the pseudo-regret1 over \(T\) rounds of interaction with the environment incurred by \(\pi\) with respect to the objective and the constraint, respectively, defined as follows:
Footnote 1: For simplicity we refer to the pseudo-regret as regret.
\[\text{Reg}_{r}(\pi) =\left(\mathbb{E}[r(\pi^{*}(x),x)]-\mathbb{E}[r(\pi(x),x)]\right)\] \[\text{Reg}_{c}(\pi) =\left(\mathbb{E}[\Delta(\pi(x),\bar{a}(x);x)]-\mathbb{E}[\Delta (\pi^{*}(x),\bar{a}(x);x)]\right)\,.\]
For any distribution, \(Q\in\Delta(\Pi)\), over the policies \(\Pi\), we define \(\text{Reg}_{r}(Q)=\mathbb{E}_{\pi\sim Q}[\text{Reg}_{r}(\pi)]\), and \(\text{Reg}_{c}(Q)\) in a similar manner. Finally, for any algorithm \(\mathcal{A}\) which produces a sequence of distributions \((Q_{t})_{t\in[T]}\), we define
\[\text{Reg}_{r}(\mathcal{A},T)=\sum_{t=1}^{T}\text{Reg}_{r}(Q_{t})\,\]
and define \(\text{Reg}_{c}(\mathcal{A},T)\) similarly by using \(\Delta\) instead of \(r\). The upper and lower regret bounds that we prove will all be in expectation with respect to the randomness in the algorithm as well; that is, we show upper and lower bounds on \(\mathbb{E}[\text{Reg}_{r}(\mathcal{A},T)]\) and \(\mathbb{E}[\text{Reg}_{c}(\mathcal{A},T)]\).
### Revealing assumption and min-max rates
In order to better understand our problem, the first thing to observe is that it can be arbitrarily hard to perform well on objective (1), in the sense of simultaneously controlling both \(\text{Reg}_{r}\) and \(\text{Reg}_{c}\). This is due to the user-triggered nature of the supervised signal \(\bar{a}(x)\). As an extreme case, suppose \(\bar{a}(x)\) is never revealed by the user, even when the chosen actions are highly suboptimal under \(\Delta\); then \(\text{Reg}_{c}\) will clearly be \(\Omega(T)\). However, this does not correspond to natural scenarios, since we expect the user not to accept bad recommendations, and hence there should typically be actions which lead to the revelation of \(\bar{a}(x)\) in any context. Another common alternative is to simply omit a recommendation when we hope to elicit the ground truth. We now make a concrete assumption to formalize this intuition and avoid trivial lower bounds.
**Assumption 1** (Revealing action).: _There exists a revealing action \(a_{0}\in\mathcal{A}\) such that whenever the learner selects \(a_{0}\) they get to observe \(\bar{a}(x)\), that is, they get to observe the full feedback for the constraint given by \(\Delta(\cdot,\bar{a}(x);x)\)._
Note that the revealing action can be context dependent in general, so long as it is known, and all of our work is fully compatible with this generalization. We use a fixed revealing action \(a_{0}\) solely for notational simplicity.
Even under the availability of \(a_{0}\), the learner faces a more nuanced exploration dilemma. It can engage in natural exploration over \(\Pi\) for optimizing rewards, and obtain incidental and biased observations of \(\bar{a}(x)\), or occasionally
choose \(a_{0}\) to learn about the constraint. This sets up a potential trade-off between the two regrets \(\text{Reg}_{r}\) and \(\text{Reg}_{c}\), and we next give a fundamental characterization of the best achievable trade-off.
**Theorem 1** (Lower bound).: _For any algorithm, \(\mathcal{A}\), which has constraint regret at most \(\mathbb{E}[\text{Reg}_{c}(\mathcal{A},T)]\), there exists an instance on which the algorithm suffers reward regret_
\[\mathbb{E}[\text{Reg}_{r}(\mathcal{A},T)]=\Omega\left(\min\left(T\epsilon, \frac{T}{\sqrt{\mathbb{E}[\text{Reg}_{c}(\mathcal{A},T)]}}\right)\right)\.\]
We defer the construction of the problem instance and the proof of Theorem 1 to Appendix B. The lower bound shows that in general it is not possible to achieve \(O(\sqrt{T})\) regret for both the reward and the constraint under Assumption 1. We note that it may be possible to achieve \(O(T^{2/3})\) regret simultaneously for the constraints and the reward (ignoring any dependence on the size of the action set and policy class). In general if the regret for the constraint is \(O(T^{\alpha})\) then there exists an environment in which the algorithm incurs \(\Omega(T^{1-\alpha/2})\) regret for the reward.
## 3 A Simple and Optimal Algorithm
To build intuition for the setting, we begin with an explore-first strategy which performs an initial exploration to separately learn about the rewards and the constraint. The exploration data is used to find a near-optimal solution to (1). While explore-first is statistically sub-optimal in an unconstrained scenario, this approach will be shown to match our lower bound in the constrained setting. We start with the algorithm and then present the regret guarantee.
### The Explore First, Blend Optimally Algorithm
Given any \(T_{0}\leq T/2\), we might choose random actions for the first \(T_{0}\) rounds and the revealing action \(a_{0}\) for the subsequent \(T_{0}\) rounds to form estimators for the reward and constraint violation for any policy \(\pi\in\Pi\) as:
\[\widehat{R}(\pi)= \frac{K}{T_{0}}\sum_{t=1}^{T_{0}}r_{t}1(a_{t}=\pi(x_{t}))\, \tag{2}\] \[\widehat{\text{Reg}}_{c}(\pi)= \frac{1}{T_{0}}\Big{[}\sum_{t=T_{0}+1}^{2T_{0}}\Delta_{t}(\pi(x_ {t}))-\min_{\pi^{\prime}\in\Pi}\sum_{t=T_{0}+1}^{2T_{0}}\Delta_{t}(\pi^{ \prime}(x_{t}))\Big{]},\]
where \(\Delta_{t}\) is a shorthand for \(\Delta(\cdot,\bar{a}_{t};x_{t})\). Then we might solve an empirical version of the objective (1), and use standard concentration arguments to guarantee good performance in terms of regret. However, this simple approach has a significant drawback.
Suppose that \(\Delta\) and the reward distribution \(D_{b}\) are perfectly aligned, so that \(\mathbb{E}[r|x,a]=1-\mathbb{E}[\Delta(a,\bar{a}(x);x)|x,a]\) for all \(x\) and \(a\). Then choosing the revealing action \(a_{0}\) reveals the rewards of all the actions, and hence we would expect guarantees compatible with supervised learning, where the suboptimality of the learned policy decays as \(\sqrt{\ln|\Pi|/T_{0}}\) for both the objective and the constraint. On the other hand, the two distributions could be quite misaligned too, in which case the best reward suboptimality we can guarantee is \(\sqrt{K\ln|\Pi|/T_{0}}\), incurring an additional \(K\) factor due to the uniform exploration for learning the reward structure. Since we expect practical settings to be somewhere between these two extremes, we leverage ideas from Zhang et al. (2019) to take advantage of any available (unknown) alignment between the rewards and the constraints.
The algorithm, which we name Explore First, Blend Optimally (EFBO) is presented in Algorithm 1. For an exploration parameter \(T_{0}\), the algorithm chooses different types of exploration over \(4T_{0}\) rounds. For the \(2T_{0}\) rounds in \([T_{0}]\cup[3T_{0}+1,4T_{0}]\) we explore uniformly over the actions and record the rewards obtained. For the \(2T_{0}\) rounds in \([T_{0}+1,3T_{0}]\) we choose the revealing action \(a_{0}\) and observe \(\bar{a}(x_{t})\). Now we form the \(\mu\)-_blended reward estimator_:
\[\widehat{R}_{\mu}(\pi)=\mu\widehat{R}(\pi)+(1-\mu)\sum_{t=2T_{0}+1}^{3T_{0}} \frac{(1-\Delta_{t}(\pi(x_{t})))}{T_{0}}. \tag{3}\]
We note here that more generally, any other known function \(g(\Delta)\) can be used to transform the constraints to be more compatible with rewards, in place of the choice \(g(u)=1-u\) used here. As long as the function takes bounded values, most of our results directly extend to this generalization. We still use the same constraint estimator as in (2) (so the constraint and reward estimators use observations of \(\Delta_{t}\) from disjoint rounds). Next, we need to solve a constrained optimization problem with the objective \(\widehat{R}_{\mu}(\pi)\) and constraint \(\widehat{\text{Reg}}_{c}(\pi)\leq\epsilon\). In particular, we assume that we are given a class \(\mathcal{M}\) of candidate \(\mu\)-values, and find the best policy for each \(\mu\in\mathcal{M}\). Following prior works (e.g., (Langford & Zhang, 2007; Agarwal et al., 2014, 2018)), we only assume the ability to solve reward maximization problems over the policy class, which is needed even in the unconstrained case. We use a common primal-dual approach to solve the constrained problem by defining a Lagrangian for any \(Q\in\Delta(\Pi)\) as:
\[\widehat{\mathcal{L}}_{\mu}(Q,\lambda)=\widehat{R}_{\mu}(Q)-\lambda\widehat{ \text{Reg}}_{c}(Q), \tag{4}\]
where \(\widehat{R}_{\mu}(Q)\) and \(\widehat{\text{Reg}}_{c}(Q)\) are defined via expectations under policy distributions just like true rewards and regrets. Lines 7-9 in Algorithm 1 optimize the empirical saddle-point objective
\[\max_{Q\in\Delta(\Pi)}\min_{\lambda\in[0,B]}\widehat{\mathcal{L}}_{\mu}(Q,\lambda).\]
The optimization uses the approach pioneered by Freund and Schapire (1996) to interpret the objective as a two player zero-sum game, which is solved by alternating between a best response strategy for the policy player, and a no-regret strategy for the \(\lambda\) player. The best response corresponds to finding the best policy under an appropriate reward definition (line 8), since all \(\pi\)-dependent terms in \(\widehat{\mathcal{L}}(\pi,\lambda)\) are just functions of \(\pi(x_{t})\), and \(\widehat{\mathcal{L}}(Q,\lambda)\) is optimized at a point mass on some policy \(\pi\in\Pi\), due to the linearity in \(Q\). We optimize over the scalar \(\lambda\) using the Multiplicative Weight Updates algorithm (MWU) (Arora et al., 2012) together with a clipping operator (in line 9), which is a standard no-regret strategy for bounded subsets of the positive orthant. Alternating these steps for \(S\) iterations yields an approximate solution for each fixed \(\mu\in\mathcal{M}\), denoted by \(\widehat{Q}_{\mu}\). Hence, we expect that all \(\widehat{Q}_{\mu}\) are feasible, but differ in their performance on the rewards. We then select the distribution \(\widehat{Q}_{\mu}\) with the highest empirical reward, evaluated on the second set of \(T_{0}\) rewards collected by uniform exploration. That is our selected distribution is \(\widehat{Q}_{\widehat{\mu}}\) where
\[\widehat{\mu}=\underset{\mu\in\mathcal{M}}{\text{argmax}}\frac{1}{T_{0}} \big{\langle}\widehat{Q}_{\mu},\sum_{t=3T_{0}+1}^{4T_{0}}\widehat{r}_{t}(\cdot,x_{t})\big{\rangle}\;, \tag{5}\]
where \(\widehat{r}_{t}(a,x_{t})=Kr_{t}\mathbb{1}(a=a_{t})\). Finally, we play \(\widehat{Q}_{\widehat{\mu}}\) for the remainder of the game.
```
Input: \(4T_{0}\) rounds of exploration, \(B\), \(S\) parameters for constraints accuracy, set of mixture weights \(\mathcal{M}\)
Output: Distribution \(\widehat{Q}_{\widehat{\mu}}\) in \(\Delta(\Pi)\)
1:for\(t\in[T_{0}]\cup[3T_{0}+1,4T_{0}]\)do
2: Choose \(a_{t}\sim Unif([K])\), observe reward \(r_{t}(a_{t},x_{t})\)
3:for\(t\in[T_{0}+1,3T_{0}]\)do
4: Choose \(a_{t}=a_{0}\) and observe \(\Delta(\cdot,\bar{a}(x_{t});x_{t})\)
5:for\(\mu\in\mathcal{M}\)do
6:\(\lambda_{1,\mu}=\frac{1}{B}\), \(Q_{1,\mu}=\underset{Q\in\Delta(\Pi)}{\text{argmax}}\widehat{\mathcal{L}}_{ \mu}(Q,\lambda_{1,\mu})\) (Eq. (4))
7:for\(s\in[S]\)do
8:\(Q_{s,\mu}=\underset{Q\in\Delta(\Pi)}{\text{argmax}}\widehat{\mathcal{L}}_{ \mu}(Q,\lambda_{s,\mu})\)
9:\(\lambda_{s+1,\mu}=\operatorname{clip}\Big{[}MWU(\lambda_{s,\mu},\widehat{ \mathcal{L}}_{\mu}(Q_{s,\mu},\lambda_{s,\mu}))|B\Big{]}\)
10:\(\widehat{Q}_{\mu}=(Q_{1,\mu}+\ldots+Q_{S,\mu})/S\)
11:return\(\widehat{Q}_{\widehat{\mu}}\) (see Eq. 5)
```
**Algorithm 1** Explore First, Blend Optimally (EFBO)
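To make the optimization step concrete, the following is a minimal Python sketch of the inner saddle-point loop (lines 6-10 of Algorithm 1), assuming a small finite policy class; the empirical quantities \(\widehat{R}_{\mu}\) and \(\widehat{\text{Reg}}_{c}\) of Eqs. (2)-(3) are taken as precomputed arrays, and the exponentiated-gradient form of the \(\lambda\) update (including the explicit \(\epsilon\) threshold) is an illustrative simplification rather than the exact MWU variant analyzed above. All names are ours.

```python
import numpy as np

# Minimal sketch of the saddle-point step of EFBO (lines 6-10 of Algorithm 1),
# assuming a small finite policy class.  `R_mu` holds the blended empirical
# rewards of Eq. (3) and `RegC` the empirical constraint regrets of Eq. (2),
# one entry per policy.  The exponentiated-gradient form of the lambda update
# and the explicit "- eps" term are illustrative simplifications.

def saddle_point(R_mu, RegC, eps, B, S, eta=0.1):
    """Approximate max_Q min_lambda of the empirical Lagrangian (Eq. 4)."""
    n_policies = len(R_mu)
    lam = 1.0 / B                       # dual variable initialization (line 6)
    Q_sum = np.zeros(n_policies)        # running sum of best responses
    for _ in range(S):
        # best response of the policy player: point mass on the best policy
        # under the Lagrangian reward (line 8)
        best = int(np.argmax(R_mu - lam * RegC))
        Q_sum[best] += 1.0
        # simplified multiplicative-weights update of lambda, clipped to [0, B]
        # (line 9): lambda grows when the constraint looks violated
        lam = float(np.clip(lam * np.exp(eta * (RegC[best] - eps)), 0.0, B))
    return Q_sum / S                    # averaged distribution Q_hat (line 10)
```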
### Regret guarantee
We express our regret guarantees in terms of the degree of similarity between reward and constraint signals, which is inspired by the work of Zhang et al. (2019).
**Definition 1**.: _A distribution \(D_{2}\) is said to be \((\alpha,\mathfrak{d})\)-similar to a distribution \(D_{1}\) with respect to the tuple \((\Pi,\pi^{\star})\) if_
\[\mathbb{E}_{D_{2}} [r_{2}(\pi^{\star}(x),x)]-\mathbb{E}_{D_{2}}[r_{2}(\pi(x),x)]\] \[\geq\alpha\Big{(}\mathbb{E}_{D_{1}}[r_{1}(\pi^{\star}(x),x)]- \mathbb{E}_{D_{1}}[r_{1}(\pi(x),x)]\Big{)}\!-\!\mathfrak{d}\;.\]
In our setting we let \(\mathbb{E}_{D_{1}}[r_{1}(\pi(x),x)]=\mathbb{E}[r(\pi(x),x)]\) and \(\mathbb{E}_{D_{2}}[r_{2}(\pi(x),x)]=1-\mathbb{E}[\Delta(\pi(x),\bar{a}(x);x)]\), and use \(\pi^{\star}\) as the solution of the problem in (1). Definition 1 essentially measures how well the full-information component of the feedback, in the form of \(1-\Delta\), is aligned with the bandit part of the reward, given by \(\widehat{r}_{t}\). The smaller \(\mathfrak{d}\) is and the larger \(\alpha\) is, the better the two distributions are aligned, which in turn will result in regret guarantees closer to the full information setting, in that the dependence on \(K\) will be mild.
We can now state the main theorem for this section. Before stating the regret bound we define
\[V_{T_{0}}(\mu,v) =2\sqrt{2T_{0}(\mu^{2}K+(1-\mu)^{2}v^{2})\log(4|\Pi|T_{0})}\] \[\qquad+(\mu K+(1-\mu))\log(4|\Pi|T_{0})\;. \tag{6}\]
**Theorem 2**.: _Set in EFBO the parameter values \(S=\Omega(BT_{0})\) and \(B=T/T_{0}\). If the distribution over the constraints \(\Delta(\cdot,\bar{a}(x);x)\) is \((\alpha,\mathfrak{d})\)-similar to \(D_{b}\), the expected reward regret \(\mathbb{E}[\text{Reg}_{r}(\widehat{Q}_{\widehat{\mu}})]\) is bounded by_
\[O\!\left(\!\sqrt{\frac{K\log(T_{0}|\mathcal{M}|)}{T_{0}}}+\!\min_{\mu\in \mathcal{M}}\frac{\frac{2V_{T_{0}}(\mu,1)}{T_{0}}+(1-\mu)\mathfrak{d}}{\mu+ \alpha(1-\mu)}+\frac{T_{0}}{T}\!\right)\!.\]
_Further, the expected regret to the constraint is bounded as_
\[\mathbb{E}[\text{Reg}_{c}(\widehat{Q}_{\widehat{\mu}})]\leq\epsilon+O\left(\sqrt{ \frac{\log(T_{0}|\Pi|)}{T_{0}}}+\frac{T_{0}}{T}\right)\,.\]
Note that we can show that the above regret bounds also hold with high probability. In practice, we choose the class \(\mathcal{M}\) to be relatively small (constant or \(|\mathcal{M}|=O(\log(T))\)), so for the remainder of the discussion we treat \(\log(|\mathcal{M}|)\) as a lower order term.
We prove Theorem 2 in Appendix C. To interpret the result, we examine different regimes of distributional similarity.
**Minimax optimality.** Choosing \(T_{0}=\Theta(T^{2/3})\) above, the expected reward regret satisfies
\[T\mathbb{E}[\text{Reg}_{r}(\widehat{Q}_{\widehat{\mu}})]\leq O \Big{(}T^{2/3}\sqrt{K\log(T)}\] \[+\min_{\mu\in\mathcal{M}}\frac{T^{2/3}\sqrt{(\mu^{2}K+(1-\mu)^{2}) \log(|\Pi|T)}+T(1-\mu)\mathfrak{d}}{\mu+\alpha(1-\mu)}\Big{)},\]
while
\[T\mathbb{E}[\text{Reg}_{c}(\widehat{Q}_{\widehat{\mu}})]=O\big{(}T\epsilon+T^{2/3 }\sqrt{\log(T|\Pi|)}\big{)}\;.\]
In terms of the scaling with \(T\), this bound is minimax optimal due to the lower bound of Theorem 1. We note that this is in contrast with the suboptimality of explore-first in the unconstrained setting, and a consequence of the trade-off between constraint and reward exploration inherent in our framework. However, the relatively crude setting of \(T_{0}\) here does not recover the best bound using explore-first even in the unconstrained setting (in \(K\) and \(\ln|\Pi|\) scaling). For a finer grained understanding, we now make distributional similarity assumptions, under which we can make better choices of \(T_{0}\) as a function of the ideal \(\mu\) value, and obtain sharper bounds. We note that the inability to depend on the best \(\mu\) in hindsight for \(T_{0}\) is akin to the difficulty of choosing hyperparameters in model selection (Marinov and Zimmert, 2021; Zhu and Nowak, 2022).
**Well-aligned signals.** In this case, we assume \(\alpha=1\) and \(\mathfrak{d}=O(T^{-1/2})\). The RHS of Theorem 2 is then minimized for \(\mu=O(1/\sqrt{K})\), and \(T\mathbb{E}[\text{Reg}_{r}(\widehat{Q}_{\widehat{\mu}})]\) is at most
\[O(T\sqrt{K\log(T|\mathcal{M}|)/T_{0}}+T\sqrt{\log(T|\Pi|)/T_{0}}+T_{0})\;.\]
Choosing \(T_{0}=\Theta(T^{2/3}(K\log(T)\vee\log(|\Pi|T))^{1/3})\) optimally further implies
\[T\mathbb{E}[\text{Reg}_{r}(\widehat{Q}_{\widehat{\mu}})]=O\left(T^{2/3}(K\log (T)\wedge\log(|\Pi|T))^{1/3}\right)\;,\]
that is, we achieve a bound which decouples the bandit part of the regret, \(K\), from the policy class part \(\log(|\Pi|)\). This is analogous to the benefit of similarity in Zhang et al. (2019). The constraint violation regret admits the same bound.
**Mis-aligned signals.** On the other extreme, when \(\mathfrak{d}=\Omega(1)\), we take \(\mu=1\) and set \(T_{0}=T^{2/3}(K\log(T|\Pi|))^{1/3}\). This gives a bound consistent with the standard CB setting, that is
\[T\mathbb{E}[\text{Reg}_{r}(\widehat{Q}_{\widehat{\mu}})]\leq O(T^{2/3}\big{(} K\log(T|\Pi|))^{1/3})\;.\]
Finally, we address the size of \(\mathcal{M}\). As discussed, the favorable case is when \(\mathfrak{d}\approx 0\) and thus \(\mu=O(1/\sqrt{K})\). Hence it is sufficient to take
\[\mathcal{M}=\{1-1/2^{n},1/K+1/2^{n}:n\leq\log(T)\}\]
(see Lemma 3 in the Appendix C for details).
## 4 Improving Regret under Favorable Conditions
We now present a high-level algorithmic framework which maintains the worst-case statistical optimality of EFBO, while allowing the possibility of stronger results under favorable problem structures, such as a relationship between the user's decision to provide the supervision \(\bar{a}(x)\) and the quality of the chosen action. Since the algorithm is more complex, we first provide the high-level structure, before moving to concrete instantiations of some components later in the section. The algorithm is a version of a corralling algorithm (Agarwal et al., 2017) applied to an adaptation of the classical Exp4 algorithm (Auer et al., 2002). At any round \(t\), our adapted Exp4 incorporates an arbitrary constraint estimator \(\bar{\Delta}_{t}\) for \(\Delta(a,\bar{a}(x_{t});x_{t})\). First, the estimator is used as part of the reward signal, similarly to how the rewards are constructed in Algorithm 1. Second, the estimator is used to maintain approximately feasible policies \(\Pi_{t}\subseteq\Pi\), as a proxy for policies feasible for (1).
A formal description of the modified Exp4 algorithm can be found in Equation 11 in Appendix D. Since the Exp4 update only works for a fixed combination of \(\Delta_{t}\) and reward \(r_{t}\), we further use model selection over a \(\mu\) parameter used to blend rewards in a similar way as EFBO, through corralling the Exp4 algorithms, each corresponding to a single \(\mu\). Formally this is achieved by running a version of the Hedged FTRL corralling algorithm described in (Foster et al., 2020; Marinov and Zimmert, 2021). Pseudo-code for this algorithm is in Algorithm 2. The algorithm also includes an indicator \(Z_{t}\), as some (adaptively chosen) rounds might be needed to form the constraint estimator \(\bar{\Delta}_{t}\) in the subsequent instantiations. On these rounds with \(Z_{t}=1\), Exp4 does not update its internal state (lines 9-10). We set \(M=O(\log(T))\) and each base algorithm uses Equation 11 with \(\mu\in\{1-1/2^{n},1/K+1/2^{n}:n\leq\log(T)\}\), same as in Algorithm 1. The main regret bound can be found in Theorem 8 in Appendix D.
```
Input: \((Base_{m})_{m=1}^{M}\)
1: Initialize \(P_{1}\) to be uniform distribution over \((Base_{m})_{m=1}^{M}\) base algorithms.
2: Initialize constraint proxy \(\bar{\Delta}_{1}\), and base algorithms \((Base_{m})_{m=1}^{M}\).
3:for\(t=1,\ldots,T\)do
4: Receive context \(x_{t}\), compute set of feasible policies \(\Pi_{t}\subseteq\Pi_{t-1}\), sample \(Z_{t}\).
5:if\(Z_{t}=0\)then
6: Sample base algorithm \(m_{t}\sim P_{t}\) and play according to policy, \(\pi_{t}\), selected by \(Base_{m_{t}}\).
7: Observe reward \(r_{t}(\pi_{t}(x_{t});x_{t})\) and \(\bar{\Delta}_{t}(\cdot;x_{t})\).
8:else
9: Play revealing action \(a_{0}\), observe \(\Delta(\cdot,\bar{a}(x_{t});x_{t})\).
10: Update \(P_{t+1}\) using Hedged-FTRL (Marinov and Zimmert (2021) Algorithm 1).
11: Send feedback \(r_{t,m}=r_{t}\mathbb{1}(m_{t}=m)/P_{t,m}\) and \(\bar{\Delta}_{t}\) to the \(m\)-th base algorithm.
12: Base algorithms update their state as per Eq. (11).
```
**Algorithm 2** Corralling Exp4 with constraints
Next, we illustrate two instantiations for \(\bar{\Delta}_{t}\) and \(\Pi_{t}\), along with concrete theoretical guarantees. All results of this section are derived from a general result proved in Theorem 8
in Appendix D. The first is based on the assumption that the supervision from the user is triggered by the choice of a significantly suboptimal action under the CB rewards, so that the lack of supervision is an implicit signal about the chosen action being fairly good in terms of reward. The second approach is based on active learning to adaptively learn the mapping \(x\to\bar{a}(x)\) and use this mapping to induce the constraints on all points. In both settings we make the following mild assumption on \(\Delta\).
**Assumption 2**.: \(\Delta\) _is symmetric for any \(x\in\mathcal{X}\), that is \(\Delta(a,a^{\prime};x)=\Delta(a^{\prime},a;x)\) and further it satisfies a triangle inequality, that is \(\Delta(a,b;x)\leq\Delta(a,a^{\prime};x)+\Delta(a^{\prime},b;x)\)._
For instance, the assumption holds if \(\Delta(a,a^{\prime};x)\) is a distance between \(a\) and \(a^{\prime}\) in some (\(x\)-dependent) embedding.
### Suboptimality-triggered supervision
We now make the following assumption on when the supervised feedback \(\bar{a}(x)\) is received.
**Assumption 3** (Suboptimality-triggered supervision).: _At any round \(t\), if the user does not reveal \(\bar{a}(x_{t})\) (i.e. \(\xi_{t}=0\)), then it holds that \(\Delta(a_{t},\bar{a}(x_{t});x_{t})\leq\nu\)._
This assumption is natural when the user behaves in a nonmalicious way. Indeed, we expect that if the user accepts the learner's recommendation, the recommendation can not be much worse than what the user would have specified themselves. Using the above assumptions we can construct the following simple constraint estimator.
**A biased constraint estimator.** Let us define the following estimator for the true constraint:
\[\widehat{\Delta}_{t}(\pi(x_{t});x_{t}) =(1-\xi_{t})\Delta(\pi(x_{t}),a_{t};x_{t})\] \[\qquad+\xi_{t}\Delta(\pi(x_{t}),\bar{a}(x_{t});x_{t})\;,\]
where we recall that \(\xi_{t}=1\) if the user reveals \(\bar{a}(x_{t})\). Clearly \(|\widehat{\Delta}_{t}(\pi(x_{t});x_{t})-\Delta(\pi(x_{t}),\bar{a}(x_{t});x_{t })|\leq\nu,\forall\pi\in\Pi\) under Assumption 3, that is \(\widehat{\Delta}_{t}\) is a \(\nu\)-biased estimator for \(\Delta\). Furthermore, it has a variance bounded by \(1\), since \(0\leq\Delta(a,a^{\prime};x)\leq 1\). Consequently, we can use Lemma 4 in Appendix D to construct \(\Pi_{t}\) as follows. Let \(r_{t}=2\nu+4\sqrt{2\frac{\log(T|\Pi|/\delta)}{t}}\), and set \(\Pi_{1}=\Pi\),
\[\Pi_{t+1}=\Bigg{\{} \pi\in\Pi_{t}:\frac{1}{t}\sum_{s=1}^{t}\widehat{\Delta}_{s}( \pi(x_{s});x_{s}) \tag{7}\] \[\leq\min_{\pi\in\Pi_{t}}\frac{1}{t}\sum_{s=1}^{t}\widehat{\Delta }_{s}(\pi(x_{s});x_{s})+\epsilon+r_{t}\Bigg{\}}.\]
This construction ensures that all policies in \(\Pi_{t}\) are only \(O(r_{t})\)-suboptimal to the constraint. We immediately obtain the following corollary of Theorem 8. Let
\[\phi(\mu,v_{m},T,\mathfrak{d})= \frac{(\mu^{2}K+(1-\mu)^{2}v_{m}^{2})\sqrt{T\log(|\Pi|)\log(T)}}{ \mu+\alpha(1-\mu)}\] \[\qquad+\frac{T(1-\mu)\mathfrak{d}}{\mu+\alpha(1-\mu)}\;.\]
**Theorem 3**.: _Assume that the distribution over constraints \(\Delta(\cdot,\bar{a}(x);x)\) is \((\alpha,\mathfrak{d})\)-similar to the distribution over rewards \(r(\cdot,x)\) with respect to \((\Pi,\pi^{*})\). Algorithm 2 invoked with \(Z_{t}\equiv 0,\,(\Pi_{t})_{t\in[T]}\) as in Eq. 7 and \(\bar{\Delta}_{t}=\widehat{\Delta}_{t}\) satisfies_
\[\mathbb{E}[\text{Reg}_{r}(\mathcal{A},T)]\leq\min_{\mu\in[0,1]} \phi(\mu,1,T,\mathfrak{d}+\nu)\;,\text{and}\] \[\frac{\mathbb{E}[\text{Reg}_{c}(\mathcal{A},T)]}{T}\leq\epsilon+4 \nu+8\sqrt{2\frac{\log(T|\Pi|)}{T}}\;.\]
We note that Theorem 3 does not require Assumption 1.
**Better bounds for small \(\nu\).** When \(\nu=O(1/\sqrt{T})\), Theorem 3 yields an \(O(\sqrt{T})\) bound for both rewards and constraints. However, the regret to the constraint can be as large as \(\Omega(\nu T)\) in the worst case, due to the bias in \(\widehat{\Delta}\). We can further improve the robustness of this estimator using a doubly robust approach, which we describe next.
**Doubly robust estimator.** Consider choosing the revealing action \(a_{0}\) with probability \(\gamma_{t}\) at round \(t\) (i.e., \(Z_{t}=1\) with probability \(\gamma_{t}\)). To obtain a better bias-variance trade-off than the constraint estimator above, we consider a doubly-robust approach (Robins et al., 1994; Dudik et al., 2014):
\[\bar{\Delta}_{t}(a;x_{t})=\widehat{\Delta}_{t}(a;x_{t})+Z_{t}\frac{(\Delta(a, \bar{a}(x_{t});x_{t})-\widehat{\Delta}_{t}(a;x_{t}))}{\gamma_{t}}.\]
We note the distinction between \(Z_{t}\) and \(\xi_{t}\) here. \(\xi_{t}\) is 1 for all rounds where \(\bar{a}(x_{t})\) is observed, irrespective of whether the chosen action was \(a_{0}\) or some other action, while \(Z_{t}=1\) only on the rounds where we choose \(a_{0}\) intentionally, to avoid bias in the user's revelation of \(\bar{a}(x_{t})\) in response to the other actions. Due to this, the doubly robust estimator is unbiased and has variance bounded by \(2+2\nu^{2}/\gamma_{t}\). Let
\[U_{t}(\delta,\nu)=4\sqrt{\frac{(1\lor\nu T^{1/4})\log(T|\Pi|/\delta)}{t}}+4\frac{(T^{1/4})\log(T|\Pi|/\delta)}{t}\]
(see Lemma 6 in Appendix E). In a similar way to Equation 7 we can construct the following nearly feasible policy sets,
\[\Pi_{t+1}=\Bigg{\{} \pi\in\Pi_{t}:\frac{1}{t}\sum_{s=1}^{t}\bar{\Delta}_{s}(\pi(x_{s});x _{s}) \tag{8}\] \[\leq \min_{\pi\in\Pi_{t}}\frac{1}{t}\sum_{s=1}^{t}\bar{\Delta}_{s}(\pi( x_{s});x_{s})+\epsilon+4U_{t}(\delta,\nu)\Bigg{\}}\;.\]
Setting \(\gamma_{t}=\frac{\nu}{T^{1/4}}\) allows us to show the following result.
**Theorem 4**.: _Assume that the distribution over constraints \(\Delta(\cdot,\bar{a}(x);x)\) is \((\alpha,\mathfrak{d})\)-similar to the distribution over rewards \(r(\cdot,x)\) with respect to \((\Pi,\pi^{*})\). Algorithm 2 invoked with \(Z_{t}=Ber(\gamma_{t})\), \((\Pi_{t})_{t\in[T]}\) defined in Eq. 8 satisfies_
\[\mathbb{E}[\text{Reg}_{r}(\mathcal{A},T)]=O(\min_{\mu\in[0,1]}\phi(\mu,1\lor\nu T^{1/4},T,\mathfrak{d}))\;,\text{ and}\]
\[\frac{\mathbb{E}[\text{Reg}_{c}(\mathcal{A},T)]}{T}\leq\epsilon+O\left(\sqrt{ \frac{\nu}{T^{3/4}}\log(T|\Pi|)}+\frac{\log^{2}(T|\Pi|)}{T^{3/4}}\right)\.\]
**Better bounds for small \(\nu\).** Theorem 4 implies that as long as \(\nu=O(1/T^{1/4})\) the instance of Algorithm 2 will incur only \(O(\sqrt{T})\) regret (ignoring other multiplicative factors) to both the reward and constraint. This improves upon Theorem 3 by expanding the range of \(\nu\) for the improved rate, at the cost of requiring Assumption 1. As with Theorems 2 and 4, we retain the ability to leverage distributional similarity in rewards and constraints.
**Robustness to large \(\nu\).** When \(\nu\) becomes too large, \(\nu=\omega(1/T^{7/24})\), the regret bound in Theorem 4 becomes asymptotically worse compared to that of Theorem 2. This is because in this setting of \(\nu\), \(\gamma_{t}=\omega(1/T^{1/3})\) and the algorithm incurs large regret due to sampling \(a_{0}\) too often. To correct this minor problem, we can additionally enforce \(Z_{t}=0\) for any \(t\geq t_{\max}\), where \(t_{\max}\) is the smallest round at which \(\sum_{t=1}^{t_{\max}}Z_{t}\geq\Omega(T^{2/3})\). It is possible to show that in this case \(\frac{1}{t}\sum_{t=1}^{t_{\max}}\tilde{\Delta}_{t}\) will have similar statistical properties to the estimator of \(\Delta\) in Section 3. In particular this modification yields a regret bound (in terms of \(T\)) for Algorithm 2 of \(O(T^{2/3})\) both for the reward and constraint, while retaining the \(O(\sqrt{T})\) improvement for small \(\nu\).
Note that for both the biased estimator and the doubly-robust unbiased estimator we require knowledge of \(\nu\) to correctly instantiate \(\bar{\Delta}_{t}\) and construct \(\Pi_{t}\). Making these algorithms adaptive to \(\nu\) is an important direction for future research. Our final approach does not require such knowledge of hyper-parameters and is inspired by the active-learning literature.
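For concreteness, the following is a minimal single-round sketch of the two constraint estimators above, assuming \(\Delta\) is available as a callable; the function and argument names are illustrative and not taken from the paper.

```python
# Minimal single-round sketch of the two constraint estimators of Section 4.1,
# assuming Delta(a, a_prime, x) is available as a callable.  All names are
# illustrative; a_pi is the action of the policy being evaluated, a_t the action
# that was actually played, and a_bar the user's ground-truth action (if known).

def biased_estimate(Delta, x_t, a_pi, a_t, a_bar, revealed):
    """nu-biased estimator: fall back on the played action when a_bar is hidden
    (Assumption 3 bounds the resulting bias by nu)."""
    target = a_bar if revealed else a_t
    return Delta(a_pi, target, x_t)

def doubly_robust_estimate(Delta, x_t, a_pi, a_t, a_bar, revealed, z_t, gamma_t):
    """Unbiased doubly robust estimator using deliberate revealing rounds Z_t,
    where a_0 is played with probability gamma_t."""
    base = biased_estimate(Delta, x_t, a_pi, a_t, a_bar, revealed)
    if z_t:  # the revealing action a_0 was played on purpose, so a_bar is observed
        base = base + (Delta(a_pi, a_bar, x_t) - base) / gamma_t
    return base
```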
### An active learning approach
Now we consider a strategy for constraint estimation, where we use active learning to estimate \(x\to\bar{a}(x)\) using policies in \(\Pi\). The resulting optimization problem, however, is slightly different and the guarantees we get are not directly comparable to Theorems 3 and 4. We first define the query rule and sets \(\Pi_{t}\). Set \(\Pi_{1}=\Pi\) and \(r_{t}=4\sqrt{\frac{2\log(|\Pi|/\delta)}{t}}\), and \(\mathcal{S}(\pi,t)=\sum_{s=1}^{t}Z_{s}\Delta(\pi(x_{s}),\bar{a}(x_{s});x_{s})\). Define \(\widehat{\pi}_{t}=\operatorname*{argmin}_{\pi\in\Pi_{t}}\mathcal{S}(\pi,t)\) and
\[\Pi_{t+1} =\big{\{}\pi\in\Pi_{t}:\mathcal{S}(\pi,t)\leq\mathcal{S}(\widehat {\pi}_{t},t)+(2\epsilon+3r_{t})t\big{\}}\] \[Z_{t+1} =\mathbb{1}\Big{(}\exists\pi,\pi^{\prime}\in\Pi_{t+1}:\Delta(\pi( x_{t+1}),\pi^{\prime}(x_{t+1});x_{t+1}))\] \[\geq\epsilon+r_{t+1}/2\Big{)}. \tag{9}\]
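For concreteness, the elimination step and the disagreement-based query rule of Eq. (9) can be sketched as follows for a small finite policy class; a single radius \(r_{t}\) is reused for both steps, and all names are illustrative.

```python
# Minimal sketch of the elimination step and disagreement-based query rule of
# Eq. (9), assuming a small finite policy class encoded as an action table of
# shape (n_policies, n_contexts).  `scores[p]` holds S(pi_p, t), the cumulative
# constraint cost of policy p over the queried rounds; all names are illustrative.

def eliminate_and_query(active, scores, t, eps, r_t, Delta, x_next, actions):
    """Return the surviving policy indices and whether to query a_bar on x_next."""
    best = min(scores[p] for p in active)
    # keep only policies whose queried constraint cost is close to the best one
    survivors = [p for p in active if scores[p] <= best + (2 * eps + 3 * r_t) * t]
    # query (play a_0) iff two surviving policies still disagree strongly on x_next
    query = any(
        Delta(actions[p][x_next], actions[q][x_next], x_next) >= eps + r_t / 2
        for p in survivors for q in survivors
    )
    return survivors, query
```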
The definition of \(\Pi_{t}\) does not differ too much from the one using the biased estimator of \(\Delta\) in the previous section; however, the query rule has now changed from uniform exploration to an active learning one. The rule states that the revealing action is played only when there exist at least two policies which have large disagreement with respect to \(\Delta\) and have not yet been eliminated as infeasible. Under a Massart-like noise condition on the constraint (Massart & Nedelec, 2006) it is possible to show that \(Z_{t}=1\) for only \(\text{polylog}(T)\) rounds. Let \(\bar{\pi}=\operatorname*{argmin}_{\pi\in\Pi}\mathbb{E}[\Delta(\pi(x),\bar{a}(x);x)]\). We state the desired noise condition below.
**Assumption 4** (Low noise in constraints).: _The constraint function \(\Delta\) satisfies a low noise condition with margin \(\tau\) if for all \(x\) and \(a\neq\bar{\pi}(x)\), we have \(\Delta(a,\bar{\pi}(x);x)\geq\epsilon+\tau\)._
The assumption is a natural modification of Massart's low noise condition to the problem of minimizing \(\Delta(a,\cdot,\cdot)\) w.r.t. \(a\), and similar assumptions have been used in active learning for cost-sensitive classification in Krishnamurthy et al. (2017). Intuitively, the assumption posits that every suboptimal action in terms of constraints has a lower bounded gap to \(\bar{\pi}\)'s action. In Appendix F, we state a more general condition under which our results hold, but give the simpler condition here for ease of interpretability.
**Theorem 5**.: _Assume that the distribution over \(\Delta(\cdot,\bar{\pi}(x);x)\) is \((\alpha,\mathfrak{d})\)-similar to the reward distribution. Under Assumption 4, the regret of Algorithm 2 invoked with \(Z_{t}\) and \(\Pi_{t}\) defined as in Equation 9 satisfies_
\[\frac{\mathbb{E}[\text{Reg}_{r}(\mathcal{A},T)]}{T}\leq\frac{\log(T|\Pi|)}{T \tau^{2}}+O\Big{(}\min_{\mu\in[0,1]}\phi\big{(}\mu,1,T,\mathfrak{d}+\epsilon \big{)}\Big{)}\,\]
\[\frac{\mathbb{E}[\text{Reg}_{c}(\mathcal{A},T)]}{T}\leq 3\epsilon+O(\sqrt{\log(T|\Pi|)/T} )\.\]
We note that the constraint violation part of the regret has a constant multiplicative factor in front of \(\epsilon\). This is due to the fact that the algorithm does not try to directly approximate \(\Delta\). Further, note that the \((\alpha,\mathfrak{d})\)-similarity is stated in terms of \(\Delta(\cdot,\bar{\pi}(x);x)\) rather than \(\Delta(\cdot,\bar{a}(x);x)\), which is again due to the same reason. In fact, the active learning based algorithm might never have an accurate estimator of \(\Delta\).
In terms of rates, we incur an \(O(\sqrt{T})\) regret in both rewards and constraints modulo the caveat above, noting that the constraint threshold \(\epsilon\) also appears in the distributional bias term in the reward regret. As a result, the guarantees here are generally incomparable with the previous results, but nevertheless useful for leveraging a problem structure complementary to our previous conditions.
Finally, we note that the noise condition in Assumption 4 can be replaced by a milder Tsybakov-like noise condition. More details and proofs of Theorem 5 can be found in Appendix F. Note that Theorem 5 does not have meaningful guarantees if Assumption 4 fails to hold; however, a modification similar to the one discussed after Theorem 4 can be implemented to again guarantee an \(O(T^{2/3})\) regret bound for both the reward and constraint.
## 5 Discussion
This paper initiates a theoretical investigation of CB problems where the learner observes extra supervised signals produced only on a subset of contexts/time steps which are not under the agent's control ("user triggered"), a practically prevalent scenario. The key challenge we overcome is the biased nature of these observations. We believe that the constrained learning and reward-blending framework used here is a flexible way to capture potentially biased signals which arise in practical deployment of CB algorithms.
Looking ahead, there are important questions of robustness to violations of our assumptions, such as Assumption 3, which are not addressed here. Developing algorithms to leverage such favorable conditions while maintaining computational efficiency is another challenge. More broadly, it would be interesting to validate the assumptions developed here, or discover alternatives, through practical studies of user behavior in the motivating examples underlying our work. Addressing such questions is paramount to improving the sample-efficiency of CB algorithms in practice, and to making them applicable in broader settings.
|
2305.06883 | Cross-channel Budget Coordination for Online Advertising System | In online advertising (Ad), advertisers are always eager to know how to
globally optimize their budget allocation strategies across different channels
for more conversions such as orders, payments, etc. Ignoring competition among
different advertisers causes objective inconsistency, that is, a single
advertiser locally optimizes the conversions only based on its own historical
statistics, which is far behind the global conversions maximization. In this
paper, we present a cross-channel Advertising Coordinated budget allocation
framework (AdCob) to globally optimize the budget allocation strategy for
overall conversions maximization. We are the first to provide deep insight into
modeling the competition among different advertisers in cross-channel budget
allocation problems. The proposed iterative algorithm combined with entropy
constraint is fast to converge and easy to implement in large-scale online Ad
systems. Both results from offline experiments and online A/B budget bucketing
experiments demonstrate the effectiveness of AdCob. | Guangyuan Shen, Shenjie Sun, Dehong Gao, Shaolei Li, Libin Yang, Yongping Shi, Wei Ning | 2023-05-09T08:57:33Z | http://arxiv.org/abs/2305.06883v1 | # Cross-channel Budget Coordination for Online Advertising System
###### Abstract.
In online advertising (Ad), advertisers are always eager to know how to globally optimize their budget allocation strategies across different channels for more conversions such as orders, payments, etc. Ignoring competition among different advertisers causes objective inconsistency, that is, a single advertiser locally optimizes the conversions only based on its own historical statistics, which is far behind the global conversions maximization. In this paper, we present a cross-channel **A**dvertising **C**oordinated **b**udget allocation framework (**A**d**c**ob) to globally optimize the budget allocation strategy for overall conversions maximization. We are the first to provide deep insight into modeling the competition among different advertisers in cross-channel budget allocation problems. The proposed iterative algorithm combined with entropy constraint is fast to converge and easy to implement in large-scale online Ad systems. Both results from offline experiments and online A/B budget bucketing experiments demonstrate the effectiveness of AdCob.
Online Advertising, Budget, Cross-channel Budget Management
* We employ the iterative algorithm with entropy constraint, which accelerates the training convergence and ensures large-scale implementation. Thanks to its simple framework, AdCob can be easily deployed to other Ad systems with cross-channel budget management needs.
* We have deployed the AdCob framework in an online advertising system. The results from the offline experiments and the online A/B budget-bucketing experiments demonstrate the effectiveness of our proposed approach.
## 2. Related Work
A recent strand of literature has considered different aspects of budget management in cross-channel Ad auctions. The main difference to our work is that these works focus on a single advertiser, which is different from our global advertiser coordination.
Earlier literature (Zalier, 2010) introduces the Multiple-Choice Knapsack (MCK) (Krishnan et al., 2015; Krishnan et al., 2015) model to solve the cross-channel budget allocation of a single advertiser. Some researchers take traffic fluctuations over time into consideration and cast the time-aware allocation as a reinforcement learning-based MCK problem (Krishnan et al., 2015; Krishnan et al., 2015). On this basis, the interactions among sub-campaigns3 are modeled in the allocation model (Krishnan et al., 2015). All these methods ignore the interactions among a tremendous number of advertisers, that is, they work under the assumption that all other advertisers keep their strategies static. Such methods are not suitable for real online Ad systems, where millions of advertisers bid to show their ads.
Footnote 3: The number of sub-campaigns is always less than 10.
Besides, pacing methods are another line of budget management work, focusing on how to allocate the budget over the time blocks of a channel (Krishnan et al., 2015) or how to adjust the budget spending rate according to the budget usage (Bahdan et al., 2016). Pacing methods can also be regarded as cross-channel budget allocation, since they distribute budgets across different time segments (channels).
## 3. Method
### Optimal Transport Problem
Optimal transport (OT) is the problem of moving goods from one set of warehouses to another set of destinations while minimizing a certain cost function (Krishnan et al., 2015) (please refer to Eq. 1). For example, suppose that we have \(N\) warehouses (located at \(\{x_{i}\}_{i=1}^{N}\)), that the number of goods in each warehouse is \(\{G_{i}\}_{i=1}^{N}\), and that the goods need to be moved to \(M\) different places (located at \(\{y_{j}\}_{j=1}^{M}\)). The quantity of goods demanded at each destination is \(\{D_{j}\}_{j=1}^{M}\), and the unit transportation costs between the \(i^{th}\) warehouse and the \(j^{th}\) destination form the cost matrix \(\mathbf{C}\), \(\{c(x_{i},y_{j})\}_{i,j=1}^{N,M}\). Then, the OT problem can be formulated as follows,
\[L= \underset{\Gamma}{\arg\min}\sum_{i=1}^{N}\sum_{j=1}^{M}\Gamma_{ij}\,c\left(x_{i},y_{j}\right)\] \[s.t.\ \forall i\in\{1,..,N\}\ \sum_{j}\Gamma_{ij}=G_{i}\] \[\forall j\in\{1,..,M\}\ \sum_{i}\Gamma_{ij}=D_{j} \tag{1}\]
where \(\Gamma\) is the transport matrix to optimize, and \(\Gamma_{ij}\) denotes the number of goods sent from the \(i^{th}\) warehouse to the \(j^{th}\) destination. Moreover, we must have \(\sum_{i}G_{i}=\sum_{j}D_{j}\), since the total quantity of goods does not change. If we are in the **unbalanced** situation where \(\sum_{i}G_{i}\leq\sum_{j}D_{j}\), we can add a virtual warehouse (located at \(x_{N+1}\)) with \(|\sum_{i}G_{i}-\sum_{j}D_{j}|\) goods stored in it. When we set the cost between the \((N+1)^{th}\) warehouse and the \(j^{th}\) destination equal to 0, i.e., \(\forall j,\ c(x_{N+1},y_{j})=0\), we convert the unbalanced OT problem into a balanced one.
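As a minimal illustration of this padding trick (not code from the paper; all names are ours), the virtual warehouse can be added as follows:

```python
import numpy as np

# Minimal sketch of the virtual-warehouse trick described above: when total
# supply is below total demand, append a zero-cost virtual warehouse holding the
# gap so that the problem becomes a balanced OT instance.  Names are illustrative.

def pad_to_balanced(G, D, C):
    """G: supplies (N,), D: demands (M,), C: unit costs (N, M)."""
    gap = D.sum() - G.sum()
    assert gap >= 0, "expects total demand to be at least total supply"
    G_pad = np.append(G, gap)                          # virtual warehouse holds the gap
    C_pad = np.vstack([C, np.zeros((1, C.shape[1]))])  # zero transport cost from it
    return G_pad, D, C_pad
```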
### Budget Allocation via Optimal Transport
In an online Ad system, advertisers are allowed to create Ad campaigns, and the budget allocation in this paper refers to the budget allocation of each campaign. We reformulate the global cross-channel budget allocation as an unbalanced OT problem: campaigns' budgets are viewed as the goods in warehouses, while the channel cost upper limits are viewed as the demanded goods at each destination.

Figure 1. The framework of cross-channel budget allocation for online advertising. (a) and (b) show the cross-channel budget allocation for a single advertiser (Krishnan et al., 2015; Krishnan et al., 2015; Krishnan et al., 2015), while (c) shows the strategy that considers advertisers' interactions. Based on the cost matrix (g), we transfer advertisers' budgets to different channels with minimal conversion cost by optimal transport.
Suppose that we have \(N\) Ad campaigns with budgets \(\mathbf{b}:=\{b_{i}\}_{i=1}^{N}\), and \(M\) channels with different daily cost upper limits \(\mathbf{h}:=\{h_{j}\}_{j=1}^{M}\) (the cost upper limit estimation refers to Sec. 4.1). We try to maximize the number of conversions by optimizing the budget allocation matrix \(\mathbf{P}:=\{\mathbf{P}_{i,j}\}_{i=1,j=1}^{N,M}\), where \(\mathbf{P}_{i,j}\) denotes the budget that the \(i^{th}\) campaign distributes to the \(j^{th}\) channel. When the total budget is fixed, the objective becomes minimizing the global CPC (Cost Per Conversion), i.e., minimizing the linear combination of the different \(\text{CPC}_{i,j}\) with weights \(\mathbf{P}_{i,j}\). Here \(\mathbf{C}:=\{\text{CPC}_{i,j}\}_{i=1,j=1}^{N,M}\) denotes the CPC of the \(i^{th}\) campaign on the \(j^{th}\) channel (please refer to Eq. 2). Owing to the sparse nature of Ad conversion actions, calculating the cost matrix is difficult (for more details, please refer to Sec. 4.2). Moreover, in practice, the sum of the cost upper limits of all channels is always greater than the sum of all the budgets. We can introduce a virtual campaign with virtual budget \(b_{N+1}:=|\sum_{i}b_{i}-\sum_{j}h_{j}|\) to bridge the budget gap and simply set the CPC of this campaign on each channel to 0, i.e., \(\forall\ j,\ \text{C}_{N+1,j}=\text{CPC}_{N+1,j}=0\). The formal problem formulation is as follows:
\[L= \text{arg}\min_{\mathbf{P}}\sum_{i=1}^{N}\sum_{j=1}^{M}\mathbf{P }_{i,j}\text{C}_{i,j}\] \[s.t. \forall i\in\{1,..,N+1\}\ \sum_{j}\mathbf{P}_{i,j}=b_{i}\] \[\forall j\in\{1,..M\}\ \sum_{i}\mathbf{P}_{i,j}=h_{j} \tag{2}\]
Obviously, this is a large-scale linear programming problem with a tremendous number of constraints. The complexity of the greedy solution is \(\mathcal{O}(N^{3}\log N)\) (Kumar et al., 2017), so the iteration speed is too slow to meet the model iteration requirements of large-scale Ad scenarios.
### Iterative Solution with Entropy Constraint
The problem in Eq. 2 is a special linear programming problem, and advanced linear programming algorithms can be used to solve it. However, when faced with such large-scale problems in online Ad, even the advanced algorithm based on the interior point method (Beng et al., 2017) still has great limitations (Kumar et al., 2017). Some researchers observed that the problem above can be solved in a practical and scalable way by adding an entropy penalty and using the matrix-scaling Sinkhorn algorithm (Beng et al., 2017). The new objective of the problem is
\[L=\text{arg}\min_{\mathbf{P}}\sum_{i}\sum_{j}\mathbf{P}_{i,j}\text{C}_{i,j}- \epsilon\mathbf{H}(\mathbf{P}), \tag{3}\]
where entropy \(\mathbf{H}(\mathbf{P}):=-\sum_{i}\sum_{j}\mathbf{P}_{i,j}(\log(\mathbf{P}_{i,j})-1)\), and \(\epsilon\) is the coefficient of entropy regularization. Since the objective in Eq. 3 is an \(\epsilon\)-strongly convex function, it has a unique optimal solution (Kumar et al., 2017). Introducing two dual variables \(\mathbf{f}\in\mathbb{R}^{N+1}\), \(\mathbf{g}\in\mathbb{R}^{M}\) for each marginal constraint, the Lagrangian of Eq. 3 reads
\[\mathcal{E}(\mathbf{P},\mathbf{f},\mathbf{g})=\langle\mathbf{P},\mathbf{C}\rangle-\epsilon\mathbf{H}(\mathbf{P})-\langle\mathbf{f},\mathbf{P}\mathbf{1}_{M}-\mathbf{b}\rangle-\left\langle\mathbf{g},\mathbf{P}^{\text{T}}\mathbf{1}_{N+1}-\mathbf{h}\right\rangle, \tag{4}\]
where \(\langle\cdot,\cdot\rangle\) denotes the Frobenius dot-product. First-order conditions yield
\[\frac{\partial\mathcal{E}(\mathbf{P},\mathbf{f},\mathbf{g})}{\partial\mathbf{ P}_{i,j}}=\text{C}_{i,j}+\epsilon\log\left(\mathbf{P}_{i,j}\right)-\mathbf{f}_{i}- \mathbf{g}_{j}=0, \tag{5}\]
which yields, for an optimal coupling \(\mathbf{P}\) of the regularized problem, the expression \(\mathbf{P}_{i,j}=e^{\mathbf{f}_{i}/\epsilon}e^{-\text{C}_{i,j}/\epsilon}e^{\mathbf{g}_{j}/\epsilon}\). We iterate over \(\{f_{i}\}\) and \(\{g_{j}\}\) following the equations (a, b) in Algorithm 1 until convergence. The \(\{f_{i}\}\) and \(\{g_{j}\}\) sequences essentially encode how the budget allocation matrix \(\mathbf{P}\) satisfies the bilateral constraints: by alternately updating \(\{f_{i}\}\) and \(\{g_{j}\}\), we alternately enforce the campaigns' budget constraints and the channels' cost upper limit constraints, respectively.
The coefficient \(\epsilon\) controls the strength of the regularization. As \(\epsilon\) goes to zero, more accurate solutions are obtained, but each campaign's budget concentrates on a few channels, which brings numerical instability. We present the auction results under different \(\epsilon\) values in the experimental section; please refer to Sec. 5.1.4.
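For concreteness, a minimal numpy sketch of the resulting alternating (Sinkhorn-style) scaling updates is given below. It assumes the virtual campaign has already been appended so that \(\sum_{i}b_{i}=\sum_{j}h_{j}\), uses a fixed iteration count instead of a convergence check, and is not log-domain stabilized, so very small \(\epsilon\) may cause numerical issues; all names are illustrative rather than the production implementation.

```python
import numpy as np

# Minimal sketch of the entropy-regularized solver (Eqs. 3-5): Sinkhorn-style
# alternating updates of the dual scalings until both marginal constraints
# (campaign budgets b and channel limits h) are approximately met.

def sinkhorn_allocation(C, b, h, eps, n_iter=1000):
    """C: (N+1, M) cost matrix (incl. virtual campaign), b: budgets, h: channel limits."""
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(b)                # u_i = exp(f_i / eps)
    v = np.ones_like(h)                # v_j = exp(g_j / eps)
    for _ in range(n_iter):
        u = b / (K @ v)                # enforce row sums:    sum_j P_ij = b_i
        v = h / (K.T @ u)              # enforce column sums: sum_i P_ij = h_j
    return u[:, None] * K * v[None, :] # P_ij = u_i * K_ij * v_j

# Example: 2 campaigns plus 1 virtual campaign, 2 channels.
C = np.array([[1.0, 3.0], [2.0, 1.0], [0.0, 0.0]])
b = np.array([4.0, 4.0, 2.0])          # budgets; the virtual budget fills the gap
h = np.array([5.0, 5.0])               # channel cost upper limits
P = sinkhorn_allocation(C, b, h, eps=0.1)
```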
## 4. Implementation Details
### Estimated Cost Upper Limit
We use an offline simulated auction system (Kumar et al., 2017) to estimate the cost upper limit of each channel. By removing the budget constraints for all Ad requests, all matching campaigns are recalled as impression candidates, and bidding and the uGSP auction (Beng et al., 2017; Beng et al., 2017; Beng et al., 2017) are executed in order. The average cost of each channel over the past 30 days is counted as the estimated cost upper limit of the channel.
### Estimate Cost Matrix
For large-scale deployment, we use the 30-day CPC of a campaign on a channel as the cost \(\text{C}_{i,j}\) to construct the cost matrix, i.e., \(\text{C}_{i,j}:=\text{CPC}_{i,j}=\frac{\text{cost (30 days)}_{i,j}}{\text{total conversions (30 days)}_{i,j}}\). In practice, we face two challenges, whose corresponding solutions are given below (see the sketch after this list):
* **Conversion actions are inherently sparse**, i.e., there are many campaign-channel pairs possessing no conversion action. We use a combination of the estimated conversion rate and the real conversions to count the number of conversions, so as to alleviate the sparsity of conversions.
* **Partial cold-start campaigns**, i.e., some Ad campaigns have no cost on some channels. We use the average cost per conversion of the Ad campaign itself as its cost matrix entries on such channels.
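The sketch below illustrates one way to combine these two fixes when building the cost matrix; the inputs and the particular smoothing are assumptions for illustration, not the production implementation.

```python
import numpy as np

# Minimal sketch of building the campaign-by-channel cost matrix C described
# above: 30-day CPC, with expected conversions used to soften conversion
# sparsity and the campaign-level average CPC used as a fallback for channels
# with no spend.  All names are illustrative.

def build_cost_matrix(cost_30d, conv_30d, exp_conv_30d):
    """All inputs are (n_campaigns, n_channels) arrays of 30-day statistics."""
    conv = conv_30d + exp_conv_30d                       # mix real and expected conversions
    with np.errstate(divide="ignore", invalid="ignore"):
        cpc = np.where(conv > 0, cost_30d / conv, np.nan)
    # cold-start fallback: campaign-level average CPC for channels with no data
    campaign_cpc = cost_30d.sum(axis=1) / np.maximum(conv.sum(axis=1), 1e-9)
    cpc = np.where(np.isnan(cpc), campaign_cpc[:, None], cpc)
    return cpc
```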
## 5. Experiment
### Offline Setting.
We experimentally evaluate our cross-channel budget allocation method (AdCob) in an offline setting using a simulated auction
system (Kumar et al., 2017) and real-world datasets collected from our real online advertising system without any sampling.
#### 5.1.1. Baselines
Apart from the plainest first-come-first-served (FCFS) method, two other relevant budget allocation methods, termed IDL (Dwork et al., 2017) and unified budget allocation (Kumar et al., 2018), have been included in our experiments. All these prior methods focus on a single advertiser, so we directly apply them to 40% or 80% of the advertisers, regardless of whether these advertisers will generate unreasonable competition that lowers platform revenue. In addition, we make small adaptations to them, for example:
* IDIL (Dwork et al., 2017): we ignore the strategy of allocating different budgets on different days since we focus on cross-channel budget allocation in this paper.
* Unified budget allocation (Kumar et al., 2018): We use a linear model to approximate the ROI curve.
#### 5.1.2. Data set
We evaluate our method with an advertising data set collected from a real-world Internet e-commerce company, where all advertisers compete for more conversions, such as purchases and inquiries. The real dataset covers nearly two hundred thousand campaigns and contains tens of millions of records with the following auction information:
* Predicted Click Through Rate (pCTR), predicted ConVersion Rate (pCVR) that describe user preferences for different items, predicted by Deep Interest Network (Kumar et al., 2018).
* Real bid price, generated by OCCP (Kumar et al., 2018) bidding strategy based on pCTR, pCVR, etc.
* Click action, click or not.
* Conversion action, conversion or not.
* Advertiser overall budget and remaining budget.
Each auction includes 5, 10, or 20 ad slots and 500 or 750 advertisers bidding for impressions. Different channels have different numbers of ad slots. In contrast, previous work only considers 5 slots and 10 advertisers bidding on a synthetic data set. We will release the desensitized data set to help researchers better understand our method.
#### 5.1.3. Simulation system and metrics
In our offline simulated auction system, we traverse the traffic records block by block according to their timestamps. Each block contains all traffic records within fifteen minutes. For each record, we implement a strict generalised second-price auction (Bauer et al., 2017). Before executing the auction, we rigorously check whether each recalled ad campaign has run out of its budget. If a campaign runs out of its budget, it goes offline immediately and does not participate in subsequent auctions.
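A highly simplified sketch of one such auction step with the budget check is shown below; real uGSP ranking also uses predicted CTR and other quality signals, so this is only an illustration and all names are ours.

```python
# Minimal sketch of one simulated auction step: campaigns that exhausted their
# budgets are filtered out before a generalized second-price ranking, and each
# winner is charged (roughly) the next bid.  This is a simplification of the
# simulator described above; all names are illustrative.

def run_auction(candidates, remaining_budget, n_slots):
    """candidates: list of (campaign_id, bid); remaining_budget: dict by campaign_id."""
    live = [(cid, bid) for cid, bid in candidates if remaining_budget.get(cid, 0.0) > 0.0]
    live.sort(key=lambda x: x[1], reverse=True)          # rank by bid
    winners = live[:n_slots]
    for rank, (cid, bid) in enumerate(winners):
        # second-price style payment: the next bid, or the own bid for the last slot
        price = live[rank + 1][1] if rank + 1 < len(live) else bid
        remaining_budget[cid] -= price
    return winners
```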
The goal of global budget allocation is to maximize the overall conversions of winning impressions from all the Ad campaigns. Here we report the averaged Cost Per Conversion (CPC), total Conversions (Conv) and platform total Revenue (Rev) in a budget period4. Because we cannot know the real click and conversion behavior in the offline experiment, we use the estimated display revenue of all ad campaigns that have received ad impressions as Revenue (Rev), and the sum of the conversion rates as the number of conversions (Conv). In order to avoid the leakage of sensitive data, we normalize all the metrics, i.e., we set the base method to 1.00 and report the percentage change when different budget strategies are turned on.
Footnote 4: A budget period is generally 24 hours.
#### 5.1.4. Offline results
We run extensive experiments on the real dataset to validate the effectiveness of the proposed approach. Tab. 1 presents the offline gains in revenue, conversions, and CPC from using our method. Fig. 2 captures the impact of the coefficient of entropy regularization \(\epsilon\). As the results show:
**CPC Reduction**. The reason behind the CPC reduction is that we **coordinate** all advertisers to optimize their conversions, fully considering the interactions among different advertisers, so that excessive competition is effectively avoided. Almost all advertisers can achieve more conversions within a prefixed budget.
**Revenue and Conversion Increase**. In this paper, we focus on optimizing the conversions of all advertisers while maintaining the revenue of the platform. The overall revenue has also increased even though we did not optimize for revenue, as the budget utilization rate of some Ad campaigns has increased. There are two kinds of ad campaigns on the ad platform: the former always bid relatively high and can easily exhaust their budgets, while the latter bid relatively low, may sit in unpopular tracks, and can barely spend their budgets. In the past, such campaigns with relatively low bids could not spend their budgets because of unreasonable competition. With the help of global budget management, the head campaigns (those with relatively high bids) give priority to more suitable traffic channels, thus giving up some unsuitable channels. The display opportunities on such "unsuitable" channels are obtained by the Ad campaigns in the middle and tail, improving their budget utilization rate. The increase in conversions results from the decrease in overall CPC and the increase in budget utilization.
**Local Methods Cannot Work Well**. When the proportion of advertisers who use local allocation (Dwork et al., 2017; Kumar et al., 2018) in the entire Ad platform increases, the corresponding number of overall conversions and revenue will decrease. This is because there is excessive competition in some channels, as too many advertisers greedily distribute their budgets based on their local historical data preferences.
**The Impact of the Entropy Coefficient \(\epsilon\).** In Fig. 2, we use budget allocation matrices \(\mathbf{P}_{i,j}\) computed with different values of \(\epsilon\) to conduct auction simulations. As \(\epsilon\) gradually increases, the total number of conversions first increases and then decreases. We attribute this to the fact that as the entropy term grows, the budget allocation becomes less sparse, but the solution (allocation strategy) also deviates more from the optimal solution. Therefore, in our real online deployment, we select the most appropriate \(\epsilon\) based on the offline experiment results.
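For intuition, a budget allocation matrix of this kind can be computed with standard Sinkhorn iterations for entropy-regularized optimal transport; the sketch below is illustrative (the cost matrix, marginals, and convergence handling are simplified assumptions, not the production solver):

```python
import numpy as np

def sinkhorn_allocation(cost, budgets, channel_spend, eps, n_iter=500):
    """Entropy-regularized allocation P[i, j] of advertiser i's budget to channel j.

    cost: (n_advertisers, n_channels) cost matrix, e.g. negative expected conversion value.
    budgets: advertiser-level marginals (row sums of P); channel_spend: channel-level
    marginals (column sums of P); the two must sum to the same total.
    eps: entropy coefficient -- larger eps gives a denser, smoother allocation, while
    smaller eps gives a sparser plan closer to the unregularized optimum.
    """
    K = np.exp(-cost / eps)                 # Gibbs kernel
    u = np.ones(len(budgets))
    v = np.ones(len(channel_spend))
    for _ in range(n_iter):                 # alternating marginal projections
        u = budgets / (K @ v)
        v = channel_spend / (K.T @ u)
    return u[:, None] * K * v[None, :]      # transport plan P
```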
### Online Setting
#### 5.2.1. Online Budget Bucketing A/B Test
In a typical A/B test, in order to compare control A with a treatment B, members are randomly split into two groups with one group receiving the control and the other group receiving the treatment. This approach is not directly applicable to what we wanted to test.
As shown in Fig. 3(a), when we perform budget management in bucket B but impose no budget restrictions in bucket A, bucket A will consume more budget than bucket B, i.e., the comparison yields a spuriously negative result. Here we introduce a simple yet effective experiment paradigm for correctly showing the impact of budget management. Learning from the practice of other platforms5, we not only divide the whole traffic into two parts but also divide the total budget of a campaign into two parts. As shown in Fig. 3(b), if a traffic request hits bucket A, it only consumes the budget of bucket A without affecting that of bucket B, which ensures that the experimental and control buckets do not compete for the budget.
Footnote 5: [https://support.google.com/searchds/answer/7518994?hl=en](https://support.google.com/searchds/answer/7518994?hl=en)
#### 5.2.2. Online results
Here we report real results collected from our online advertising system over 30 days. With OT budget allocation, we help advertisers spend their budgets on high-quality conversion traffic while considering the interactions among different advertisers and avoiding excessive competition on certain channels. To the best of our knowledge, we are the first to provide an online production experiment to validate the effectiveness of the proposed algorithm.
## 6. Discussion
This paper presents a cross-channel budget management framework in which we coordinate all competing advertisers to allocate their limited budgets to different channels in order to maximize the overall conversions. In other words, we focus on market-making (which is very important for the ad platform) while maintaining or promoting platform revenue as much as possible. In the future, we plan to present a more comprehensive theoretical analysis of the Nash equilibrium efficiency with game theory. We are also interested in combining RL-based methods to enhance our framework by dynamically adjusting the cost matrix. As for limitations, the method we propose is more suitable for advertisers who use auto-bidding techniques like OCPX (Zhou et al., 2017). Advertisers who bid independently might adjust their behavior (i.e., lower or raise their bids) to maximize their own utility. We currently only apply our method to ad campaigns with automatic bidding, and we plan to extend it to advertisers who might adjust their behavior in the future.
|
2310.02853 | A Simulation of the Photoionization of H- Together with the Subsequent
Tracking of the Liberated Electrons | The Proton Improvement Plan - II (PIP-II) is a new linear accelerator (LINAC)
complex being built at Fermilab. It is based on superconducting radiofrequency
cavities and will accelerate H- ions to 800 MeV kinetic energy before injection
into the existing Booster ring. Measurements of the profile of the beam along
the LINAC must be done by non-intercepting methods due to the superconducting
cavities. The method chosen is photoionization of a small number of H- by a
focused infrared laser, aka laserwire. The number of ionized electrons is
measured as a function of laser position within the H- beam. To aid in the
design of the collection mechanism, a simulation was written in MATLAB with
input from the commercial electromagnetic simulation, CST. This simulation
calculates the number and positions of the liberated electrons and tracks them
through the magnetic collection and H- beam fields to the collection point.
Results from this simulation for various points along the LINAC will be shown. | R. Thurman-Keup, M. El Baz, V. Scarpine | 2023-10-04T14:40:40Z | http://arxiv.org/abs/2310.02853v1 | A Simulation of the Photoionization of H- Together with the Subsequent Tracking of the Liberated Electrons+
###### Abstract
The Proton Improvement Plan - II (PIP-II) is a new linear accelerator (LINAC) complex being built at Fermilab. It is based on superconducting radiofrequency cavities and will accelerate H- ions to 800 MeV kinetic energy before injection into the existing Booster ring. Measurements of the profile of the beam along the LINAC must be done by non-intercepting methods due to the superconducting cavities. The method chosen is photoionization of a small number of H- by a focused infrared laser, aka laserwire. The number of ionized electrons is measured as a function of laser position within the H- beam. To aid in the design of the collection mechanism, a simulation was written in MATLAB with input from the commercial electromagnetic simulation, CST. This simulation calculates the number and positions of the liberated electrons and tracks them through the magnetic collection and H- beam fields to the collection point. Results from this simulation for various points along the LINAC will be shown.
## 1 Introduction
Fermilab is in the process of constructing a new superconducting (SC) linear accelerator to replace the existing normal conducting LINAC. This project is called the Proton Improvement Plan - II (PIP-II) [1] and is being built to increase the deliverable beam intensity to the Deep Underground Neutrino Experiment being constructed in South Dakota. Since the LINAC is mostly superconducting, the use of physical wire scanners as beam profilers is not allowed in much of the LINAC due to the risk of a broken wire contaminating the SC cavities. As the beam is composed of H- ions, laserwires [2, 3] were chosen as the profiler. A laserwire functions by photoionization of the extra electron on the H- ion using a focused laser. The binding energy of the extra electron is only 0.7542 eV, so it can be ionized by a 1064 nm YAG laser. The ionized electrons, which are proportional in number to the local density of H- ions, are collected as the laser is moved through the H- beam, enabling reconstruction of the profile of the beam. Due to the complexity of the system, a simulation was developed to aid in the planning of the laser optics and the electron collection, and is presented in this paper.
## 2 Experimental Device
The laserwire system for PIP-II is comprised of a source laser that is transported through a pipe from the laser room to the H- beamline (Fig. 1). The pipe is under vacuum to both reduce distortions from air currents and to serve as a safety interlock system. At each measurement location (station), there is an insertable mirror to direct the laser down to the beamline.
The optical system at the beamline is comprised of movable stages to scan the laser across the H- beam and to select vertical or horizontal scan mode (Fig. 2).
There are 12 beamline stations with the locations and beam parameters summarized in Table 1 (plus an additional emittance measuring one after the LINAC). The laserwires are installed just downstream of the location column entry which specifies the cryomodule section and the cryomodule number within that section. The optics within the beamline stations focus the laser from a large incoming rms size of \(\sim\)5 mm to a focused rms size of \(<\)100 \(\upmu\)m. The focusing distance will be approximately 300 mm. Once the laser has ionized the H- beam, the electrons are bent vertically upward to a detector which will nominally be a faraday cup (Fig. 3).
## 3 Simulation
The simulation is comprised of three parts: the calculation of the number of photoionizations and generation of the electrons, the calculation of the electromagnetic fields of the beam and the magnet, and the tracking of the electrons to the detector.
Figure 1: Diagram of laser transport. The laser originates in the laser room and is transported through a vacuum line to the end of the LINAC. At each laserwire, a vertical pipe can direct the laser to the H- beamline.
Figure 2: Optics inside the laser scanning box. There are linear stages to pick horizontal or vertical scan, and to do the scans.
### Photoionization Simulation
Calculation of the number of photoionizations requires the particle densities of both the incoming laser and H- beams, and the cross section for photoionization
\[dn=c\sigma_{t}\sqrt{\left(\left|\hat{l}-\vec{\beta}\right|^{2}-\left|\hat{l} \times\vec{\beta}\right|^{2}\right)}\,l_{l}l_{b}\ dtdxdydz \tag{1}\]
Here \(c\) is the speed of light, \(\hat{l}\) is the laser beam direction, \(\vec{\beta}\) is the H- beam velocity vector divided by \(c\), the square root is a Lorentz-transformed relative velocity term [4], \(l_{l}\) and \(l_{b}\) are the laser and H- number densities respectively, and \(\sigma_{t}\) is the photoionization cross section in the frame of the H-, as a function of the laser wavelength Lorentz-transformed to the frame of the H-
\[\tilde{\lambda}_{l}=\ \lambda_{l}\frac{\sqrt{1-\beta^{2}}}{1-\hat{l}\cdot\vec{ \beta}} \tag{2}\]
where \(\lambda_{l}\) is the laser wavelength in the lab frame.
For the PIP-II laserwire design, the laser is transverse to the H- beam and as such, the square root term in Eq. 1 and the denominator in Eq. 2 are both equal to 1. The Lorentz-transformed laser wavelength shifts from 1064 nm in the lab frame to 1062 nm in the MEBT and to 883 nm at the end of the LINAC. The cross sections, \(\sigma_{t}\), are extracted at the desired wavelengths from a polynomial fit to Table 4 in reference [5].
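A small sketch of Eq. (2) for the transverse geometry is shown below; the H- rest energy used here is an approximate assumed value, and the cross-section lookup from Ref. [5] is omitted:

```python
import numpy as np

M_HMINUS_MEV = 939.3    # approximate H- rest energy in MeV (assumed value)
LAMBDA_LAB_NM = 1064.0  # YAG laser wavelength in the lab frame

def shifted_wavelength_nm(kinetic_energy_mev):
    """Laser wavelength in the H- rest frame for a transverse laser, i.e. Eq. (2)
    with the denominator equal to 1, so that lambda_tilde = lambda / gamma."""
    gamma = 1.0 + kinetic_energy_mev / M_HMINUS_MEV
    return LAMBDA_LAB_NM / gamma

# Example: in the MEBT (2.1 MeV) this gives roughly 1062 nm, consistent with the text.
print(f"{shifted_wavelength_nm(2.1):.0f} nm")
```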
The simulation code is written in MATLAB [6]. It steps through time and calculates the number of ionized electrons on a predefined set of physical mesh points. The simulation utilizes one of two possible numerical mesh arrangements, laser grid or beam grid, depending on the laser and H- beam parameters.
The laser grid approach creates a fixed cylindrical mesh axially aligned with the laser and covering the overlap region of the laser and the H- beam (Fig. 4). The mesh is an elliptic cylinder with radii that scale with the transverse laser size. This keeps the mesh size proportional to the laser width and avoids losing resolution near the laser waist. This works well when the laser intensity is low enough to avoid significant depletion of the H- beam which is not accounted for in this approach.
The beam grid version is a rectangular fixed mesh covering the physical overlap region of the laser and the H- beam. The simulated time steps have a spacing that is equal to the physical mesh spacing divided by the H- beam velocity. This relationship means that with every time step, the H- beam moves one spatial step, allowing for the fixed mesh to handle depletion of the H- (Fig. 5).
In both cases, the mesh extent is determined from the space-time interaction region of the laser and H- while the
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
\multirow{2}{*}{**Laserwire Location**} & \multirow{2}{*}{**Position [m]**} & \multicolumn{4}{c}{**Beam Parameters**} \\ \cline{3-6}
 & & **E\({}_{k}\)** & \(\sigma_{x}\) & \(\sigma_{y}\) & \(\sigma_{t}\) \\
 & & **[MeV]** & **[mm]** & **[mm]** & **[ps]** \\ \hline
MEBT & 18.7 & 2.1 & 2.3 & 2.3 & 208 \\
HWR & 25.4 & 10.0 & 1.3 & 1.4 & 33 \\
SSR1 CM \#1 & 31.6 & 18.6 & 1.2 & 1.2 & 20 \\
SSR2 CM \#2 & 51.5 & 61.2 & 1.4 & 1.5 & 13 \\
SSR2 CM \#4 & 65.2 & 106.0 & 1.3 & 1.3 & 13 \\
SSR2 CM \#6 & 78.9 & 153.9 & 1.4 & 1.4 & 11 \\
LB650 CM \#1 & 93.4 & 192.2 & 2.3 & 2.4 & 6.7 \\
LB650 CM \#3 & 107.1 & 267.4 & 1.8 & 1.9 & 5.7 \\
LB650 CM \#6 & 127.6 & 400.9 & 1.9 & 2.0 & 5.2 \\
LB650 CM \#9 & 148.0 & 516.5 & 2.1 & 1.8 & 5.0 \\
HB650 CM \#2 & 170.5 & 652.6 & 1.8 & 1.9 & 4.4 \\
HB650 CM \#4 & 192.9 & 833.3 & 1.5 & 1.9 & 3.7 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Laserwire Locations and Expected Beam Parameters
Figure 4: Laser grid version of the mesh points. The mesh spacing scales with the diameter of the laser as it propagates though the H- beam.
Figure 5: Beam grid version of the mesh points showing the H- beam intensity. Note the depletion of the H- beam as it passes through the laser.
Figure 3: The incoming H- beam is partially ionized by the laser and the electrons are bent vertically by a magnet to a faraday cup. The blue volume is the vacuum chamber.
desired number of mesh points is set by the user to avoid loss of details such as the laser waist.
Electrons for tracking are generated from the space-time mesh containing the calculated number of photoionizations. A Poisson distribution is utilized for up to 500 electrons, beyond which a Gaussian approximation is used. The electrons are given momenta based on the 6D phase space specified for the H- beam (Fig. 6).
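A simplified sketch of this sampling step is given below; the mesh layout and the momentum covariance are illustrative stand-ins for the full 6D phase-space description used in the actual simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_electrons(expected_counts, cell_centers, momentum_cov):
    """Draw electrons from the per-cell expected photoionization yield.

    expected_counts: mean ionization counts on the space-time mesh cells.
    cell_centers: (n_cells, 3) positions of the mesh cells.
    momentum_cov: 3x3 covariance standing in for the full 6D phase space of the beam.
    """
    positions, momenta = [], []
    for mean, center in zip(expected_counts, cell_centers):
        if mean < 500:
            n = rng.poisson(mean)
        else:  # Gaussian approximation for large expected counts
            n = max(int(round(rng.normal(mean, np.sqrt(mean)))), 0)
        if n == 0:
            continue
        positions.append(np.repeat(center[None, :], n, axis=0))
        momenta.append(rng.multivariate_normal(np.zeros(3), momentum_cov, size=n))
    if not positions:
        return np.empty((0, 3)), np.empty((0, 3))
    return np.concatenate(positions), np.concatenate(momenta)
```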
### Electromagnetic Field Calculations
The electromagnetic field calculations involve two separate parts: the fields of the H- beam, and the static fields of both the electron collection magnets and possibly secondary electron containment electrodes.
The beam field calculation is done in MATLAB and uses a single bunch which can have any arbitrary shape and size, but a single fixed velocity. For PIP-II, the H- beam is bunched at a frequency of 162.5 MHz and we use a gaussian shape in all three dimensions. The fields are evaluated on a rectangular mesh containing an inner section with generally closer spacing to resolve the fields within the bunch, and an outer section with larger spacing. The fields at each mesh point are a numerical integration over the bunch charge. This numerical integration requires a mesh of the bunch as well. To reduce numerical errors, the field evaluation mesh spacing is adjusted such that it is an integer multiple of the bunch mesh spacing. The results of the field evaluation are stored for later use by the tracking simulation.
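The following sketch illustrates the direct-summation idea for the bunch field on an evaluation mesh; it uses a naive electrostatic Coulomb sum and ignores the relativistic compression of the field of a moving bunch, so it is only a conceptual stand-in for the actual MATLAB calculation:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def bunch_efield(eval_points, bunch_cells, cell_charges):
    """Electric field of a discretized bunch by direct summation over its charge mesh.

    eval_points: (N, 3) field-evaluation mesh points [m].
    bunch_cells: (M, 3) centers of the bunch charge cells [m], e.g. sampling a 3D Gaussian.
    cell_charges: (M,) charge per cell [C].
    Naive O(N*M) quadrature, kept short for illustration.
    """
    E = np.zeros_like(eval_points, dtype=float)
    for q, r_src in zip(cell_charges, bunch_cells):
        d = eval_points - r_src
        r3 = np.linalg.norm(d, axis=1, keepdims=True) ** 3
        E += q * d / (4.0 * np.pi * EPS0 * r3)
    return E
```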
The static fields are calculated in CST [7] which is a 3D electromagnetic simulation. We designed a magnet with a return yoke to reduce the impact on the H- beam (Fig. 7). In addition, we will also add a small quadrupole magnet for the MEBT laserwire to compensate electron spreading from H- space charge forces. When the electrons strike the faraday cup, they generate secondary electrons that may escape the surface of the faraday cup, altering the collected charge. To avoid this, we may need to install a conductive ring to apply an electrostatic field to keep the secondary electrons from leaving. As with the bunch fields, these electrostatic results are also stored for later use.
### Tracking Simulation
The tracking simulation is written in MATLAB and was created originally to track electrons in the electron beam profiler [8] and ionization profile monitors [9]. It tracks the electrons through static fields and the fields of the H- bunches, but does not include interactions with the other electrons, since their effect is generally much smaller than that of the H- bunches. It uses an adaptive Runge-Kutta method [10] to solve the pseudo-relativistic second-order differential equation of motion
\[\begin{split}\vec{F}(\vec{r},t)=\frac{d\vec{p}}{dt}=m\frac{d( \gamma\vec{v})}{dt}\\ \vec{F}(\vec{r},t)=m\gamma\left(\vec{a}+\gamma^{2}\vec{\beta} \left(\vec{\beta}\cdot\vec{a}\right)\right)\end{split} \tag{3}\]
which, when inverted to find \(\vec{a}\), is
\[\vec{a}=\frac{d^{2}\vec{r}}{dt^{2}}=\frac{1}{\gamma m}\big{(}\mathbf{I}-\vec{\beta }\vec{\beta}^{T}\big{)}\vec{F}(\vec{r},t) \tag{4}\]
where \(\vec{F}(\vec{r},t)=q\left(\vec{E}(\vec{r},t)+\vec{v}\times\vec{B}(\vec{r},t)\right)\), \(m\) is the mass of the particle being tracked, \(\gamma=1/\sqrt{1-\beta^{2}}\) is the Lorentz factor, and \(\mathbf{I}\) is the identity matrix. We apply the Runge-Kutta method to the second-order differential equation rewritten as coupled first order differential equations
\[\begin{split}\frac{d\vec{v}}{dt}=\frac{1}{\gamma m}\big{(}\mathbf{I}- \vec{\beta}\vec{\beta}^{T}\big{)}\vec{F}(\vec{r},t)\\ \frac{d\vec{r}}{dt}=\vec{v}\end{split} \tag{5}\]
For each Runge-Kutta time step, the previously stored bunch and static fields are interpolated to find the value at the space-time location of the particle being tracked. For the bunch fields, the field is interpolated to the requested position after adjusting it for the requested time and the velocity of the bunch,
\[\vec{E}\big{|}\vec{B}(\vec{r},t)\rightarrow\vec{E}\big{|}\vec{B}(\vec{r}+ \vec{v}t_{m}) \tag{6}\]
where \(t_{m}=(t\bmod t_{b})\). The modulo function implements a repetitive bunch structure at the specified bunch spacing, \(t_{b}\).
The adaptive part of the algorithm adjusts the step size to keep changes in the momenta, either absolute value or
Figure 6: Generated electron distributions. The laser waist can be seen in the \(x\) position distribution. A 3D distribution of the electron positions is shown in the red box where the laser waist can be seen again.
Figure 7: CST calculation of magnetic field of magnets for electron collection. The plot shows transverse magnetic field integrated along H- beam direction for 5 different transverse locations.
direction, within a range specified by thresholds. If any of the thresholds are exceeded, the step size is adjusted to compensate.
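A compact sketch of one tracking step is shown below, combining the projected relativistic force of Eq. (4) with a classical RK4 update of Eq. (5); the adaptive step-size rule here is a crude placeholder for the threshold logic described above, and the field interpolation is abstracted into a user-supplied callable:

```python
import numpy as np

C = 299_792_458.0        # speed of light [m/s]
Q_E = -1.602176634e-19   # electron charge [C]
M_E = 9.1093837015e-31   # electron mass [kg]

def acceleration(r, v, t, field):
    """Right-hand side of Eq. (4): a = (I - beta beta^T) F / (gamma m)."""
    beta = v / C
    gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
    E, B = field(r, t)                      # interpolated static + bunch fields
    F = Q_E * (E + np.cross(v, B))          # Lorentz force
    return (np.eye(3) - np.outer(beta, beta)) @ F / (gamma * M_E)

def rk4_step(r, v, t, dt, field, dv_threshold=1e-2, dt_max=1e-10):
    """One RK4 step of the coupled first-order system in Eq. (5), with a crude
    step-size update standing in for the threshold logic described in the text."""
    k1v, k1r = acceleration(r, v, t, field), v
    k2v = acceleration(r + 0.5 * dt * k1r, v + 0.5 * dt * k1v, t + 0.5 * dt, field)
    k2r = v + 0.5 * dt * k1v
    k3v = acceleration(r + 0.5 * dt * k2r, v + 0.5 * dt * k2v, t + 0.5 * dt, field)
    k3r = v + 0.5 * dt * k2v
    k4v = acceleration(r + dt * k3r, v + dt * k3v, t + dt, field)
    k4r = v + dt * k3v
    v_new = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    r_new = r + dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
    # shrink the step if the relative velocity change exceeds the threshold
    dv = np.linalg.norm(v_new - v) / max(np.linalg.norm(v), 1e-30)
    dt_next = 0.5 * dt if dv > dv_threshold else min(1.2 * dt, dt_max)
    return r_new, v_new, dt_next
```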
## 4 Results
The simulation has been used to design the collection magnet and find optimal laser parameters to avoid wasted laser energy and resolution degradation. Figures 8 and 9 show electron trajectories at two locations: MEBT and HWR. The MEBT location (Fig. 8) shows tracking results with and without a quadrupole field. The extra quadrupole may be necessary in the MEBT to deal with spreading of the electrons due to the space charge of the H- beam.
Figure 9 shows trajectories in the laserwire after the HWR which is the first cryomodule.
The containment of the electrons is better at this location since the electrons have higher energy and are less susceptible to H- space charge forces. The initial deflection down in all of these is driven by the positioning of the two magnet yokes where the first corrector yoke is closer to the laser interaction region. This effect helps to increase the clearance between the electrons and the corner of the beampipes.
Studies were also done to evaluate the photoionization rates for different length laser pulses, laser arrival time jitter, and H- temporal bunch lengths (Fig. 10). These results will help to determine the optimal laser properties to maximize signal and resolution while minimizing unused laser energy which has detrimental effects on the vacuum windows. For instance, from the plots, we can see that laser pulses with a length of 20 or 30 ps have good photoionization but small variation with jitter.
This simulation was also recently used in an analysis of a novel bunch length measurement at the Spallation Neutron Source at Oak Ridge National Lab [11].
## 5 Conclusion
This laserwire simulation has proven to be useful for a number of analyses pertaining to laserwires, from magnet designs and laser parameter choices to measurement analyses. We foresee even more uses as we proceed to build and commission the PIP-II laserwire systems.
|
2308.09594 | Coupled cluster cavity Born-Oppenheimer approximation for electronic
strong coupling | Chemical and photochemical reactivity, as well as supramolecular organization
and several other molecular properties, can be modified by strong interactions
between light and matter. Theoretical studies of these phenomena require the
separation of the Schr\"odinger equation into different degrees of freedom as
in the Born-Oppenheimer approximation. In this paper, we analyze the
electron-photon Hamiltonian within the cavity Born-Oppenheimer approximation
(CBOA), where the electronic problem is solved for fixed nuclear positions and
photonic parameters. Specifically, we focus on intermolecular interactions in
representative dimer complexes. The CBOA potential energy surfaces are compared
with those obtained using a polaritonic approach, where the photonic and
electronic degrees of freedom are treated at the same level. This allows us to
assess the role of electron-photon correlation and the accuracy of CBOA. | Sara Angelico, Tor S. Haugland, Enrico Ronca, Henrik Koch | 2023-08-18T14:43:15Z | http://arxiv.org/abs/2308.09594v2 | # Coupled cluster cavity Born-Oppenheimer approximation for electronic strong coupling
###### Abstract
Chemical and photochemical reactivity, as well as supramolecular organization and several other molecular properties, can be modified by strong interactions between light and matter. Theoretical studies of these phenomena require the separation of the Schrodinger equation into different degrees of freedom as in the Born-Oppenheimer approximation. In this paper, we analyze the electron-photon Hamiltonian within the cavity Born-Oppenheimer approximation (CBOA), where the electronic problem is solved for fixed nuclear positions and photonic parameters. Specifically, we focus on intermolecular interactions in representative dimer complexes. The CBOA potential energy surfaces are compared with those obtained using a polaritonic approach, where the photonic and electronic degrees of freedom are treated at the same level. This allows us to assess the role of electron-photon correlation and the accuracy of CBOA.
## I Introduction
During the past few years, increasing attention has been devoted to the possibility of exploiting strong light-matter interactions to modify molecular properties. Several recent studies have shown that under proper conditions electromagnetic fields can affect supramolecular organization,[1; 2; 3; 4] optical properties[5; 6; 7] and photochemical processes.[8; 9; 10; 11; 12] Moreover, different experimental works demonstrated the possibility to modify ground state reactivity by means of strong interactions between vibrational states and a cavity field.[13; 14; 15; 16; 17; 18] The most common experimental setups to reach strong coupling conditions are Fabry-Perot cavities[2; 19]. These devices can be schematized as two parallel highly reflecting mirrors surrounding the molecular system and separated by a very short distance (See Fig. 1). In these conditions, the electromagnetic field interferes with itself generating standing waves whose frequencies are related to the distance between the cavity mirrors. When the exchange of energy between the field and the molecules is faster than any decay process, strong light-matter interactions occur. Depending on the frequency of the electromagnetic field, that can be in resonance with either electronic or vibrational excitations, it is possible to reach either the electronic (ESC) or vibrational (VSC) strong coupling regimes, respectively.
When strong light-matter interactions occur, the electromagnetic field effects cannot be included perturbatively, and they require an explicit quantum treatment. The introduction of the quantized electromagnetic field in the Hamiltonian, nevertheless, leads to an increased dimensionality of the problem, that now depends on electronic, nuclear and photonic degrees of freedom. As a consequence, the complete Schrodinger equation has to be simplified for these systems. This can be done generalizing the Born-Oppenheimer approximation, where the electronic and nuclear coordinates are separated to generate two problems with lower dimensionality. Nevertheless, for polaritonic systems, the division of the problem is not straightforward. Up to now, two main approaches to generalize the Born-Oppenheimer approximation to this framework have been proposed. In the first case, the so-called polaritonic approach treats the photonic degrees of freedom at the same level as the electrons, while considering fixed nuclear configurations.[20; 21] The second approach, developed by Flick et al.,[22; 23] is the cavity Born-Oppenheimer approximation (CBOA), which focuses on the formation of vibrational polaritonic states, thus separating the electronic coordinates from the photonic and nuclear ones.[22; 23] These two methods provide complementary points of view on coupled electron-nuclear-photon systems and have been applied to the study of different cases, ranging from molecular properties[24; 25; 26; 27; 28] to chemical and photochemical reactivity.[11; 29; 30; 31; 21] Both approaches, however, require the introduction of methods capa
ble of accurately describe strong electron-photon or nuclear-photon interactions.
As far as electron-photon correlation is concerned, during the past few years some electronic structure methods have been generalized to the polaritonic framework, including the quantized electromagnetic field. Some recent methods include quantum electrodynamics Hartree-Fock (QED-HF) [20], quantum electrodynamics coupled cluster (QED-CC) [20; 24], quantum electrodynamics full configuration interaction (QED-FCI) [20; 24] and quantum electrodynamics density functional theory (QEDFT) [32; 33; 34].
In this paper, we formulate electronic structure methods within the cavity Born-Oppenheimer approximation. We report benchmark results for Hartree-Fock (CBO-HF [35]), complete active space configuration interaction (CBO-CASCI), full configuration interaction (CBO-FCI), coupled cluster theory with perturbative double excitations (CBO-CC2), coupled cluster with single and double excitations (CBO-CCSD), coupled cluster with perturbative triple excitations (CBO-CC3) and coupled cluster with single, double and triple excitations (CBO-CCSDT). We then specifically focus on intermolecular interactions in three selected dimers that are representative examples of van der Waals forces, hydrogen bonding, and dipole-induced dipole interactions using CBO-CCSD. Comparing these results to the QED-CCSD approach, we analyze the flexibility of the cavity Born-Oppenheimer framework and the role of electron-photon correlation.
This paper is organized as follows. In Sec. II, we give a brief introduction to the cavity Born-Oppenheimer approximation, followed in Sec. III by the introduction of post Hartree-Fock methods within CBOA. In Sec. IV we discuss the benchmark results and in Sec. V we present the study of intermolecular interactions. Our final remarks are given in Sec. VI.
## II Cavity Born-Oppenheimer approximation
In the description of strongly coupled systems, we must include the quantized electromagnetic field and its interactions with matter. To this end, the system is usually described in Coulomb gauge within the dipole approximation and the Power-Zienau-Woolley (PZW) framework [36; 37; 23]
\[H=H_{e}+T_{N}+\sum_{\alpha}\bigg{(}\frac{1}{2}\big{(}\hat{p}_{\alpha}^{2}+\omega_{\alpha}^{2}\hat{q}_{\alpha}^{2}\big{)}+\omega_{\alpha}\hat{q}_{\alpha}(\mathbf{\lambda}_{\alpha}\cdot\mathbf{d})+\frac{1}{2}(\mathbf{\lambda}_{\alpha}\cdot\mathbf{d})^{2}\bigg{)}. \tag{1}\]
This Hamiltonian includes, besides the usual electronic Hamiltonian H\({}_{e}\) and the kinetic energy of the nuclei T\({}_{N}\), additional terms related to the quantized electromagnetic field. In particular, the electromagnetic field is expressed as a sum of harmonic oscillators with frequency \(\omega_{\alpha}\), described in terms of the displacement operator \(\hat{q}_{\alpha}\) and the momentum operator \(\hat{p}_{\alpha}\). Moreover, the fourth and the fifth terms in Eq. (1) describe the light-matter interactions. In the dipole approximation, these are mediated by the dipole moment operator of the molecule \(\mathbf{d}\) and the coupling strength \(\mathbf{\lambda}_{\alpha}=\sqrt{\frac{4\pi}{V_{\alpha}}}\mathbf{\epsilon}_{\alpha}\), which depends on the \(\alpha\)-mode quantization volume \(V_{\alpha}\) and on the polarization of the field \(\mathbf{\epsilon}_{\alpha}\). The last term is the dipole self-energy, which describes how the polarization of matter acts back on the photon field [22; 23]. In addition, this term ensures that the Hamiltonian is bounded from below, which guarantees a ground state for the coupled light-matter system [38; 39]. The displacement and momentum operators can be expressed in terms of the photonic creation and annihilation operators as
\[\hat{q}_{\alpha}=\frac{1}{\sqrt{2\omega_{\alpha}}}(b_{\alpha}^{\dagger}+b_{ \alpha}) \tag{2}\]
and
\[\hat{p}_{\alpha}=i\sqrt{\frac{\omega_{\alpha}}{2}}(b_{\alpha}^{\dagger}-b_{ \alpha}). \tag{3}\]
Note that the displacement operator \(\hat{q}_{\alpha}\) is proportional to the electric displacement field operator \(\hat{\mathbf{D}}_{\alpha}=\frac{1}{4\pi}\omega_{\alpha}\mathbf{\lambda}_{\alpha} \hat{q}_{\alpha}\), while \(\hat{p}_{\alpha}\) is proportional to the magnetic field [23; 35].
In the cavity Born-Oppenheimer approximation, the solution of the Schrodinger equation is divided in two parts. First, an electronic wave function \(\psi_{k}(\mathbf{r};\mathbf{R},\mathbf{q})\) is determined for a fixed nuclear and photonic configuration treating the corresponding displacement coordinates as parameters. Subsequently, a vibropolaritonic wave function \(\mathbf{\chi}_{kv}(\mathbf{R},\mathbf{q})\) that explicitly depends on both nuclear and photonic degrees of freedom is determined. Here, \(\nu\) labels the vibropolaritonic states. Finally, a stationary state is approximated as the product of these two terms
\[\Psi_{kv}(\mathbf{r},\mathbf{R},\mathbf{q})=\psi_{k}(\mathbf{r};\mathbf{R}, \mathbf{q})\mathbf{\chi}_{kv}(\mathbf{R},\mathbf{q}). \tag{4}\]
Using the terminology commonly used in the Born-Oppenheimer approximation, the photonic degrees of freedom are here considered "slow" and possibly resonant with the nuclear ones [21]. Analyzing this in more detail, since the photonic momentum operator \(\hat{p}_{\alpha}\) is proportional to the magnetic field, neglecting it corresponds to assuming that the magnetic field is small [22]. This is in line with the dipole approximation, in which the effects of the magnetic field are disregarded. Moreover, since \(p_{\alpha}=\frac{d}{dt}q_{\alpha}\), it follows that the displacement field changes slowly over time. Overall, the cavity Born-Oppenheimer approximation is expected to be valid when the electrons are able to readily adapt to the slow variations of the displacement field [24; 22]. A more rigorous and quantitative assessment of the validity of this approximation, nevertheless, would require the computation of non-adiabatic coupling elements. For this reason, as in standard Born-Oppenheimer approximation, this approach should be accurate when the potential energy surfaces are well separated from each other. Overall, we expect the cavity Born-Oppenheimer approximation to provide a useful framework for the approximate treatment of the effects of the field on the electronic subsystem, and on the electronic ground state in particular, in vibrational strong coupling conditions [31; 23; 31]. As for the ESC regimes, where electron-photon correlation is particularly relevant, we expect this approximation to only partially reproduce the effects of the electromagnetic field.
## III Electronic structure methods within CBOA
As already mentioned, in the cavity Born-Oppenheimer approach a stationary state is approximated as the product of an electronic and a vibropolaritonic wave function. In the following, we will focus on the electronic problem.
Here, the nuclear and photonic degrees of freedom are kept fixed, and the terms describing the corresponding kinetic energies are neglected. The electronic CBOA Hamiltonian, then, is the sum of the usual electronic Hamiltonian, the potential energy of the field, and the field interactions with matter
\[H_{\text{CBOA}}^{c}=H_{e}+\sum_{\mathbf{\alpha}}\left(\frac{1}{2}\omega_{\mathbf{ \alpha}}^{2}q_{\mathbf{\alpha}}^{2}+\omega_{\mathbf{\alpha}}q_{\mathbf{\alpha}}(\mathbf{\lambda }_{\mathbf{\alpha}}\cdot\mathbf{d})+\frac{1}{2}(\mathbf{\lambda}_{\mathbf{\alpha}}\cdot \mathbf{d})^{2}\right). \tag{5}\]
Potential energy surfaces (PESs) are defined by the eigenvalues \(\varepsilon_{\mathbf{k}}(\mathbf{R},\mathbf{q})\) of this operator
\[H_{\text{CBOA}}^{c}\psi_{\mathbf{k}}(\mathbf{r};\mathbf{R},\mathbf{q})=\varepsilon _{\mathbf{k}}(\mathbf{R},\mathbf{q})\psi_{\mathbf{k}}(\mathbf{r};\mathbf{R},\mathbf{q }). \tag{6}\]
Compared to the usual PESs, the modified potential energy surfaces depend on an additional set of coordinates, which are the photonic degrees of freedom. As a consequence, the interaction with the photons will lead to changes of the PESs. Note that in Eq. (5) the non-adiabatic coupling terms have been disregarded, opposed to other proposed approaches to CBOA.[22; 42]
In order to solve the electronic problem defined in Eq. (6), we need to reformulate the standard _ab initio_ electronic structure methods. The CBOA Hamiltonian can then be written in second quantization as
\[H_{\text{CBOA}}=\sum_{pq}\tilde{h}_{pq}E_{pq}+\frac{1}{2}\sum_{pqrs}\tilde{g}_{pqrs}e_{pqrs}+\tilde{c}, \tag{7}\]
where \(p,q,r,s\) denote molecular orbitals. Defining the standard one- and two- electron integrals \(h_{pq}\) and \(g_{pqrs}\),
\[\tilde{h}_{pq}=h_{pq}+\sum_{\mathbf{\alpha}}\left(\omega_{\mathbf{\alpha}}q_{\mathbf{\alpha}}\mathbf{\lambda}_{\mathbf{\alpha}}\cdot\mathbf{d}_{pq}+\frac{1}{2}\sum_{s}(\mathbf{\lambda}_{\mathbf{\alpha}}\cdot\mathbf{d}_{ps})(\mathbf{\lambda}_{\mathbf{\alpha}}\cdot\mathbf{d}_{sq})\right) \tag{8}\]
\[\tilde{g}_{pqrs}=g_{pqrs}+\sum_{\mathbf{\alpha}}(\mathbf{\lambda}_{\mathbf{\alpha}}\cdot \mathbf{d}_{pq})(\mathbf{\lambda}_{\mathbf{\alpha}}\cdot\mathbf{d}_{rs}) \tag{9}\]
\[\tilde{c}=V_{NN}+\frac{1}{2}\sum_{\mathbf{\alpha}}\omega_{\mathbf{\alpha}}^{2}q_{\bm {\alpha}}^{2}. \tag{10}\]
From Eqs. (7) - (10) it follows that the implementation of the CBOA Hamiltonian only requires trivial modifications of the existing codes by means of a redefinition of the integrals. In the following, we will use these modified integrals to obtain post Hartree-Fock methods, relying on the cavity Born-Oppenheimer Hartree-Fock ansatz, already proposed in Ref.[35]
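As an illustration of this point, the sketch below dresses bare one- and two-electron integrals with the single-mode cavity terms of Eqs. (8)-(10); the array layout and names are illustrative and do not reflect the \(e^{T}\) program's internal interfaces:

```python
import numpy as np

def dress_integrals(h, g, dip, lam, omega, q, v_nn):
    """Cavity-dressed integrals of Eqs. (8)-(10) for a single photon mode.

    h: (n, n) bare one-electron integrals; g: (n, n, n, n) bare two-electron integrals;
    dip: (3, n, n) dipole integrals d_pq; lam: 3-vector coupling lambda;
    omega, q: mode frequency and displacement coordinate; v_nn: nuclear repulsion.
    """
    ld = np.einsum("x,xpq->pq", lam, dip)                             # (lambda . d)_pq
    h_t = h + omega * q * ld + 0.5 * np.einsum("ps,sq->pq", ld, ld)   # Eq. (8)
    g_t = g + np.einsum("pq,rs->pqrs", ld, ld)                        # Eq. (9)
    c_t = v_nn + 0.5 * omega**2 * q**2                                # Eq. (10)
    return h_t, g_t, c_t

# For reference, the optimal displacement of Eq. (12) only requires the expectation
# value of the dipole: q_opt = -(lam @ d_expectation) / omega, which makes CBO-HF
# coincide with QED-HF.
```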
### Comparison between the CBOA and polaritonic approaches
While the cavity Born-Oppenheimer approximation provides a straightforward procedure to treat the photonic degrees of freedom, this approach is not the only possible way of describing strongly coupled systems. In the past few years methods like Hartree-Fock theory, coupled cluster theory and full configuration interaction have been developed for a QED framework in the polaritonic approach.[20; 24] As opposed to the CBOA, in the polaritonic framework the photonic degrees of freedom are explicitly considered in the electronic wave function, which is now more appropriately referred to as polaritonic. Here, the interactions between electrons and photons are explicitly treated and electron-photon correlation is included. On the other hand, the parametric dependence on the photonic degrees of freedom in CBOA intrinsically implies that the approach does not describe this correlation. Along these lines, we can define the electron-photon correlation energy as the difference between the QED and the CBOA energies, calculated at the optimal value of \(q\):
\[E_{ep,corr}=E_{\text{QED}}-E_{\text{CBOA},q_{opt}} \tag{11}\]
As pointed out in Ref.[43], the effects of electron-photon correlation on the ground state can be seen as a screening to the dipole self-energy term in the Hamiltonian. Note that in our definition of electron-photon correlation we focus on the cavity effects on the electronic wave function only. As a consequence, our definition differs from the one proposed in Ref.[42], where nuclei-mediated effects are considered as well.
In order to analyze the potential of CBO methods, we perform a comparison of these approaches to their analogous in the polaritonic framework. Even though we expect CBO methods to be more suitable for VSC and the polaritonic approach for ESC, the possibility to tune the value of the \(q\) parameter makes the cavity Born-Oppenheimer approach highly flexible. In fact, as we will show later, tuning the electromagnetic field parameter \(q\) can be exploited to reproduce for instance binding energies obtained using the polaritonic approaches. Moreover, the photonic coordinates can be thought of as a proper rescaling of the expectation value of the corresponding displacement operator with respect to a photonic coherent state. With this observation in mind, CBOA can be viewed as a polaritonic mean-field approach.
Some similarities between the two approaches can be found at the Hartree-Fock level. In QED-HF, the energy is minimized with respect to the photonic coherent state. For this reason, CBO-HF with the value of \(q\) that minimizes the energy of the system gives the same energy as QED-HF. The analytical expression for this value has been reported in Ref.[35]
\[q_{opt}=-\frac{\mathbf{\lambda}\cdot\langle\mathbf{d}\rangle}{\omega}. \tag{12}\]
With this choice for the displacement of the field, the Hamiltonians used in QED-HF and in CBO-HF are equivalent. As a consequence, calculations also inherit the origin-invariance already discussed for QED-HF,[20] which would not be otherwise obtained with the CBOA Hamiltonian. Moreover, the energies do not depend on the frequency of the electromagnetic field.[20]
As far as CBO-CC is concerned, the main difference with
QED-CC stands in the definition of the cluster operator. In QED-CC, the T operator also includes photonic and mixed photonic-electronic excitation operators. This method is then able to account for electron-photon correlation in the description of the polaritonic system. On the other hand, in CBO-CC only the purely electronic cluster operator is used. As a consequence, even though the effects of the electromagnetic field are parametrically taken into account, electron-photon correlation is not accounted for. For this reason, minimizing the energy for CBO-CC does not reproduce QED-CC energies when electron-photon correlation plays an important role, as in ESC regimes. In VSC regimes, on the other hand, the effect of electron-photon correlation on the ground state energy is small. As a consequence, CBO-CC for the optimal value of \(q\) provides a good description of QED-CC ground state energies in VSC regimes.[43] Finally, fixing \(q\) to minimize the electronic energy mimics what is commonly done by optimizing the nuclear geometry and thus represents a meaningful choice. As highlighted for CBO-HF, also CBO-CC results do not depend on the frequency of the field when the optimal value of \(q\) is used.[43]
## IV Benchmark of the methods
In order to assess the accuracy of the different methods, we consider several electronic structure methods using the Hamiltonian in Eq. (7). In particular, HF, CASCI and FCI, as well as several methods in the coupled cluster hierarchy (CC2, CCSD, CC3, CCSDT) have been implemented in the CBOA framework using the \(e^{T}\) program.[44] A detailed description of these methods can be found in the literature.[45, 46, 47, 48]
As a preliminary benchmark study, we have applied these methods to the H\({}_{2}\) dimer in parallel configuration at the optimal intermolecular distance calculated at the CCSD level. Calculations have been performed with the aug-cc-pVDZ basis set, with a fixed bond length of 0.74 Å for each hydrogen molecule. Only one photonic mode has been considered, with a coupling strength of \(\lambda=0.1\) a.u., polarization along the intermolecular axis (\(z\)), and frequency 12.7 eV. The ground state energies for this system, at the optimal value of \(q\), can be found in Table 1.
From this Table, we note that the introduction of electronic correlation in post CBO-HF methods significantly increases the accuracy of the results. Nevertheless, CBO-CC2 overestimates the effects of the electromagnetic field on this system. Overall, CBO-CCSD is able to provide an accurate description of electronic correlation. For this reason, only CBO-CCSD will be considered in the following discussions.
## V Intermolecular interactions in dimers
In order to assess the usefulness of CBO-CCSD, we have performed calculations of the intermolecular interactions in dimer complexes. Despite the usual weak character of intermolecular interactions, they play an essential role in chemistry, since they are critical to the supramolecular organization of every system. Moreover, experimental results have shown the possibility to exploit resonant cavities to obtain different supramolecular organizations,[1, 2, 3] thus highlighting how strong light-matter coupling can affect non-covalent interactions between molecules. This phenomenon is probably strongly related to the cooperative effects between molecules and is quite complicated to analyze theoretically. In fact, the description requires accurate _ab initio_ methods that are able to treat a large number of molecules with reasonable computational cost. Nevertheless, a benchmark study of intermolecular interactions in dimers has recently been presented using QED-CCSD,[24] where the cavity-induced effects are explicitly taken into account in the calculation of polaritonic potential energy surfaces.
In the following, a different point of view will be provided using the cavity Born-Oppenheimer approximation. Even though this method is not able to explicitly take into account the polaritonic character of potential energy surfaces, it provides a good representation of the ground state properties. In the CBO-CCSD calculations, only one photonic mode has been considered with coupling strength \(\lambda=0.1\) a.u. This relatively large coupling mimics the use of several modes. All the presented potential energy surfaces are relative to the monomers at a distance of 200 Å. Where possible, the results are compared to the polaritonic approach in Ref.[24]
### Hydrogen dimer
We consider the hydrogen dimer in parallel configuration (See Fig. 2). Outside the cavity, the interactions of the hydrogen molecules are dominated by weak dispersion interactions, closely related to electron correlation. The interaction energy thus scales as \(R^{-6}\) and can be expressed in terms of the polarizability of the monomers.[49]
The potential energy surfaces have been calculated using CBO-CCSD by varying both the intermolecular distance and the photonic coordinate using the aug-cc-pVDZ basis set. As in Sec. IV, the H-H distance of each monomer has been kept fixed at the equilibrium value of 0.74 Å and the electromagnetic field frequency was set to 12.7 eV, in resonance with the first electronic excitation of the system calculated at the CCSD level. In this case, two different field polarizations have been considered: the intermolecular axis (\(z\)) and the H\({}_{2}\) bond axis (\(x\)).
\begin{table}
\begin{tabular}{l c c} \hline \hline
Method & Energy (eV) & Cavity effects (eV) \\ \hline
CBO-HF & -61.01990 & 0.40901 \\
CBO-CASCI (2,6) & -61.03021 & 0.41213 \\
CBO-CC2 & -62.42728 & 0.49271 \\
CBO-CCSD & -63.00968 & 0.37156 \\
CBO-CC3 & -63.00984 & 0.37163 \\
CBO-CCSDT & -63.00991 & 0.37163 \\
CBO-FCI & -63.00991 & 0.37163 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: H\({}_{2}\) - H\({}_{2}\) ground state energy with different CBO methods. For each method, cavity effects are defined as \(\mathrm{E_{CBO}-E_{no\,cavity}}\).
In Fig. 2, we present CBO-CCSD potential energy surfaces for different values of \(q\), together with the QED-CCSD and the CCSD curves. We show curves for the optimal value of \(q\) (\(q_{opt}\)) and for the values of \(q\) that make CBO-CCSD as close as possible to QED-CCSD (\(q_{ep}\)). In Fig. 3, instead, the complete three-dimensional potential energy surfaces for both polarizations are reported.
We note that the field polarization along \(z\) causes a destabilization of the system and the binding energy increases when the polarization is along \(x\).
For CBO-CCSD, \(q_{opt}=0\) for both polarization directions. In particular, when the field polarization is along \(z\), the potential energy curve is repulsive at every intermolecular distance. However, when increasing the absolute value of \(q\) the system becomes more stabilized, as can also be seen from Fig. 3 (_right_). For the \(x\) polarization, CBO-CCSD shows a considerable stabilization of the system at \(q_{opt}=0\), while the increase of this parameter leads to weaker intermolecular interactions (See Fig. 3, _left_). We also notice from Fig. 3 that the binding energies are symmetric with respect to \(q\rightarrow-q\) for this geometry of the system.
Setting \(q=0\), besides minimizing the energy of the system, also leads to a Hamiltonian in which the only field-dependent term is the dipole self-energy. Increasing the value of \(q\) allows us to tune the relative importance of this term with respect to the bilinear one. We now focus on the values of \(q\) that closely mimic the QED-CCSD binding energies. These values are very similar for both polarization directions (\(q_{ep}^{x}=0.655\) and \(q_{ep}^{z}=0.666\)), and have been determined by a manual scan of the \(q\) parameter. Overall, we conclude that cavity Born-Oppenheimer methods are not able to describe electron-photon correlation, but we can mimic this correlation by varying the value of \(q\).
In order to rationalize the trends observed so far, it is interesting to consider cavity-induced effects on the binding energy. In Fig. 4 we analyze the \(x\) and \(z\) polarizations and compare CBO-CCSD for the optimal value of \(q\) with QED-CCSD. We note that the \(R^{-3}\) scaling of QED-CCSD is reproduced by CBO-CCSD. However, the coefficients found with the two approaches differ considerably. This further highlights the lack of electron-photon correlation in CBO-CCSD. These observations hold for both polarizations considered.
### Water dimer
The second system considered is the water dimer. In this case, intermolecular interactions are more pronounced and characterized by a strong hydrogen bond. The main contribution to this bond is provided by dipole-dipole interactions, even though a consistent component of charge transfer is also participating.[50; 51] The study of such a system is of great interest, since water plays an important role in several chemical phenomena. The possibility to modulate hydrogen bonding would be an important tool in several fields of application, ranging from catalysis to biological phenomena.
We report calculations at different values of \(q\) varying the distance between the two oxygen atoms. The orientations of the water molecules are kept fixed for every intermolecular distance, as well as the geometry of each monomer, which we obtained from Ref.[50] The frequency of the electromagnetic field is 7.86 eV, resonant with the first electronic excitation calculated at the CCSD level. The field polarization is parallel to the axis between the oxygen atoms (\(z\)).
We determined the optimal value of \(q\) by minimizing the total energy of the system at every intermolecular distance \(R\). The dependence of \(q_{opt}\) with respect to \(R\) is shown in Fig. 5 and displays an interesting behavior of the system. When increasing the distance between the molecules, the dimer requires a smaller absolute value of \(q\) to be stabilized. Both CBO-HF and CBO-CCSD show this trend, although for CBO-HF the absolute value of \(q\) is larger. This suggests that when explicitly including electron correlation the system requires smaller absolute values of the displacement of the field to be stabilized.
We now turn to the potential energy curves reported in Fig. 6 for CBO-CCSD at \(q_{opt}\) and \(q_{ep}\), together with QED-CCSD and CCSD. We also report results at the Hartree-Fock level, and we only show one curve for QED-HF and CBO-HF since the two methods are equivalent.
From Fig. 6, we observe that the system is destabilized by the electromagnetic field at the CCSD level. The CBOA approach with the optimal value of \(q\) clearly underestimates the binding energy inside the cavity. When increasing the displacement field the system is further destabilized, as can be observed from Fig. 7. The binding energies do not show any particular symmetry with respect to the variation of the \(q\) parameter (See Fig. 7). Even though a proper interpretation of this behavior would require further investigation, this observation seems to
Figure 2: Potential energy curves for the H\({}_{2}\) dimer in the parallel configuration. \(\lambda=0.1\) a.u., \(\omega=12.7\) eV. The field polarization is along the intermolecular axis (\(z\)) or the H\({}_{2}\) axis (\(x\)). The energy at 200 Å has been subtracted.
be connected to the low symmetry of the system, as the dipole moment is changed considerably by variations in the electromagnetic field. However, it is possible to choose \(q\) such that CBO-CCSD closely mimics QED-CCSD. Depending on the geometry of the system, the values of \(q_{ep}\) range from \(-0.47\) to \(-0.42\).
Finally, from Fig. 6 we can also analyze the effects of electron and electron-photon correlation. Outside the cavity, electron correlation leads to a considerably larger binding energy. Inside the cavity, the CBOA overestimates the effect of the
Figure 4: Cavity-induced effects on the CBO-CCSD potential energy curves (E\({}_{\text{CBO-CCSD}}\)-E\({}_{\text{CCSD}}\); E\({}_{\text{QED-CCSD}}\)-E\({}_{\text{CCSD}}\)) for the H\({}_{2}\) dimer in the parallel configuration. \(\lambda=0.1\) a.u., \(\omega=12.7\) eV. The field polarization is along the H\({}_{2}\) axis (\(x\), left) or the intermolecular axis (\(z\), right). The energy at 200 Å has been subtracted. All fitted curves have \(R^{2}>0.996\).
Figure 3: CBO-CCSD potential energy surfaces for the H\({}_{2}\) dimer in the parallel configuration. \(\lambda=0.1\) a.u., \(\omega=12.7\) eV. The field polarization is along the H\({}_{2}\) axis (\(x\), left) or the intermolecular axis (\(z\), right). At each \(q\), the energy at 200 Å has been subtracted.
field and leads to a smaller binding energy compared to QED-CCSD. The difference between CBO-CCSD and QED-CCSD is due to electron-photon correlation, which is not captured in the cavity Born-Oppenheimer approach.[43] Nevertheless, CBO-CCSD calculations with the optimal value of \(q\) are still able to partially capture the cavity-induced destabilization of the system.
### Benzene-water
We now consider the interaction between benzene and water. In this dimer, the polar character of water induces charge fluctuations in benzene, and intermolecular interactions are usually described in terms of dipole-induced dipole forces.[49]
We report binding energies for this complex, obtained by varying the distance between the oxygen atom of the water molecule and the benzene ring. The relative orientation of the monomers is kept fixed, with the oxygen pointing towards the ring (See Fig. 8). The field polarization is parallel to the dipole moment of the system and the frequency is 13.6 eV. We used the 6-31+G* basis set.
In Fig. 8 we show the potential energy curves outside and inside the cavity, with the polaritonic and the CBOA approaches.
We note that the electromagnetic field clearly destabilizes the system for all approaches. Furthermore, we see that electron correlation plays an important role in the stabilization of this system outside the cavity. Inside the cavity, CBOA reproduces this trend to a smaller extent. Finally, for QED-CCSD electron-photon correlation further stabilizes the system, as is the case for the water dimer system.
## VI Conclusions
In this work, we have presented an implementation of the cavity Born-Oppenheimer approximation for several electronic structure methods. In particular, this framework enables a way to parametrically introduce the effects of the electromagnetic field on the electronic wave functions and potential energy surfaces. We have benchmarked CBOA methods against
Figure 5: Trend of the optimal value of \(q\) with respect to the intermolecular distance.
Figure 6: Potential energy curves with different electronic structure methods for the water dimer, \(\lambda=0.1\) a.u., \(\omega=7.86\) eV. The field polarization is along the intermolecular axis (\(z\)). The energy at 200.0 Å has been subtracted.
Figure 7: Potential energy surface for two parallel H\({}_{2}\)O molecules, \(\lambda=0.1\), \(\omega=7.86\) eV. The field polarization is along the chain axis (\(z\)). For each \(q\), the energy at 200.0 Å has been subtracted. |
2310.13786 | Fundamental Limits of Membership Inference Attacks on Machine Learning
Models | Membership inference attacks (MIA) can reveal whether a particular data point
was part of the training dataset, potentially exposing sensitive information
about individuals. This article provides theoretical guarantees by exploring
the fundamental statistical limitations associated with MIAs on machine
learning models. More precisely, we first derive the statistical quantity that
governs the effectiveness and success of such attacks. We then theoretically
prove that in a non-linear regression setting with overfitting algorithms,
attacks may have a high probability of success. Finally, we investigate several
situations for which we provide bounds on this quantity of interest.
Interestingly, our findings indicate that discretizing the data might enhance
the algorithm's security. Specifically, it is demonstrated to be limited by a
constant, which quantifies the diversity of the underlying data distribution.
We illustrate those results through two simple simulations. | Eric Aubinais, Elisabeth Gassiat, Pablo Piantanida | 2023-10-20T19:32:54Z | http://arxiv.org/abs/2310.13786v4 | # Fundamental Limits of Membership Inference Attacks on Machine Learning Models
###### Abstract
Membership inference attacks (MIA) can reveal whether a particular data point was part of the training dataset, potentially exposing sensitive information about individuals. This article explores the fundamental statistical limitations associated with MIAs on machine learning models. More precisely, we first derive the statistical quantity that governs the effectiveness and success of such attacks. Then, we investigate several situations for which we provide bounds on this quantity of interest. This allows us to infer the accuracy of potential attacks as a function of the number of samples and other structural parameters of learning models, which in some cases can be directly estimated from the dataset.
## 1 Introduction
In today's data-driven era, machine learning models are designed to reach ever higher performance, so the size of new models inherently increases, and with it the amount of information stored (or memorized) in their parameters (Hartley and Tsaftaris, 2022; Del Grosso et al., 2023). The protection of sensitive information is of paramount importance. Membership Inference Attacks (MIAs) have emerged as a concerning threat, capable of unveiling whether a specific data point was part of the training dataset of a machine learning model (Shokri et al., 2017; Nasr et al., 2019; Song et al., 2017; Zhu et al., 2019). Such attacks can potentially compromise individual privacy and security by exposing sensitive information (Carlini et al., 2023a). Furthermore, a recent publication (Tabassi et al., 2019) from the National Institute of Standards and Technology (NIST) explicitly notes that an MIA that successfully identifies an individual as part of the dataset used for training the target model constitutes a breach of confidentiality.
To date, the most comprehensive defense mechanism against privacy attacks is differential privacy (DP), a framework initially introduced by Dwork et al. (2006). DP has shown remarkable adaptability in safeguarding the privacy of machine learning models during training, as demonstrated by the works of Hannun et al. (2021); Jayaraman and Evans (2019). However, it is worth noting that achieving a high level of privacy through differentially private training often comes at a significant cost to the accuracy of the model, especially when aiming for a low privacy parameter (Sablayrolles et al., 2019). Conversely, when evaluating the practical effectiveness of DP in terms of its ability to protect against privacy attacks empirically, the outlook is considerably more positive. DP has demonstrated its efficacy across
a diverse spectrum of attacks, encompassing MIAs, attribute inference, and data reconstruction (see Guo et al. (2023) and references therein).
Empirical evidence suggests that models that are small compared to the size of the training set are often sufficient to thwart the majority of existing MIAs, as observed in the seminal studies by Shokri et al. (2017). Conversely, when the architecture of a machine learning model is overly complex with respect to the size of the training set, the resulting overfitting increases the effectiveness of MIAs, as identified by Yeom et al. (2018). However, despite these empirical findings, there remains a significant gap in our theoretical understanding of this phenomenon. This article delves into the core statistical limitations surrounding MIAs on machine learning models at large.
Our investigation commences by establishing the fundamental statistical quantity that governs the effectiveness and success of these attacks. In the learning model we consider, our attention is directed towards **symmetric** algorithms that adhere to the **redundancy invariance property**, meaning that training on a dataset consisting of multiple repetitions of a dataset is equivalent to training on the same smaller dataset once. Specifically, we concentrate on datasets of independent and identically distributed (_i.i.d._) samples. To assess the effectiveness of MIAs, we will gauge their **accuracy** by examining their success probability in determining membership. Notably, we assess the security of a model based on the highest level of accuracy achieved among MIAs. An MIA that attains the maximum accuracy will be referred to as an _oracle_.
We delve into the intricacies of MIA and derive insights into the key factors that influence its outcomes. Subsequently, we explore various scenarios: empirical mean-based algorithms, discrete data and when the parameter space is quantized, among others, presenting bounds on this pivotal statistical quantity. These bounds provide crucial insights into the accuracy of potential attacks, offering a comprehensive understanding of how attack success relates to the number of samples and other structural parameters, some of which can be estimated from the dataset itself. While our primary focus in this work does not revolve specifically around mechanisms based on differential privacy (DP), it is worth noting that our findings are also applicable within the DP framework.
By shedding light on the statistical limitations of MIAs, this research contributes to enhancing the security and privacy of machine learning models in an era where data protection is of utmost importance.
### Contributions
In our research, we make significant contributions to the understanding of MIAs on machine learning models. Our key contributions can be summarized as follows:
* **Identification of Crucial Statistical Quantity:** We introduce the critical statistical quantity denoted as \(\Delta_{n}(P,\mathcal{A})\), where \(n\) represents the size of the training dataset, \(P\) is the data distribution, and \(\mathcal{A}\) is the underlying algorithm. This quantity plays a pivotal role in assessing the accuracy of effective MIAs. The quantity \(\Delta_{n}(P,\mathcal{A})\) provides an intuitive measure of how distinct parameters of a model can be with respect to a sample, and as a result, it indicates the extent to which we can potentially recover sample membership through MIAs. Consequently, we demonstrate that when \(\Delta_{n}(P,\mathcal{A})\) is small, the accuracy of the best MIA is notably constrained. This highlights the importance of \(\Delta_{n}(P,\mathcal{A})\) in characterizing information disclosure in relation to the training set.
* **Precise Upper Bounds for Empirical Mean-Based Algorithms:** For algorithms that compute functions of empirical means, we establish precise upper bounds on \(\Delta_{n}(P,\mathcal{A})\). We prove that \(\Delta_{n}(P,\mathcal{A})\) is bounded from above by a constant, determined by \((P,\mathcal{A})\), multiplied by \(n^{-1/2}\). In practical terms, this means that having approximately \(\sqrt{d}/\varepsilon\) samples in the dataset, with \(d\) the dimension of the involved empirical means, is sufficient to ensure that \(\Delta_{n}(P,\mathcal{A})\) remains below \(\varepsilon\) for any \(\varepsilon\in(0,1)\). This result significantly contributes to our understanding of the accuracy of MIAs under empirical mean-based algorithms.
* **Maximization of \(\Delta_{n}(P,\mathcal{A})\):** In scenarios involving discrete data with an infinite parameter space, we provide a precise formula for maximizing \(\Delta_{n}(P,\mathcal{A})\) across all algorithms \(\mathcal{A}\). Additionally, when dealing with data that has a finite set of possible values, we determine that this maximization is proportional to a constant times \(n^{-1/2}\). Furthermore, we reveal that there are distinct behaviors concerning the dependence on \(n\) when dealing with discrete data, which can include infinitely many values or when the parameter space is finite (e.g., machine learning models with quantized weights).
These contributions significantly advance the theoretical understanding of MIAs on machine learning models, shedding light on the crucial role played by statistical quantities and their bounds in assessing the security and privacy of these models.
### Related Works
**Privacy Attacks.** The majority of cutting-edge attacks follow a consistent approach within a framework known as Black-Box. In this framework, where access to the data distribution is available, attacks assess the performance of a model by comparing it to a group of "shadow models". These shadow models are trained with the same architecture but on an artificially and independently generated dataset from the same data distribution. Notably, loss evaluated on training samples are expected to be much lower than when evaluated on "test points". Therefore, a significant disparity between these losses indicates that the sample in question was encountered during the training, effectively identifying it as a member. This is intuitively related to some sort of "stability" of the algorithm on training samples (Bousquet and Elisseeff, 2002). Interestingly, we explicitly identify the exact quantity controlling the accuracy of effective MIAs which may be interpreted as a measure of stability of the underlying algorithm.
In fact, as highlighted by Rezaei and Liu (2021), it is important to note that MIAs are not universally effective and their success depends on various factors. These factors include the characteristics of the data distribution, the architecture of the model, particularly its size, the size of the training dataset, and others, as discussed recently by Carlini et al. (2022). Subsequently, there has been a growing body of research delving into Membership Inference Attacks (MIAs) on a wide array of machine learning models, encompassing regression models (Gupta et al., 2021), generation models (Hayes et al., 2018), and embedding models (Song and Raghunathan, 2020). A comprehensive overview of the existing body of work on various MIAs has been systematically compiled in a thorough survey conducted by Hu et al. (2022).
**Overfitting Effects.** The pioneering work by Shokri et al. (2017) has effectively elucidated the relationship between overfitting and the privacy risks inherent in many widely-used machine learning algorithms. These studies clearly point out that overfitting can often provide attackers with the means to carry out membership inference attacks. This connection is extensively elaborated upon by Salem et al. (2018); Yeom et al. (2018), among other researchers. Overfitting tends to occur when the underlying model has a complex architecture or when there is limited training data available, as explained in Yeom et al. (2018); Del Grosso et al. (2023) works. In our paper, we explicitly emphasize this insight by quantifying the dependence of \(\Delta_{n}(P,\mathcal{A})\) on the dataset size and underlying structural parameters.
**Memorization Effects.** Machine learning models trained on private datasets may inadvertently reveal sensitive data due to the nature of the training process. This potential disclosure of sensitive information occurs as a result of various factors inherent to the training procedure, which include the extraction of patterns, associations, and subtle correlations from the data (Song et al., 2017; Zhang et al., 2021). While the primary objective is to generalize from data and make predictions, there is a risk that these models may also pick up on, and inadvertently expose, confidential or private information contained within the training data. This phenomenon is particularly concerning as it can lead to privacy breaches, compromising the confidentiality and security of personal or sensitive data (Hartley and Tsaftaris, 2022; Carlini et al., 2022, 2019; Leino and Fredrikson, 2020; Thomas et al., 2020).
Recent empirical studies have shed light on the fact that, in these scenarios, it is relatively rare for the average data point to be revealed by learning models (Triumala et al., 2022; Murakonda and Shokri, 2007; Song et al., 2017). What these studies have consistently shown is that it is the outlier samples that are more likely to undergo memorization by the model (Feldman, 2020), leading to potential data leakage. This pattern can be attributed to the nature of learning algorithms, which strive to generalize from the data and make predictions based on common patterns and trends. Average or typical data points tend to conform to these patterns and are thus less likely to stand out. On the other hand, outlier samples, by their very definition, deviate significantly from the norm and may capture the attention of the model. So when an outlier sample is memorized, it means the model has learned it exceptionally well, potentially retaining the unique characteristics of that data point. As a consequence, when exposed to similar data points during inference, the model may inadvertently leak information it learned from the outliers, compromising the privacy and security of the underlying data. An increasing body of research is dedicated to the understanding of memorization effects in language models (Carlini et al., 2023).
In the context of our research, it is important to highlight that our primary focus is on understanding the accuracy of MIAs, not their relationship with memorization. Indeed, this connection remains an area of ongoing exploration and inquiry in our work.
## 2 Background and Problem Setup
In this paper, we focus on MIAs, that is, the ability to recover the membership of a test point \(\tilde{\mathbf{z}}\in\mathcal{Z}\) in a training dataset \(\mathbf{z}\coloneqq(\mathbf{z}_{1},\cdots,\mathbf{z}_{n})\in\mathcal{Z}^{n}\) from a predictor \(\hat{\mu}=\mu_{\hat{\theta}_{n}}\) in a model \(\mathcal{F}\coloneqq\{\mu_{\theta}:\theta\in\Theta\}\), where \(\Theta\) is the space of parameters. The predictor is identified with its parameters \(\hat{\theta}_{n}\in\Theta\) learned from \(\mathbf{z}\) through an **algorithm**\(\mathcal{A}:\bigcup_{k>0}\mathcal{Z}^{k}\rightarrow\mathcal{P}^{\prime}\subseteq \mathcal{P}(\Theta)\); that is, \(\hat{\theta}_{n}\) follows the distribution \(\mathcal{A}(\mathbf{z})\) conditionally on \(\mathbf{z}\), which we assume we have access to. Here, \(\mathcal{P}(\Theta)\) is the
set of all distributions on \(\Theta\), and \(\mathcal{P}^{\prime}\) is the range of \(\mathcal{A}\). When \(\mathcal{A}\) takes values in the set of Dirac distributions, that is \(\hat{\theta}_{n}\) is a deterministic function of the data, we shall identify the parameters directly to the output of the algorithm \(\hat{\theta}_{n}\coloneqq\mathcal{A}(\mathrm{z}_{1},\cdots,\mathrm{z}_{n})\). We therefore consider MIAs as functions of the parameters and the test point whose outputs are \(0\) or \(1\).
**Definition 2.1** (Membership Inference Attack - MIA).: _Any measurable map \(\phi:\Theta\times\mathcal{Z}\to\{0,1\}\) is called a **Membership Inference Attack**._
We measure the accuracy of an MIA \(\phi\) by its probability of successfully guessing the membership of the test point. For that purpose, we encode membership to the training data set as \(1\). We assume that \(\mathrm{z}_{1},\ldots,\mathrm{z}_{n}\) are independent and identically distributed (_i.i.d._) random variables with distribution \(P\). Following Del Grosso et al. (2023) or Sablayrolles et al. (2019), we suppose that the test point \(\tilde{\mathrm{z}}\) is set to an independent copy \(\bar{\mathrm{z}}\) of \(\mathrm{z}_{1}\) with probability \(\nu\in(0,1)\). Otherwise, conditionally on \(\mathbf{z}\), we set \(\tilde{\mathrm{z}}\) to any \(\mathrm{z}_{j}\), each with uniform probability \(1/n\).
Letting \(U\) be a random variable with distribution \(\hat{P}_{n}\coloneqq\frac{1}{n}\sum_{j=1}^{n}\delta_{\mathrm{z}_{j}}\) conditionally to \(\mathbf{z}\) and \(T\) be a random variable having Bernoulli distribution with parameter \(\nu\) and independent of any other random variables, we can state
\[\tilde{\mathrm{z}}\coloneqq T\bar{\mathrm{z}}+(1-T)U.\]
**Definition 2.2** (Accuracy of an MIA).: _The **accuracy of an MIA**\(\phi\) is defined as_
\[\text{Acc}_{n}(\phi;P,\mathcal{A})\coloneqq P\left(\phi(\hat{\theta}_{n},\tilde{\mathrm{z}})=1-T\right), \tag{1}\]
_where the probability is taken over all randomness._
The accuracy of an MIA scales from \(0\) to \(1\). Constant MIAs \(\phi_{0}\equiv 0\) and \(\phi_{1}\equiv 1\) have respectively an accuracy equal to \(\nu\) and \(1-\nu\), which means that we can always build an MIA with accuracy of at least \(\max(\nu,1-\nu)\), and any MIA performing worse than this quantity is of no practical use. We now define the **Membership Inference Security** of an algorithm as a quantity summarizing the amount of security of the system against MIAs.
**Definition 2.3** (Membership Inference Security - MIS).: _Let \(\nu_{*}\coloneqq\min(\nu,1-\nu)\). The **Membership Inference Security** of an algorithm \(\mathcal{A}\) is_
\[\text{Sec}_{n}(P,\mathcal{A})\coloneqq\nu_{*}^{-1}\left(1-\sup_{\phi}\text{ Acc}_{n}(\phi;P,\mathcal{A})\right), \tag{2}\]
_where the sup is taken over all MIAs._
The Membership Inference Security scales from \(0\) (the best MIA approaches perfect guess of membership) to \(1\) (MIAs can not do better than \(\phi_{0}\) and \(\phi_{1}\)).
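To make these quantities concrete, the following minimal Monte Carlo sketch simulates the membership game for an illustrative (hypothetical) choice of ingredients: standard Gaussian data, the empirical mean as the algorithm, and a simple distance-threshold attack. None of these choices is prescribed by our framework; the sketch only illustrates how the accuracy of Definition 2.2 can be estimated in practice.

```python
# Illustrative Monte Carlo simulation of the membership game (Definitions 2.2-2.3).
# Data distribution, algorithm and attack are arbitrary choices made for this sketch.
import numpy as np

rng = np.random.default_rng(0)

def accuracy(n=50, nu=0.5, tau=1.0, trials=20_000):
    successes = 0
    for _ in range(trials):
        z = rng.normal(size=n)                 # training set z_1, ..., z_n ~ P
        theta = z.mean()                       # deterministic algorithm A(z)
        T = rng.random() < nu                  # T = 1: test point is a fresh sample
        z_test = rng.normal() if T else rng.choice(z)
        phi = int(abs(z_test - theta) < tau)   # attack: guess "member" if close to theta
        successes += int(phi == 1 - int(T))    # success means phi = 1 - T
    return successes / trials

print(accuracy())   # for this stable algorithm, close to the trivial max(nu, 1 - nu)
```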
Throughout this paper, we focus on algorithms that are **symmetric** and **redundancy invariant**. An algorithm is symmetric if it is invariant under permutation of its inputs. An algorithm is redundancy invariant if, for any input dataset, its output is the same as if the dataset were repeated any number of times.
**Definition 2.4** (Symmetric Map).: _Given two sets \(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\) and an integer \(k\), a map \(f:\mathcal{Z}_{1}^{k}\to\mathcal{Z}_{2}\) is said to be **symmetric** if for any \((a_{1},\cdots,a_{k})\in\mathcal{Z}_{1}^{k}\) and any permutation \(\sigma\) on \(\{1,\cdots,k\}\), we have_
\[f(a_{1},\cdots,a_{k})=f\left(a_{\sigma(1)},\cdots,a_{\sigma(k)}\right).\]
**Definition 2.5** (Redundancy Invariant Map).: _Given two sets \(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\), a map \(f:\bigcup_{k>0}\mathcal{Z}_{1}^{k}\to\mathcal{Z}_{2}\) is said to be **redundancy invariant** if for any integer \(m\) and any \(\mathbf{a}=(a_{1},\cdots,a_{m})\in\mathcal{Z}_{1}^{m}\), we have_
\[f(\mathbf{a})=f(\mathbf{a},\cdots,\mathbf{a}).\]
The redundancy invariance property states that no information can be gathered from giving the same dataset multiple times.
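As a simple illustration that is not part of the formal development, the empirical mean, viewed as an algorithm, satisfies both properties; the following sketch checks them numerically.

```python
# Sanity check of Definitions 2.4 and 2.5 for the empirical mean (illustrative).
import numpy as np

def A(*samples):                       # an algorithm mapping datasets of any size to a parameter
    return np.mean(samples)

data = (1.0, 4.0, 2.5)
print(A(*data) == A(*reversed(data)))  # symmetry: the order of the samples does not matter
print(A(*data) == A(*(data * 3)))      # redundancy invariance: A(a) = A(a, a, a)
```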
## 3 Main Results - Performance Assessment of Membership Inference Attacks
In this section, we prove that the **Crucial Statistical Quantity** for the assessment of the accuracy of membership inference attacks is \(\Delta_{n}(P,\mathcal{A})\), defined as
\[\Delta_{n}(P,\mathcal{A})\coloneqq\left\lVert\mathcal{L}\big{(}(\hat{\theta} _{n},\mathrm{z}_{1})\big{)}-\mathcal{L}\big{(}(\hat{\theta}_{n},\bar{\mathrm{ z}})\big{)}\right\rVert_{\mathrm{TV}}, \tag{3}\]
which depends on \(P\), \(n\) and \(\mathcal{A}\). Here, for any random variable \(\mathtt{x}\), \(\mathcal{L}(\mathtt{x})\) denotes its probability distribution, and for any distributions \(Q_{1}\) and \(Q_{2}\), \(\|Q_{1}-Q_{2}\|_{\mathrm{TV}}\) denotes the total variation distance between \(Q_{1}\) and \(Q_{2}\). One can interpret \(\Delta_{n}(P,\mathcal{A})\) as quantifying some stability of the algorithm. We first prove that symmetric and redundancy invariant algorithms can be characterized as functions of the empirical distribution \(\hat{P}_{n}\) of the training dataset. Let \(\mathcal{M}\) be the set of all discrete distributions on \(\mathcal{Z}\).
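For intuition, \(\Delta_{n}(P,\mathcal{A})\) can be computed exactly in small discrete examples. The sketch below does so for an illustrative case (Bernoulli data and the empirical mean as the algorithm), where both joint laws in (3) are supported on finitely many points.

```python
# Exact computation of Delta_n(P, A) from Eq. (3) for Bernoulli(p) data and the
# empirical mean as the algorithm (an illustrative choice).
from math import comb

def delta_n(p, n):
    tv = 0.0
    for k in range(n + 1):
        p_theta = comb(n, k) * p**k * (1 - p)**(n - k)   # P(theta_hat = k/n)
        # joint law of (theta_hat, z_1), obtained by conditioning on z_1
        p11 = p * (comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k) if k >= 1 else 0.0)
        p10 = (1 - p) * (comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k) if k <= n - 1 else 0.0)
        # joint law of (theta_hat, z_bar) is a product law by independence
        tv += abs(p11 - p_theta * p) + abs(p10 - p_theta * (1 - p))
    return tv / 2

for n in (10, 100, 1000):
    print(n, delta_n(0.3, n))   # decays roughly like n**(-1/2), cf. Sections 4 and 5
```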
**Proposition 3.1**.: _Let \(f:\bigcup_{k>0}\mathcal{Z}^{k}\rightarrow\mathcal{Z}^{\prime}\) be a measurable map onto any space \(\mathcal{Z}^{\prime}\). Then the followings are equivalent_
1. \(f\) _is redundancy invariant and for any_ \(k\in\mathbb{N}\)_, the restriction of_ \(f\) _to_ \(\mathcal{Z}^{k}\) _is symmetric._
2. _There exists a function_ \(G:\mathcal{M}\rightarrow\mathcal{Z}^{\prime}\) _such that for any_ \(k\in\mathbb{N}\)_, for any_ \((z_{1},\cdots,z_{k})\in\mathcal{Z}^{k}\) _we have_ \(f(z_{1},\cdots,z_{k})=G\left(\frac{1}{k}\sum_{j=1}^{k}\delta_{z_{j}}\right)\)_._
In particular, we may apply Proposition 3.1 to any algorithms \(\mathcal{A}\) with \(\mathcal{Z}^{\prime}=\mathcal{P}^{\prime}\).
Interestingly, if an algorithm minimizes an empirical cost, then it is of the latter kind. In particular, maximum likelihood based algorithms or Bayesian methods from Sablayrolles et al. (2019) are special cases. We now give the proof of Proposition 3.1.
Proof of Proposition 3.1.: We only prove that \((i)\) implies \((ii)\). The fact that \((ii)\) implies \((i)\) is straightforward.
Let \(f:\bigcup_{k>0}\mathcal{Z}^{k}\rightarrow\mathcal{Z}^{\prime}\) be a measurable map satisfying condition \((i)\). Let \(\mathcal{M}^{\mathsf{emp}}\) be the set of all possible empirical distributions, that is the subset of \(\mathcal{M}\) containing all \(\frac{1}{k}\sum_{j=1}^{k}\delta_{z_{j}}\) for all integer \(k\) and all \((z_{1},\cdots,z_{k})\in\mathcal{Z}^{k}\). We shall define \(G\) on \(\mathcal{M}^{\mathsf{emp}}\) such that \((ii)\) holds true.
For any \(Q\in\mathcal{M}^{\mathsf{emp}}\), let \(\{z_{1},\cdots,z_{m}\}\) be its support and \(q_{1},\ldots,q_{m}\in(0,1)\) be such that \(Q=\sum_{j=1}^{m}q_{j}\delta_{z_{j}}\). Since \(Q\) is an empirical distribution, there exist positive integers \(k_{1},\ldots,k_{m}\) (for each \(j\), \(k_{j}\) is the number of occurrences of \(z_{j}\) in the sample from which \(Q\) is the empirical distribution) such that \(q_{j}=\frac{k_{j}}{K}\), with \(K=\sum_{j=1}^{m}k_{j}\).
Let \(r=gcd(k_{1},\ldots,k_{m})\) be the greatest common divisor of the \(k_{j}\)'s and define \(k_{j}^{\prime}=k_{j}/r\) for \(j=1,\ldots,m\). Then with \(K^{\prime}\coloneqq\sum_{j=1}^{m}k_{j}^{\prime}\), we have \(q_{j}=\frac{k_{j}^{\prime}}{K^{\prime}}\).
Now, for any other sequence of positive integers \(\ell_{1},\ldots,\ell_{m}\) such that \(q_{j}=\frac{\ell_{j}}{L}\), with \(L=\sum_{j=1}^{m}\ell_{j}\), we get for all \(j\), \(\ell_{j}=sk_{j}^{\prime}\) with \(s=gcd(\ell_{1},\ldots,\ell_{m})\). Thus we may define \(G(Q)=f(\boldsymbol{z})\) where \(\boldsymbol{z}\) is the dataset consisting of all \(z_{j}\)'s with \(k_{j}^{\prime}\) repetitions.
We now prove that such a \(G\) satisfies \((ii)\). Indeed, for any integer \(k\) and any \(Z\coloneqq(z_{1}^{\prime},\cdots,z_{k}^{\prime})\in\mathcal{Z}^{k}\), define \(V\coloneqq((\ell_{1},z_{1}),\cdots,(\ell_{m},z_{m}))\) where \((z_{1},\cdots,z_{m})\) are the distinct elements of \(Z\) and \((\ell_{1},\cdots,\ell_{m})\) are their occurrences. Define \(r\) as their greatest common divisor, and \((k_{1},\ldots,k_{m})=(\ell_{1},\cdots,\ell_{m})/r\). By using the fact that \(f\) is symmetric and redundancy invariant, we get that \(f(Z)=f(Z_{0})=G(Q)\) where \(Z_{0}\) is the dataset consisting of all \(z_{j}\)'s with \(k_{j}\) repetitions and \(Q=\sum_{j=1}^{m}\frac{k_{j}}{K}\delta_{z_{j}}=\frac{1}{k}\sum_{j=1}^{k}\delta_{z_{j}^{\prime}}\), with \(K=\sum_{j=1}^{m}k_{j}\). Thus \((ii)\) holds.
**Theorem 3.2** (Key bound on accuracy).: _Suppose \(P\) is any distribution and \(\mathcal{A}\) is any symmetric redundancy invariant algorithm. Then the accuracy of any MIA \(\phi\) satisfies:_
\[\nu_{*}-\nu_{*}\Delta_{n}(P,\mathcal{A})\leq\text{Acc}_{n}(\phi;P,\mathcal{A}) \leq 1-\nu_{*}+\nu_{*}\Delta_{n}(P,\mathcal{A}).\]
_In particular,_
\[\text{Sec}_{n}(P,\mathcal{A})\geq 1-\Delta_{n}(P,\mathcal{A}).\]
Theorem 3.2 shows that an upper bound on \(\Delta_{n}(P,\mathcal{A})\) translates into a lower bound for the MIS of any algorithm. When \(\nu=1/2\), \(\Delta_{n}(P,\mathcal{A})\) is the quantity that controls the best possible accuracy of MIAs as the result below proves.
**Theorem 3.3**.: _Suppose \(P\) is any distribution, \(\mathcal{A}\) is any symmetric redundancy invariant algorithm and \(\nu=1/2\). Then_
\[\text{Sec}_{n}(P,\mathcal{A})=1-\Delta_{n}(P,\mathcal{A}).\]
We see that \(\Delta_{n}(P,\mathcal{A})\) appears to be a key mathematical quantity for assessing the accuracy of MIAs. We thus study in Sections 4 and 5 situations in which we are able to give precise controls on \(\Delta_{n}(P,\mathcal{A})\). The proof of Theorems 3.2 and 3.3 is given below.
Proof of Theorem 3.2 and Theorem 3.3.: From the law of total probability, we have
\[\text{Acc}_{n}(\phi;P,\mathcal{A}) =P(\phi(\hat{\theta}_{n},\tilde{\mathrm{z}})=1-T)\] \[=P\left(\phi(\hat{\theta}_{n},\tilde{\mathrm{z}})=1-T|T=1\right)P(T=1)+P\left(\phi(\hat{\theta}_{n},\tilde{\mathrm{z}})=1-T|T=0\right)P(T=0)\] \[=\nu P\left(\phi(\hat{\theta}_{n},\bar{z})=0\right)+(1-\nu)P\left(\phi(\hat{\theta}_{n},\mathbf{z}_{1})=1\right),\]
where the third equality comes from the definition of \(\tilde{\mathrm{z}}\) and \(T\). We now define \(B\coloneqq\{(\theta,z)\in\Theta\times\mathcal{Z}:\phi(\theta,z)=1\}\) and rewrite \(\text{Acc}_{n}(\phi;P,\mathcal{A})\) as
\[\text{Acc}_{n}(\phi;P,\mathcal{A})=\nu\left(1-P\left((\hat{\theta}_{n},\bar{z} )\in B\right)\right)+(1-\nu)P\left((\hat{\theta}_{n},\mathbf{z}_{1})\in B \right). \tag{4}\]
Taking the maximum over all MIAs \(\phi\) then reduces to taking the maximum of the r.h.s. of Equation (4) over all measurable sets \(B\). Setting \(\gamma=\frac{\nu}{1-\nu}\), we then get
\[\max_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A})=(1-\nu)\underset{B}{\max}\Big{[} P\left((\hat{\theta}_{n},\bar{z})\in B\right)-\gamma P\left((\hat{\theta}_{n}, \mathbf{z}_{1})\in B\right)\Big{]}+\nu, \tag{5}\]
where the maximum is taken over all measurable sets \(B\). Let now \(\zeta\) be a dominating measure of the distributions of \((\hat{\theta}_{n},\bar{z})\) and \((\hat{\theta}_{n},\mathbf{z}_{1})\) (for instance their average). We denote by \(p\) (resp. \(q\)) the density of the distribution of \((\hat{\theta}_{n},\bar{z})\) (resp. \((\hat{\theta}_{n},\mathbf{z}_{1})\)) with respect to \(\zeta\). Then, the involved maximum in the r.h.s. of Equation (5) is reached on the set
\[B^{*}\coloneqq\{p/q\geq\gamma\}.\]
Since the maximum in Equation (5) is taken over all measurable sets, we may replace \(B\) by its complement \(B^{c}\) in the expression, giving
\[\max_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A})=(1-\nu)\underset{B}{\max}\left[ \gamma P\left((\hat{\theta}_{n},\mathbf{z}_{1})\in B\right)-P\left((\hat{ \theta}_{n},\bar{z})\in B\right)\right]+(1-\nu), \tag{6}\]
where in this case the maximum is reached on the set
\[B^{*}{}^{c}\coloneqq\{p/q<\gamma\}.\]
Taking the average on Equations (5) and (6), we get
\[\max_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A})=\frac{1}{2}+\frac{1}{2}\int \Bigl{|}(1-\nu)p-\nu q\Bigr{|}d\zeta. \tag{7}\]
By the triangle inequality, we obtain the two following inequalities:
\[\max_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A}) \leq\frac{1}{2}+\frac{1-\nu}{2}\int\Bigl{|}p-q\Bigr{|}d\zeta+ \frac{|1-2\nu|}{2}\int qd\zeta,\] \[\max_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A}) \leq\frac{1}{2}+\frac{\nu}{2}\int\Bigl{|}p-q\Bigr{|}d\zeta+\frac{ |1-2\nu|}{2}\int pd\zeta.\]
With \(\int qd\zeta=\int pd\zeta=1\), it holds that when \(\nu\leq 1/2\), we have \(1-2\nu\geq 0\) so that \(1/2+|1-2\nu|/2=1-\nu\). Similarly, we get \(1/2+|1-2\nu|/2=\nu\) when \(\nu\geq 1/2\). Then, setting \(\nu_{*}\coloneqq\min\{\nu,1-\nu\}\) we have in both cases
\[1/2+|1-2\nu|/2=1-\nu_{*}.\]
Since \(\Delta_{n}(P,\mathcal{A})=(1/2)\int|p-q|d\zeta\) from the definition of the total variation distance, by taking the minimum over the two previous expressions, we have
\[\max_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A})\leq 1-\nu_{*}+\nu_{*}\Delta_{n}(P, \mathcal{A}),\]
from which we deduce
\[\text{Sec}_{n}(P,\mathcal{A})\geq 1-\Delta_{n}(P,\mathcal{A}).\]
Following the same steps for the minimum, we have
\[\min_{\phi}\text{Acc}_{n}(\phi;P,\mathcal{A})\geq\nu_{*}-\nu_{*}\Delta_{n}(P, \mathcal{A}),\]
hence Theorem 3.2. Theorem 3.3 comes from plugging \(\nu=1/2\) in Equation (7).
## 4 Main Results - Functions of Empirical Means
In this section, we study the case of algorithms for which the parameters \(\hat{\theta}_{n}\) can be expressed in the form of functions of empirical means (e.g., linear regression with mean-squared error, method of moments...). Specifically, for any (fixed) measurable maps \(L:\mathcal{Z}\to\mathbb{R}^{d}\) and \(F:\mathbb{R}^{d}\to\mathbb{R}^{q}\) for some \(d,q\in\mathbb{N}\), we consider the algorithm
\[\mathcal{A}:(z_{1},\cdots,z_{n})\mapsto\delta_{F\left(\frac{1}{n}\sum_{j=1}^{ n}L(z_{j})\right)}, \tag{8}\]
where \(\delta_{\theta}\) stands for the Dirac mass at \(\theta\).
Without loss of generality, we may assume that
\[\hat{\theta}_{n}\coloneqq F\left(\frac{1}{n}\sum_{j=1}^{n}L(z_{j})\right).\]
Let \(m_{j}\coloneqq\mathbb{E}\left[\left\|C^{-1/2}\Big{\{}L(\mathsf{z}_{1})- \mathbb{E}\left[L(\mathsf{z}_{1})\right]\Big{\}}\right\|_{2}^{j}\right]\) for any positive integer \(j\), that is the expectation of the \(j\)-th power of the norm of the centered and reduced version of \(L(\mathsf{z}_{1})\), and \(C\) be the covariance matrix of \(L(\mathsf{z}_{1})\).
**Theorem 4.1**.: _Suppose that the distribution of \(L(\mathsf{z}_{1})\) has a non zero absolutely continuous part with respect to the Lebesgue measure, and suppose \(m_{3}<\infty\). Then_
\[\Delta_{n}(P,\mathcal{A})\leq\left(C(d)(1+m_{3})+\frac{m_{1}}{2}\right)n^{-1/ 2}+\frac{\sqrt{d}}{2n}, \tag{9}\]
_for some constant \(C(d)\) depending only on the dimension \(d\) of the involved moments._
The proof of Theorem 4.1 can be found in Appendix A.
Theorem 4.1 implies that for distributions \(P\) satisfying the hypotheses, for any positive \(\epsilon\), for any algorithm \(\mathcal{A}\) that can be expressed as a function of empirical means, \(\text{Sec}_{n}(P,\mathcal{A})\) can be made larger than \(1-\epsilon\) as soon as \(\Delta_{n}(P,\mathcal{A})\leq\epsilon\), which holds as soon as
\[\left(C(d)(1+m_{3})+\frac{m_{1}}{2}\right)n^{-1/2}+\frac{\sqrt{d}}{2n}\leq\epsilon. \tag{10}\]
We get the following result.
**Corollary 4.1.1**.: _Assume \(P\) and \(\mathcal{A}\) satisfy the assumptions of Theorem 4.1. Then, for all \(\epsilon\in(0,1)\), \(\text{Sec}_{n}(P,\mathcal{A})\geq 1-\epsilon\) as soon as \(n\geq\sqrt{d}/\epsilon\)._
The proof of Corollary 4.1.1 can be found in Appendix A. Remarkably, Corollary 4.1.1 gives a fundamental, distribution-independent sample size that suffices to guarantee a security of at least \(1-\varepsilon\).
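For illustration, the sample sizes prescribed by Corollary 4.1.1 can be tabulated directly; the values of \(d\) and \(\epsilon\) below are arbitrary.

```python
# Sample sizes n >= sqrt(d)/eps guaranteeing Sec_n(P, A) >= 1 - eps (Corollary 4.1.1).
from math import ceil, sqrt

for d in (10, 1_000, 1_000_000):
    for eps in (0.1, 0.01):
        print(f"d = {d:>7}, eps = {eps:>4}: n >= {ceil(sqrt(d) / eps)}")
```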
We now provide examples for which Theorem 4.1 allows us to give an upper bound on \(\Delta_{n}(P,\mathcal{A})\).
**Example 4.2** (solving equations).: _We seek to estimate an (unknown) parameter of interest \(\theta_{0}\in\Theta\subseteq\mathbb{R}^{d}\). We suppose that we are given two functions \(h:\Theta\to\mathbb{R}^{l}\) and \(\psi:\mathcal{Z}\to\mathbb{R}^{l}\) for some \(l\in\mathbb{N}\), and that \(\theta_{0}\) is solution to the equation_
\[h(\theta_{0})=\mathbb{E}[\psi(\mathsf{z})]. \tag{11}\]
_where \(\mathsf{z}\) is a random variable of distribution \(P\). Having access to data samples \(\mathsf{z}_{1},\ldots,\mathsf{z}_{n}\) drawn independently from the distribution \(P\), we estimate \(\mathbb{E}[\psi(\mathsf{z})]\) by \(\frac{1}{n}\sum_{j=1}^{n}\psi(\mathsf{z}_{j})\). The estimate \(\hat{\theta}_{n}\) of \(\theta_{0}\) is then set to be the solution (if it exists) to the equation_
\[h(\hat{\theta}_{n})=\frac{1}{n}\sum_{j=1}^{n}\psi(\mathsf{z}_{j}).\]
_If the solution exists and \(h\) is invertible, one can set \(\hat{\theta}_{n}=h^{-1}\left(\frac{1}{n}\sum_{j=1}^{n}\psi(\mathsf{z}_{j})\right)\)._
_In particular, when \(\mathcal{Z}=\mathbb{R}\), this method generalizes the method of moments by setting \(\psi(z)=(z,z^{2},\cdots,z^{l})\). We then may apply Theorem 4.1 to any estimators obtained by solving equations._
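As an illustration of Example 4.2, the following sketch instantiates the method of moments for Gaussian data, with \(\psi(z)=(z,z^{2})\) and \(h(\mu,\sigma^{2})=(\mu,\mu^{2}+\sigma^{2})\); these concrete choices are ours and serve only to show that the resulting estimator has the form (8).

```python
# Method-of-moments estimator for (mu, sigma^2) written as F(empirical mean of L(z_j)),
# i.e., in the form of Eq. (8).  Data and parameters are illustrative.
import numpy as np

def L(z):                          # L : Z -> R^2, first two empirical moments
    return np.array([z, z**2])

def F(m):                          # F = h^{-1}: moments -> (mu, sigma^2)
    return np.array([m[0], m[1] - m[0]**2])

rng = np.random.default_rng(1)
z = rng.normal(loc=2.0, scale=1.5, size=10_000)
theta_hat = F(np.mean([L(zj) for zj in z], axis=0))
print(theta_hat)                   # approximately (2.0, 2.25)
```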
**Example 4.3** (Linear Regression).: _Let \(\mathsf{x}\) be a random variable of distribution \(P\) taking values in \(\mathbb{R}^{s}\) for some \(s\in\mathbb{N}\) and \(\mathsf{w}\) be a random variable independent of \(\mathsf{x}\) following a normal distribution of parameters \((0,\sigma^{2})\) for some fixed \(\sigma>0\). Let \(\mathsf{y}\coloneqq\beta^{T}\mathsf{x}+\mathsf{w}\) for some unknown fixed vector \(\beta\in\mathbb{R}^{s}\). We seek to estimate \(\beta\) by minimizing least squares
error. Assume we have access to data samples \(((\mathbf{x}_{1},\mathbf{y}_{1}),\cdots,(\mathbf{x}_{n},\mathbf{y}_{n}))\) drawn independently from the distribution of \((\mathbf{x},\mathbf{y})\). By minimizing least squares error, and by setting \(\mathbf{x}=(\mathbf{x}_{1},\cdots,\mathbf{x}_{n})\) and \(\mathbf{y}=(\mathbf{y}_{1},\cdots,\mathbf{y}_{n})\), one can show that the estimator \(\hat{\beta}_{n}\) of \(\beta\) is given by_
\[\hat{\beta}_{n}\coloneqq(\mathbf{x}\mathbf{x}^{T})^{-1}\mathbf{x}\mathbf{y}^ {T}.\]
_Based on Equation (8), if we set \(F(K,b)\coloneqq K^{-1}b^{T}\) and \(L((x,y))\coloneqq((x^{i}x^{j})_{i,j=1}^{s},(x^{i}y)_{i=1}^{s})\), where \(x^{i}\) is the \(i^{\text{th}}\) coordinate of \(x\), then we can express the estimator as_
\[\hat{\beta}=F\left(\frac{1}{n}\sum_{j=1}^{n}L((x_{j},y_{j}))\right).\]
_We then may apply Theorem 4.1 to the least squares estimator for Linear Regression._
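The following sketch illustrates Example 4.3 numerically: the least-squares estimator is computed as a function of empirical means and checked against a direct least-squares solver; dimensions and data are illustrative.

```python
# Least-squares estimator expressed through empirical means of L((x, y)) = (x x^T, x y).
import numpy as np

rng = np.random.default_rng(2)
s, n = 3, 5_000
beta = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, s))
y = X @ beta + 0.1 * rng.normal(size=n)

K = np.mean([np.outer(x, x) for x in X], axis=0)      # empirical mean of x x^T
b = np.mean([x * yi for x, yi in zip(X, y)], axis=0)  # empirical mean of x y
beta_hat = np.linalg.solve(K, b)                      # F applied to the empirical means

print(np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0]))  # True
```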
## 5 Main Results - Discrete Data Distributions
Throughout this section, the common distribution of the points in the data set is \(P\coloneqq\sum_{j=1}^{K}p_{j}\delta_{u_{j}}\) for some fixed \(K\in\mathbb{N}\cup\{\infty\}\), some fixed probability vector \((p_{1},\cdots,p_{K})\) and some fixed points \(u_{1},\ldots,u_{K}\) in \(\mathcal{Z}\). Without loss of generality, we may assume that \(p_{j}>0\) for all \(j\in\{1,\cdots,K\}\).
**Theorem 5.1**.: _For \(j=1,\cdots,K\), let \(B_{j}\) be a random variable having Binomial distribution with parameters \((n,p_{j})\). Then,_
\[\max_{\mathcal{A}}\,\Delta_{n}(P,\mathcal{A})=\frac{1}{2}\sum_{j=1}^{K} \mathbb{E}\left[\left|\frac{B_{j}}{n}-p_{j}\right|\right], \tag{12}\]
_where the \(\max\) is taken over all symmetric and redundancy invariant algorithms and is reached on algorithms of the form \(\mathcal{A}(z_{1},\cdots,z_{n})=\delta_{F(\frac{1}{n}\sum_{j=1}^{n}\delta_{z_{ j}})}\) for some injective maps \(F\)._
Theorem 5.1 allows us to obtain upper bounds on \(\Delta_{n}(P,\mathcal{A})\), and consequently lower bounds on \(\text{Sec}_{n}(P,\mathcal{A})\), for any algorithm \(\mathcal{A}\). The proof of Theorem 5.1 can be found in Appendix B.
We now propose an analysis of the r.h.s. of (12). Define \(C(P)\coloneqq\sum_{j=1}^{K}\sqrt{p_{j}(1-p_{j})}\). We first give a general upper bound which is meaningful as soon as \(C(P)<\infty\).
**Corollary 5.1.1**.: _In all cases (\(K\) finite or infinite), if \(C(P)<\infty\), then_
\[\max_{\mathcal{A}}\,\Delta_{n}(P,\mathcal{A})\leq\frac{C(P)}{2}n^{-1/2}.\]
_If \(K=\infty\) and \(C(P)=\infty\), \(\max_{\mathcal{A}}\,\Delta_{n}(P,\mathcal{A})\) still tends to \(0\) as \(n\) tends to infinity, but the rate, which depends on \(P\), can be arbitrarily slow._
Corollary 5.1.1 implies that for distributions \(P\) such that \(C(P)<\infty\), for any positive \(\epsilon\), for any algorithm \(\mathcal{A}\), \(\text{Sec}_{n}(P,\mathcal{A})\) can be made larger than \(1-\epsilon\) as soon as the data set contains more than \((C(P)/2\epsilon)^{2}\) points. Notice that \(C(P)\) can be estimated using the dataset also. However, for distributions with \(C(P)=\infty\), finding the amount of data needed to get the same control on \(\text{Sec}_{n}(P,\mathcal{A})\) requires the estimation of the r.h.s. of (12) which is not obvious.
Notice that when \(K\) is finite, \(C(P)\) is also finite. We now prove that in this case, the dependence of \(\Delta_{n}(P,\mathcal{A})\) on the number of data points is indeed of order \(n^{-1/2}\).
**Corollary 5.1.2**.: _Suppose \(K<\infty\). For all \(n\geq 2\),_
\[\max_{\mathcal{A}}\,\Delta_{n}(P,\mathcal{A})\geq\frac{\exp(-13/6)}{\sqrt{2\pi }}(C(P)-1/\sqrt{2n})n^{-1/2}.\]
The proof of Corollaries 5.1.1 and 5.1.2 can be found in Appendix B.
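As an illustration, the exact expression (12) and the upper bound of Corollary 5.1.1 can be evaluated numerically for a small finite distribution; the probability vector below is an arbitrary example.

```python
# Exact value of max_A Delta_n(P, A) from Eq. (12) versus the bound C(P)/2 * n**(-1/2).
import numpy as np
from scipy.stats import binom

p = np.array([0.5, 0.3, 0.2])                  # an arbitrary distribution on K = 3 points
C_P = np.sum(np.sqrt(p * (1 - p)))

def max_delta(n):
    total = 0.0
    for pj in p:
        k = np.arange(n + 1)
        total += np.sum(binom.pmf(k, n, pj) * np.abs(k / n - pj))   # E|B_j/n - p_j|
    return total / 2

for n in (10, 100, 1000):
    print(n, max_delta(n), C_P / (2 * np.sqrt(n)))
```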
When the algorithm deterministically maps \(\mathbf{z}\) to some element of \(\Theta\), that is \(\mathcal{A}\) can be written as \(\mathcal{A}(z_{1},\cdots,z_{n})=\delta_{F(\frac{1}{n}\sum_{j=1}^{n}\delta_{z_{ j}})}\) for some map \(F\), and the support \(\Theta\) of the image distribution has finite cardinal \(L\in\mathbb{N}\), it is not always possible to construct functions \(F\) that are injective. In this case, one may rewrite Theorem 5.1 as follows.
**Lemma 5.2**.: _Let \(\mathbf{r}\coloneqq(N_{1},\cdots,N_{K})\) be a random vector having multinomial distribution with parameters \((n;p_{1},\cdots,p_{K})\). There exists a partition \((D_{l})_{l=1\cdots L}\) of the support of \(\mathbf{r}\) such that_
\[\Delta_{n}(P,\mathcal{A})=\frac{1}{2}\sum_{j=1}^{K}\sum_{l=1}^{L}\left|\mathbb{ E}\left[\left\{p_{j}-\frac{N_{j}}{n}\right\}1\{\mathbf{r}\in D_{l}\}\right] \right|.\]
Lemma 5.2 is a tool to understand the behaviour of \(\Delta_{n}(P,\mathcal{A})\) depending on the structure of the algorithm \(\mathcal{A}\). Although there is a strong similarity between Lemma 5.2 and Theorem 5.1, the value of \(\Delta_{n}(P,\mathcal{A})\) in Lemma 5.2 is smaller than the right hand side of Equation (12). This could informally mean that discretizing/quantizing an algorithm improves its security.
We conclude this section by providing an example in which \(\Delta_{n}(P,\mathcal{A})\) has a much faster rate than \(n^{-1/2}\).
**Lemma 5.3**.: _Let \(P\) be the Bernoulli distribution with parameter \(p\in(0,1)\) and let \(\hat{\theta}_{n}\coloneqq\sup_{j}\mathbf{z}_{j}\). Then,_
\[\Delta_{n}(P,\mathcal{A})=2p(1-p)^{n}.\]
The proofs of Lemmas 5.2 and 5.3 can be found in Appendix B. Using Lemma 5.3 and the second part of Corollary 5.1.1, one sees that in the case of algorithms whose output has finite support, \(\Delta_{n}(P,\mathcal{A})\) may exhibit many different behaviours.
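Lemma 5.3 can be checked by writing down both joint laws in (3) explicitly, as in the following sketch (the values of \(p\) and \(n\) are arbitrary).

```python
# Direct check of Lemma 5.3: Bernoulli(p) data, theta_hat = max_j z_j, so both joint
# laws in Eq. (3) live on {0, 1} x {0, 1} and Delta_n = 2 p (1 - p)**n.
def delta_sup(p, n):
    q0 = (1 - p) ** n                                   # P(theta_hat = 0)
    joint_member = {(1, 1): p, (0, 1): 0.0,
                    (1, 0): (1 - p) - (1 - p) ** n, (0, 0): (1 - p) ** n}
    joint_fresh = {(t, z): (q0 if t == 0 else 1 - q0) * (p if z == 1 else 1 - p)
                   for t in (0, 1) for z in (0, 1)}
    return sum(abs(joint_member[k] - joint_fresh[k]) for k in joint_member) / 2

p, n = 0.3, 20
print(delta_sup(p, n), 2 * p * (1 - p) ** n)            # the two values coincide
```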
## 6 Summary and Discussion
The findings presented in this article should be interpreted as demonstrating the impossibility of successful attacks. Specifically, when dealing either with discrete data distributions or functionals of empirical means, our research has established that if there are a sufficient number of samples relative to the range of possibilities, attacks are unable to effectively deduce membership. In such scenarios, the information acquired by attackers becomes irrelevant, ensuring the security of the models. Notably, the rates of convergence consistently follow an order of \(n^{-1/2}\), with the constants in the rates of convergence scaling with the number of discrete data points and the dimension of the parameters in the case of functionals of empirical means.
**Future work.** Although a thorough examination of the broader continuous case is yet to be addressed, we view the empirical mean example as a fundamental starting point for future investigations. Specifically, we believe that this study may be extended to the study of flow of empirical means, which would lead to the complete study of maximum likelihood estimation, empirical loss minimization algorithms and Stochastic Gradient Descent (SGD). Additionally, we think that the crucial statistical quantity we have identified paves the way for a new avenue of research, offering intriguing and promising links with various domains. These potential connections include but are not limited to differential privacy, out-of-distribution detection, statistical generalization of learning algorithms, and more.
|
2306.01464 | Theoretical Behavior of XAI Methods in the Presence of Suppressor
Variables | In recent years, the community of 'explainable artificial intelligence' (XAI)
has created a vast body of methods to bridge a perceived gap between model
'complexity' and 'interpretability'. However, a concrete problem to be solved
by XAI methods has not yet been formally stated. As a result, XAI methods are
lacking theoretical and empirical evidence for the 'correctness' of their
explanations, limiting their potential use for quality-control and transparency
purposes. At the same time, Haufe et al. (2014) showed, using simple toy
examples, that even standard interpretations of linear models can be highly
misleading. Specifically, high importance may be attributed to so-called
suppressor variables lacking any statistical relation to the prediction target.
This behavior has been confirmed empirically for a large array of XAI methods
in Wilming et al. (2022). Here, we go one step further by deriving analytical
expressions for the behavior of a variety of popular XAI methods on a simple
two-dimensional binary classification problem involving Gaussian
class-conditional distributions. We show that the majority of the studied
approaches will attribute non-zero importance to a non-class-related suppressor
feature in the presence of correlated noise. This poses important limitations
on the interpretations and conclusions that the outputs of these XAI methods
can afford. | Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe | 2023-06-02T11:41:19Z | http://arxiv.org/abs/2306.01464v1 | # Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables
###### Abstract
In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'. However, a concrete problem to be solved by XAI methods has not yet been formally stated. As a result, XAI methods are lacking theoretical and empirical evidence for the 'correctness' of their explanations, limiting their potential use for quality-control and transparency purposes. At the same time, Haufe et al. (2014) showed, using simple toy examples, that even standard interpretations of linear models can be highly misleading. Specifically, high importance may be attributed to so-called suppressor variables lacking any statistical relation to the prediction target. This behavior has been confirmed empirically for a large array of XAI methods in Wilming et al. (2022). Here, we go one step further by deriving analytical expressions for the behavior of a variety of popular XAI methods on a simple two-dimensional binary classification problem involving Gaussian class-conditional distributions. We show that the majority of the studied approaches will attribute non-zero importance to a non-class-related suppressor feature in the presence of correlated noise. This poses important limitations on the interpretations and conclusions that the outputs of these XAI methods can afford.
## 1 Introduction
The field of 'explainable artificial intelligence' (XAI) is devoted to answering the broad question of why an automatic decision system put forward a certain prediction. This is often addressed by techniques that attribute a so-called 'importance' score to each feature of an individual test input. It is commonly agreed that being able to answer this question is necessary to create trust in and a better understanding of the behavior of such decision systems (Baehrens et al., 2010; Ribeiro et al., 2016; Binder et al., 2016; Lundberg and Lee, 2017; Fisher et al., 2019). In Haufe et al. (2014) and Wilming et al. (2022), it was shown that features which certain XAI methods determine to be important, e.g. by inspecting their corresponding weights of a linear model, may actually not have any statistical association with the predicted variable. As a result, the provided 'explanation' may not agree with prior domain knowledge of an expert user and might undermine that user's trust in the predictive model, even if it performs optimally. Indeed, a highly accurate model might exploit so-called suppressor features (Conger, 1974; Friedman and Wall, 2005), which can be statistically independent of the prediction target yet still lead to increased prediction performance. On the other hand, incorrect explanations may implant misconceptions about the data, the model and/or the relationship between the two into a user's mind, which could lead to misguided actions that could be harmful.
While Haufe et al. (2014) have introduced low-dimensional and well-controlled examples to illustrate the problem of suppressor variables for model interpretation, Wilming et al. (2022) showed empirically that the emergence of suppressors indeed poses a problem for a large group of XAI methods and diminishes their 'explanation performance'. Here, we go one step further and derive analytical expressions for commonly used XAI methods for a simple two-dimensional linear data generation process capable of creating suppressor variables by parametrically inducing correlations between features. In particular, we investigate which XAI approaches attribute non-zero importance to plain suppressor variables that are by construction independent of the prediction target and thereby violate a data-driven definition of feature importance recently put forward by Wilming et al. (2022).
## 2 Related Work
XAI methods often analyze ML models in a post-hoc manner (Arrieta et al., 2020), where a trained model deemed to be 'non-interpretable', such as a deep neural network, is given, while the XAI methods attempt to'reverse-engineer' its decision for a given input sample. A crucial limitation of
the field of XAI is that it is still an open question what formal requirements _correct_ explanations would need to fulfill and what conclusions about data, model, and their relationship the analysis of an importance map provided by XAI methods should afford. The lack of a clear definition of what problem XAI is supposed to solve led to multiple studies evaluating explanation methods (e.g. Doshi-Velez and Kim, 2017; Kim et al., 2018; Alvarez-Melis and Jaakkola, 2018; Adebayo et al., 2018; Sixt et al., 2020). Yet, these studies primarily employ auxiliary metrics to measure secondary quality aspects, such as the stability of the provided maps. For example, Yang and Kim (2019) investigate how importance maps for one model change relative to another model. Until recently, it has been considered difficult to define and evaluate the correctness of importance maps, because real-world datasets, which are ubiquitous in the ML community as benchmarks for supervised prediction tasks, do not offer access to the 'true' set of important features. However, several XAI benchmarks using controlled synthetic data have emerged in the past three years. Agarwal et al. (2022) propose a benchmark that can generate synthetic data and assess XAI methods on a broad set of evaluation metrics. The authors state that their framework predominantly serves the purpose of gaining a better understanding of a model's internal mechanics, which would primarily show the debugging capabilities of XAI methods rather than their ability to generate knowledge of'real-world' effects. Sixt et al. (2020) provide a theoretical analysis of convergence problems of so-called saliency methods, especially Layer-wise Relevance Propagation (LRP, Bach et al., 2015), Deep Taylor Decomposition (DTD, Montavon et al., 2017), and DeepLIFT (Shrikumar et al., 2017). Notably, the provided derivations do not take the model's input data into account. Kindermans et al. (2018) use a minimal data generation example, to mainly motivate a discussion about drawbacks of saliency maps to finally propose novel explanation techniques based on the DTD framework. Janzing et al. (2020) consider a structural data generation model, promoting unconditional expectations as a value function for SHAP (Lundberg and Lee, 2017) by demonstrating that observational conditional expectations are flawed. In an extensive study on Partial Dependency Plots (Friedman, 2001) and M-plots (Apley and Zhu, 2020), Gromping (2020) theoretically analyse a regression task via a pre-defined regression model \(\mathbb{E}(Y|\mathbf{x})\) with multivariate Gaussian distributed data. They argue that M-plots can lead to deceptive results, especially if machine learning models rely on interaction effects. Wilming et al. (2022) empirically study common post-hoc explanation methods using a carefully crafted dataset based on a linear data generation process. Here, all statistical dependencies and absolute feature importances are well defined, giving rise to ground-truth importance maps. This empirical study showed that most XAI methods indeed highlight suppressor features as important.
### Definition of Feature Importance
In this paper, we adopt a data-driven notion proposed by Wilming et al. (2022) as a tentative definition of feature importance. We consider a supervised learning task, where a model \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) learns a function between an input \(\mathbf{x}^{(i)}\in\mathbb{R}^{d}\) and a target \(y^{(i)}\in\mathbb{R}\), based on training data \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)})_{i=1}^{N}\). Here, \(\mathbf{x}^{(i)}\) and \(y^{(i)}\) are realizations of the random variables \(\mathbf{X}\) and \(Y\), with joint probability density function \(p_{\mathbf{X},Y}(\mathbf{x},y)\). Then a feature \(X_{j}\) can be defined to be important if it has a statistical association to the target variable \(Y\), i.e.
\[X_{j}\text{ is important}\Rightarrow X_{j}\not\perp Y. \tag{1}\]
### Suppressor Variables
To illustrate the characteristics of suppressor variables, consider a binary classification problem with two measured scalar input features \(x_{1}\) and \(x_{2}\), where \(x_{1}\) carries all discriminative information, following Haufe et al. (2014). We design the input data such that \(x_{1}\) holds the signal of interest \(z\in\{-1,1\}\), which is identical to the target variable \(y=z\). Furthermore, during the measuring process, feature \(x_{1}\) is inadvertently obfuscated by a _distractor_\(\eta\): \(x_{1}=z+\eta\). The second feature only consists of the distractor signal, i.e. \(x_{2}=\eta\). Our goal is to learn a function that can discriminate between the two states \(y=-1\) and \(y=1\) or, in other words, recover the signal of interest \(z\). We can build a model solely based on feature \(x_{1}\) to solve the classification problem, as \(x_{1}\) is the only feature that contains information about \(y=z\). Yet, the obfuscation of \(x_{1}\) by the distractor \(\eta\) diminishes its predictive power. On the other hand, feature \(x_{2}\) does not contain any information about \(y=z\). Therefore, a model solely based on \(x_{2}\) cannot reach above chance-level classification accuracy. However, a bivariate linear model with a weight vector \(w=(1,-1)^{\top}\) can perfectly recover the signal of interest and, thereby, the target: \(w^{\top}\mathbf{x}=z+\eta-\eta=z=y\). Additionally, structural equation models (SEMs) depict different ways in which a variable \(X_{2}\) can influence the prediction of a target variable
Figure 1: In (a), feature \(X_{2}\) is a confounder variable influencing \(Y\) and another feature \(X_{1}\), causing spurious associations. In contrast, in (b) \(X_{2}\) is a so-called suppressor variable that has no statistical association with the target \(Y\), although both influence feature \(X_{1}\), which is called a collider.
\(Y\). In Figure 1(a) \(X_{2}\) is a confounder variable influencing \(Y\) and another feature \(X_{1}\), causing spurious associations. Confounders can appear, for example, as watermarks in image classification tasks, as studied by Lapuschkin et al. (2019) and can reduce the generalization capabilities of a model to new data where confounders might be absent. However, in contrast, we consider suppressor variables \(X_{2}\) (see Figure 1(b)) that have no statistical associations with a target variable \(Y\), while \(X_{1}\) is a collider variable, taking input from both \(Y\) and \(X_{2}\). Here, we can establish the relation \(\mathrm{P}(X_{2}\mid X_{1})\neq\mathrm{P}(X_{2}\mid X_{1},Y)\) showing a conditional dependency of the suppressor \(X_{2}\) on the target \(Y\). These conditional dependencies are used by multivariate methods to improve the accuracy of predictions. In practice, XAI methods do not distinguish whether a feature is a confounder or a suppressor, which can lead to misunderstandings about a model's performance and interpretation.
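The following short simulation, with an arbitrarily chosen noise scale, illustrates this construction: a classifier using only \(x_{2}\) performs at chance level, while the weight vector \(w=(1,-1)^{\top}\) recovers the target exactly.

```python
# Minimal simulation of the suppressor construction x1 = z + eta, x2 = eta.
# The noise scale is an illustrative choice.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
z = rng.choice([-1, 1], size=n)          # target y = z
eta = rng.normal(scale=2.0, size=n)      # distractor
x1, x2 = z + eta, eta

print(np.mean(np.sign(x1) == z))         # degraded by the distractor (about 0.69 here)
print(np.mean(np.sign(x2) == z))         # chance level: x2 carries no class information
print(np.mean(np.sign(x1 - x2) == z))    # w = (1, -1): perfect recovery, accuracy 1.0
```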
## 3 Methods
The purpose of this paper is to use a simple model of suppressor variables as a device to analyze the importances produced by a number of popular XAI methods, and to compare these importance scores to our data-driven definition of feature importance (1). In the following, we introduce notation that we will use throughout the text, define the data generation model, derive the Bayes optimal classifier, and provide further technical remarks.
### Linear Generative Model
We now slightly extend the generative data model of Section 2.2 and provide its full specification. Again, we consider a binary classification problem with a two-dimensional feature space where feature \(x_{1}\), by construction, is statistically associated with the target \(y\), while feature \(x_{2}\) fulfills the definition of a suppressor variable. Correlations between both features are introduced parametrically through a Gaussian noise process, as a result of which the Bayes optimal classifier generally needs to make use of the suppressor variable. We define \(H\) and \(Z\) as the random variables of the realizations \(\eta\) and \(z\), respectively, to describe the linear generative model
\[\mathbf{x}=\mathbf{a}z+\eta,\quad y=z, \tag{2}\]
with \(Z\sim Rademacher(1/2)\), \(\mathbf{a}=(1,0)^{\top}\) and \(H\sim N(\mathbf{0},\Sigma)\) with a covariance matrix parameterized as follows:
\[\Sigma=\begin{bmatrix}s_{1}^{2}&cs_{1}s_{2}\\ cs_{1}s_{2}&s_{2}^{2}\end{bmatrix}\, \tag{3}\]
where \(s_{1}\) and \(s_{2}\) are non-negative standard deviations and \(c\in[-1,1]\) is a correlation. The vector \(\mathbf{a}\) is also called signal _pattern_(Haufe et al., 2014; Kindermans et al., 2018). With that, the generative model (2) induces a binary classification problem, where \(\mathbf{X}=(X_{1},X_{2})\) is the random variable of the realization \(\mathbf{x}\) with the joint density
\[p(\mathbf{x})=\pi p_{1}(\mathbf{x}\mid Y=1)+(1-\pi)p_{2}(\mathbf{x}\mid Y=-1 )\, \tag{4}\]
and prior probabilities \(\pi=\mathrm{P}(Y=\pm 1)=1/2\). The densities \(p_{1/2}\) are the class-conditional densities, which are both multivariate normal, with \(\mathbf{X}\mid Y=y\sim N(\mu_{i},\Sigma)\) for \(y\in\{-1,1\}\) and \(i=1,2\); they have identical covariance matrix \(\Sigma\in\mathbb{R}^{2\times 2}\) and expectations \(\mu_{1}=(1,0)^{\top}\) and \(\mu_{2}=(-1,0)^{\top}\). A graphical depiction of the data generated by our data model is provided in Figure 2.
### Bayes Optimal Classifier
The classifier \(g:\mathbb{R}^{d}\to\{-1,1\}\) that minimizes the error \(\mathrm{P}(g(\mathbf{X})\neq Y)\) is called the Bayes optimal classifier and defined by \(g(\mathbf{x})=\mathbb{I}_{f^{*}(\mathbf{x})>1/2}\), with the conditional probability \(f^{*}(\mathbf{x})=\mathrm{P}(Y=1|\mathbf{X}=\mathbf{x})\). For multivariate normal class-conditional densities, we can calculate the exact Bayes rule \(f:\mathbb{R}^{d}\to\mathbb{R}\), which in this case is a linear discriminant function with \(g(\mathbf{x})=\mathbb{I}_{f(\mathbf{x})>0}\) and \(f(\mathbf{x})=\mathbf{w}^{\top}\mathbf{x}+b\).
The generative data model, defined above in section 3.1, induces a binary classification problem yielding two class-conditional densities which are both multivariate normal. We solve the classification task in a Bayes optimal way if we assign \(\mathbf{x}\) either to class \(Y=1\) or to class \(Y=-1\) based on the minimal squared Mahalanobis distance \(\delta^{2}(\mathbf{x},\mu_{i})=(\mathbf{x}-\mu_{i})^{\top}\Sigma^{-1}(\mathbf{ x}-\mu_{i})\) between \(\mathbf{x}\) and the two class means \(\mu_{i},i=1,2\). Then the concrete form of the linear Bayes rule is determined by the coefficients
\[w_{1}=\alpha,\quad w_{2}=-\alpha cs_{1}/s_{2} \tag{5}\]
for \(\alpha\coloneqq(1+(cs_{1}/s_{2})^{2})^{-\frac{1}{2}}\) and \(||\mathbf{w}||_{2}=1\). Note, the classification problem is set up such that the linear decision rule requires no offset or bias term, i.e. \(b=0\). In Appendix A we provide further details for deriving the Bayes optimal decision rule \(f\).
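As a numerical sanity check, for arbitrarily chosen parameter values, the closed-form weights (5) coincide with the normalized LDA direction \(\Sigma^{-1}(\mu_{1}-\mu_{2})\):

```python
# The Bayes optimal weights (5) agree with the normalized direction Sigma^{-1}(mu_1 - mu_2).
# Parameter values are illustrative.
import numpy as np

s1, s2, c = 0.9, 0.7, 0.6
Sigma = np.array([[s1**2, c * s1 * s2],
                  [c * s1 * s2, s2**2]])
w = np.linalg.solve(Sigma, np.array([1.0, 0.0]) - np.array([-1.0, 0.0]))
w /= np.linalg.norm(w)                                  # normalize to ||w||_2 = 1

alpha = (1 + (c * s1 / s2) ** 2) ** -0.5
print(w, np.array([alpha, -alpha * c * s1 / s2]))       # the two vectors agree
```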
### Notation
Throughout, \(f:\mathbb{R}^{d}\to\mathbb{R}\) is a learned function, in our case the Bayes optimal classifier, where \(f\) usually represents the linear decision rule itself. The dimension of the input domain, \(d\in\mathbb{N}\), is set to \(d=2\). We define an index set of all features \([d]\coloneqq\{1,\ldots,d\}\), in order to define features of interest as a subset \(S\subset[d]\), where \(x_{S}\) denotes the restriction of \(\mathbf{x}\in\mathbb{R}^{d}\) to the index set \(S\). Analogously, we define the complement \(C=[d]\setminus S\), defining \(x_{C}\) as all other features that are not of interest in a particular explanation task. We also define the output of any XAI method as a mapping \(e_{S}:\mathbb{R}^{d}\to\mathbb{R}\) representing the importance or'relevance' assigned by the method to the feature set \(S\).
## 4 Analysis of Common Explanation Methods
In the following, we provide a theoretical analysis of popular XAI methods. The linear generative model (2) is our device to assess those methods' behavior in the presence of suppressor features.
### Gradient
A ML model's gradient itself is often used for explanations, as it describes the change of the model output as a function of the change of the input parameters (e.g. Gevrey et al., 2003; Selvaraju et al., 2017). For linear models, the gradient is identical to the model weights, and thus independent of the input sample. This might be in part a reason why linear models are sometimes described as 'glass-box' models, particularly when it comes to explaining complex non-linear models via linear surrogate models (e.g. Ribeiro et al., 2016). However, we can see that the Bayes optimal classifier's weights (5), which are the gradient of the optimal decision function \(f\), clearly attribute non-zero importance to the suppressor variable \(x_{2}\), which is inconsistent with the data-driven definition of feature importance (1).
### Pattern
Haufe et al. (2014) argue that the coefficients of linear models are difficult to interpret. In particular, they may highlight suppressor variables. Instead, the authors propose a transformation to convert weight vectors into parameters \(\mathbf{a}\) of a corresponding linear _forward model_\(\mathbf{x}=\mathbf{a}f(\mathbf{x})+\varepsilon\). The solution is provided by the covariance between the model output and each input feature: \(a_{j}=\mathrm{Cov}(x_{j},f(\mathbf{x}))=\mathrm{Cov}(x_{j},\mathbf{w}^{\top}\mathbf{x})\), for \(j=1,\ldots,d\), which yields a global importance map
\[e_{S}(\mathbf{x})\coloneqq(\mathrm{Cov}(\mathbf{x},\mathbf{x})w)_{S} \tag{6}\]
called _linear activation pattern_(Haufe et al., 2014). For the generative model (2) and the Bayes optimal classifier (5), we obtain
\[e_{\{1\}}(\mathbf{x})=\alpha s_{1}^{2}(1-c^{2})\,\quad e_{\{2\}}(\mathbf{x})=0. \tag{7}\]
Thus, the pattern approach does not attribute any importance to the suppressor feature \(x_{2}\).
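The following sketch evaluates (6) for the Bayes weights (5), taking the covariance in (6) to be the class-conditional noise covariance \(\Sigma\) (an assumption made here so that the result can be compared with (7); parameter values are arbitrary).

```python
# Activation pattern (6) for the Bayes weights (5), with the covariance taken to be the
# class-conditional noise covariance Sigma (assumption of this sketch).
import numpy as np

s1, s2, c = 0.9, 0.7, 0.6
alpha = (1 + (c * s1 / s2) ** 2) ** -0.5
w = np.array([alpha, -alpha * c * s1 / s2])
Sigma = np.array([[s1**2, c * s1 * s2],
                  [c * s1 * s2, s2**2]])

pattern = Sigma @ w
print(pattern)                                               # [alpha*s1**2*(1 - c**2), 0]
print(np.isclose(pattern[0], alpha * s1**2 * (1 - c**2)),
      np.isclose(pattern[1], 0.0))                           # True True: no importance on x2
```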
### Faithfulness and Pixel Flipping
It is widely acknowledged that the correctness of any XAI method as well as the correctness of a given importance map is notoriously hard to assess. This is because there exists no agreed-upon definition of importance, and because 'true' importance scores are rarely available when it comes to solving problems with learning algorithms. Nonetheless, surrogate metrics have been defined to work around this problem. These metrics are often referred to as 'faithfulness' and, rather than being based on fundamental properties of the data and/or model, they are often based on predictability arguments. Faithfulness is not a well-defined concept and has numerous notions, some of which are tied to specific XAI methods (Jacovi and Goldberg, 2020). As these metrics are often defined algorithmically, they can be regarded as XAI methods in their own right.
The most widely adopted notion of faithfulness is that the omission or obfuscation of an important feature will lead to a decrease in a model's prediction performance. One algorithmic operationalization to assess this is the 'pixel flipping' method (Samek et al., 2017). For linear models, the simplest form of flipping or removing features is just by setting their corresponding weights \(w_{j}\) to zero. With this, we can approximate the classification losses through squared errors as
\[e_{S}(\mathbf{x})\coloneqq\mathbb{E}\big{(}(Y-f_{w_{S}=0}(\mathbf{x}))^{2} \big{)}-\mathbb{E}\big{(}(Y-f(\mathbf{x}))^{2}\big{)}. \tag{8}\]
Figure 3: Analytical approximations of faithfulness and permutation feature importance. Shown is a family of curves as a function of feature correlation \(c\in[-1,1]\) and variance \(s_{1}^{2}\), for constant variance \(s_{2}^{2}=0.5\). Importance maps differ in offsets, indicating consistently higher importance for the informative feature \(x_{1}\). Yet, both methods allocate importance also to the suppressor feature \(x_{2}\) for \(c>0\). Analogous figures for different \(s_{2}^{2}\) values are contained in the supplementary Figures 6 and 7.
Figure 2: Data sampled from the generative process (2) for different correlations \(c\) and constant variances \(s_{1}^{2}=0.8\) and \(s_{2}^{2}=0.5\). Boundaries of Bayes optimal decisions are shown as well. The marginal sample distributions illustrate that feature \(x_{2}\) does not carry any class-related information.
For features \(x_{1}\) and \(x_{2}\), we obtain
\[\begin{split} e_{\{1\}}(\mathbf{x})&=2\alpha-\alpha^{2 }+\alpha^{2}s_{1}^{2}(2c^{2}-1),\\ e_{\{2\}}(\mathbf{x})&=\alpha^{2}c^{2}s_{1}^{2} \,\end{split} \tag{9}\]
as derived in Appendix C. We can observe that for non-zero correlation \(c\), \(e_{\{2\}}\) is non-zero; that is, pixel-flipping assigns importance to the suppressor feature \(x_{2}\).
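A direct numerical check of Eq. (8) is straightforward under the same assumptions as in the previous sketches (our reading of the generative process, weights with \(\alpha=1\), illustrative parameter values): zero out one weight, recompute the squared-error loss, and take the difference. The score for the suppressor comes out strictly positive whenever \(c\neq 0\), in line with Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(2)
n, c, s1, s2 = 200_000, 0.8, np.sqrt(0.8), np.sqrt(0.5)
y = rng.choice([-1.0, 1.0], size=n)
cov = np.array([[s1**2, c * s1 * s2], [c * s1 * s2, s2**2]])
X = np.column_stack([y, np.zeros(n)]) + rng.multivariate_normal(np.zeros(2), cov, size=n)
w = np.array([1.0, -c * s1 / s2])          # linear model weights, scale alpha = 1

def mse(weights):
    return np.mean((y - X @ weights) ** 2)

def pixel_flipping(j):
    """Eq. (8): increase in squared-error loss when weight j is 'flipped' to zero."""
    w_flipped = w.copy()
    w_flipped[j] = 0.0
    return mse(w_flipped) - mse(w)

print("e_1 =", pixel_flipping(0), " e_2 =", pixel_flipping(1))
# e_2 > 0 whenever c != 0: the removal-based score attributes importance to the suppressor x2.
```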
### Permutation Feature Importance
Proposed by Breiman (2001), the permutation feature importance (PFI) for features \(x_{S}\) measures the drop in classification performance when the associations between \(x_{S}\) and the corresponding class labels is broken via random permutation of the values of \(x_{S}\). As in pixel flipping, a significant drop in performance defines an important feature (set). Let \(\pi_{S}(\mathbf{x})\) be the randomly permuted version of \(\mathbf{x}\), where features with indices in \(S\) are permuted and the remaining components are untouched. The randomly permuted features \(\pi_{S}(\mathbf{x})\) and \(x_{S}\) are independent and identically distributed now, which leads to the following approximation of PFI:
\[\begin{split} e_{S}(\mathbf{x})\coloneqq\mathbb{E}\big{(}(Y-f( \pi_{S}(\mathbf{x})))^{2}\big{)}-\mathbb{E}\big{(}(Y-f(\mathbf{x}))^{2}\big{)} \.\end{split} \tag{10}\]
For features \(x_{1}\) and \(x_{2}\), we obtain
\[\begin{split} e_{\{1\}}(\mathbf{x})=2\alpha+2\alpha^{2}c^{2}s_{ 1}^{2}\quad e_{\{2\}}(\mathbf{x})=2\alpha^{2}c^{2}s_{1}^{2}\.\end{split} \tag{11}\]
Thus, similar to faithfulness, PFI assigns non-zero importance to \(x_{2}\) if \(|c|>0\). This similarity is expanded upon in Appendix D, and a graphical depiction of that behavior is presented for both methods in Figure 3.
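Eq. (10) can be approximated in the same way by permuting one column of the data instead of zeroing a weight. The following sketch, under the same illustrative assumptions as above, shows that the permutation score for \(x_{2}\) is positive for \(c\neq 0\), mirroring Eq. (11).

```python
import numpy as np

rng = np.random.default_rng(3)
n, c, s1, s2 = 200_000, 0.8, np.sqrt(0.8), np.sqrt(0.5)
y = rng.choice([-1.0, 1.0], size=n)
cov = np.array([[s1**2, c * s1 * s2], [c * s1 * s2, s2**2]])
X = np.column_stack([y, np.zeros(n)]) + rng.multivariate_normal(np.zeros(2), cov, size=n)
w = np.array([1.0, -c * s1 / s2])

def mse(Z):
    return np.mean((y - Z @ w) ** 2)

def permutation_importance(j):
    """Eq. (10): increase in squared-error loss after randomly permuting feature j."""
    Z = X.copy()
    Z[:, j] = rng.permutation(Z[:, j])
    return mse(Z) - mse(X)

print("PFI e_1 =", permutation_importance(0), " PFI e_2 =", permutation_importance(1))
# Both scores are positive for c != 0, so PFI also flags the suppressor x2 as important.
```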
### Partial Dependency Plots
Partial dependency (PD) plots are a visualization tool for (learned) high-dimensional functions, aiming to foster a deeper understanding of the relations between their in- and outputs. PD plots also became widely appreciated in the XAI community, where they have been proposed as model-agnostic 'interpretation' or 'explanation' tools (e.g., Molnar, 2020). For a group of features of interest \(x_{S}\) and remaining features \(x_{C}\), the partial dependency function is the average function
\[\begin{split} e_{S}(\mathbf{x})\coloneqq\mathbb{E}_{x_{C}} \big{(}f(\mathbf{x})\big{)}=\int_{\mathbb{R}}f(x_{S},x_{C})p(x_{C})\mathrm{d} x_{C}\,\end{split} \tag{12}\]
where \(p(x_{C})\) denotes the marginal probability density function, or'marginal expectation', of \(x_{C}\). The Bayes optimal decision (5) allows us to directly state the partial dependency functions for features \(x_{1}\) and \(x_{2}\) as
\[\begin{split} e_{\{1\}}(\mathbf{x})=\alpha x_{1}\quad e_{\{2\}}( \mathbf{x})=-\alpha cs_{1}s_{2}^{-1}x_{2}\.\end{split} \tag{13}\]
These results indicate that the PD function does vary as a function of the suppressor feature \(x_{2}\). This is further illustrated in Figure 4, which shows PD plots with corresponding scatter plots of the log odds \(f(\mathbf{x})\) as a function of the feature of interest \(x_{S}\). The partial dependency function for \(x_{2}\) is heavily influenced by the correlation of \(x_{1}\) and \(x_{2}\) and only vanishes for \(c=0\), indicating that PD plots are indeed merely a tool to visualize relations between in- and outputs of a function rather than providing 'explanations' compatible with the data-driven definition of feature importance (1). This is in line with works reporting problematic behavior of PD plots when applied to strongly correlated data (Apley & Zhu, 2020; Molnar, 2020).
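Eq. (12) corresponds to the standard Monte Carlo recipe for partial dependence: clamp the feature of interest to a grid value and average the model output over the empirical distribution of the remaining features. A minimal sketch, again under our assumed generative process and with \(\alpha=1\), recovers a clearly non-zero slope for the suppressor \(x_{2}\).

```python
import numpy as np

rng = np.random.default_rng(4)
n, c, s1, s2 = 100_000, 0.8, np.sqrt(0.8), np.sqrt(0.5)
y = rng.choice([-1.0, 1.0], size=n)
cov = np.array([[s1**2, c * s1 * s2], [c * s1 * s2, s2**2]])
X = np.column_stack([y, np.zeros(n)]) + rng.multivariate_normal(np.zeros(2), cov, size=n)
w = np.array([1.0, -c * s1 / s2])
f = lambda Z: Z @ w

def partial_dependence(j, grid):
    """Eq. (12): clamp feature j to each grid value and average f over the other feature."""
    values = []
    for v in grid:
        Z = X.copy()
        Z[:, j] = v
        values.append(f(Z).mean())
    return np.array(values)

grid = np.linspace(-2.0, 2.0, 5)
print("PD for x1:", partial_dependence(0, grid))   # slope ~ alpha (= 1 here)
print("PD for x2:", partial_dependence(1, grid))   # slope ~ -alpha*c*s1/s2, non-zero for c != 0
```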
**Marginal Plots.** For exploratory analyses of tabular datasets, it is common to start by visually assessing simple scatter plots of the target variable as a function of individual features. As such, it is common to fit curves to pairs of in- and outputs \((x_{1},y)\) and \((x_{2},y)\). This can be done by estimating the conditional expectations \(\mathbb{E}\big{(}Y|X_{1}=x_{1}\big{)}\) or \(\mathbb{E}\big{(}Y|X_{2}=x_{2}\big{)}\). A variation of this is to replace the outputs by their model predictions, leading to conditional expectations \(e_{S}(\mathbf{x})\coloneqq\mathbb{E}\big{(}f(x_{S},x_{C})|X_{S}=x_{S}\big{)}\), which were coined M-plots by Apley & Zhu (2020). Their calculation requires the conditional expectations \(\mathbb{E}\big{(}X_{2}|X_{1}=x_{1}\big{)}=\frac{cs_{2}}{s_{1}}h(x_{1})\) and \(\mathbb{E}\big{(}X_{1}|X_{2}=x_{2}\big{)}=\frac{cs_{1}}{s_{2}}x_{2}\), where
\[\begin{split} h(x_{1})\coloneqq(x_{1}-1)\vartheta(\nicefrac{{2x_{1 }}}{{s_{1}^{2}}})+(x_{1}+1)(1-\vartheta(\nicefrac{{2x_{1}}}{{s_{1}^{2}}})) \end{split} \tag{14}\]
Figure 4: The Partial Dependency Plots (black solid line) and M-plots (red dashed line) for different correlations (columns) and for the features \(x_{1}\) (upper row) and \(x_{2}\) (bottom row), corresponding to Figure 1. The background shows a scatter plot of the corresponding predictions \(f(\mathbf{x})\) vs. the feature of interest \(x_{S}\). The Partial Dependency Plots and M-plots both ‘follow’ the ‘trend’ of the samples, showing an apparent dependency on the feature \(x_{1}\) (upper row). For feature \(x_{2}\), the scatter plots show no structural direction, so we would expect no ‘directional response’ from an explanation method; the M-plots behave accordingly, whereas the PD plots show a dependency on \(x_{2}\). The figures depict cropped versions; the scatter plots and explanation functions extend beyond the axes’ limits for some plots.
and with \(\vartheta(x)\coloneqq(1+\exp(-x))^{-1}\) as the sigmoid function. For the generative model (2) and corresponding Bayes optimal classifier with weights (5), the conditional expectations for the model given \(x_{1}\) or \(x_{2}\), respectively, amount to
\[e_{\{1\}}(\mathbf{x})=\alpha x_{1}-\alpha c^{2}h(x_{1})\quad e_{\{2\}}( \mathbf{x})=0. \tag{15}\]
This is shown in Appendix E. Thus, the M-plot assigns a vanishing conditional expectation value to the suppressor variable \(x_{2}\), which is also confirmed visually in Figure 4 (bottom row). As such, M-plots appear to be suitable tools to identify important features according to definition (1). However, M-plots have been reported to lead to misinterpretations of main effects if \(y\) depends on \(x_{1}\) and \(x_{2}\), especially when there is an interaction between the two features (Gromping, 2020). Studying the case of interacting features, however, goes beyond the scope of this paper.
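In contrast to PD plots, M-plots require the conditional rather than the marginal expectation. A simple binned estimator (our own sketch, with the same illustrative assumptions as before) makes the difference visible: the binned conditional mean of \(f(\mathbf{x})\) given \(x_{2}\) is approximately flat at zero, while the one given \(x_{1}\) varies strongly.

```python
import numpy as np

rng = np.random.default_rng(5)
n, c, s1, s2 = 400_000, 0.8, np.sqrt(0.8), np.sqrt(0.5)
y = rng.choice([-1.0, 1.0], size=n)
cov = np.array([[s1**2, c * s1 * s2], [c * s1 * s2, s2**2]])
X = np.column_stack([y, np.zeros(n)]) + rng.multivariate_normal(np.zeros(2), cov, size=n)
fx = X @ np.array([1.0, -c * s1 / s2])      # model predictions, scale alpha = 1

def m_plot(j, edges):
    """Binned estimate of the conditional expectation E[f(x) | X_j = x_j]."""
    bin_idx = np.digitize(X[:, j], edges)
    return np.array([fx[bin_idx == k].mean() for k in range(1, len(edges))])

edges = np.linspace(-1.5, 1.5, 7)
print("M-plot for x1:", m_plot(0, edges))   # varies clearly with x1
print("M-plot for x2:", m_plot(1, edges))   # approximately zero in every bin
```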
### Shapley Values
Another class of XAI methods leverages game theoretic considerations to assign importance scores to individual features. Originally introduced by Shapley (1953), the concept of distributing gains of a coalition game among players fairly was extended by Lipovetsky & Conklin (2001) and Lundberg & Lee (2017), who propose the use of Shapley values (Shapley, 1953) as a procedure to quantify the contribution of a feature to a decision function by considering all possible combinations of features. One can quantify the contribution of a feature \(x_{j}\) to a coalition of features \(S\) via the Shapley value
\[e_{\{j\}}=\sum_{S\subseteq[d]\setminus\{j\}}\gamma_{d}(S)\left[v(S\cup\{j\}) -v(S)\right]\, \tag{16}\]
with the weighting factor \(\gamma_{d}\) representing the proportion of coalitions \(S\) not including the \(j\)th feature, defined as \(\gamma_{d}(S)=\nicefrac{|S|!\,(d-|S|-1)!}{d!}\). The value function \(v:2^{[d]}\to\mathbb{R}\), with \(v(\emptyset)=0\), is a set function that assigns a quantity of 'worth' to a coalition and can have many forms. For our analysis, we focus on the choices made by Lipovetsky & Conklin (2001); Lundberg & Lee (2017) and Aas et al. (2021). In general, the purpose of the value function \(v(S)\coloneqq g_{S}(\mathbf{x}_{S})\), \(g_{S}:\mathbb{R}^{|S|}\to\mathbb{R}\) is to measure the impact of a reduced subset of feature values \(x_{S}\) on the model output. In the following paragraphs, we analyze three different value functions to assess: (1) their impact on feature attribution within the Shapley value framework, and (2) the consequences for models relying on suppressor variables.
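For small \(d\), Eq. (16) can be evaluated exactly by enumerating all coalitions. The helper below is a generic sketch of this enumeration; the value function passed to it is a placeholder with made-up numbers, purely for illustration, and is not one of the value functions analyzed in this section.

```python
import math
from itertools import combinations

def shapley_values(value_fn, d):
    """Exact Shapley values (Eq. 16) for a value function defined on subsets of {0, ..., d-1}."""
    phi = []
    for j in range(d):
        others = [k for k in range(d) if k != j]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = math.factorial(len(S)) * math.factorial(d - len(S) - 1) / math.factorial(d)
                total += weight * (value_fn(set(S) | {j}) - value_fn(set(S)))
        phi.append(total)
    return phi

# Placeholder value function for d = 2 (the numbers are made up, purely for illustration).
v = {frozenset(): 0.0, frozenset({0}): 1.0, frozenset({1}): 0.2, frozenset({0, 1}): 1.1}
print(shapley_values(lambda S: v[frozenset(S)], d=2))   # [0.95, 0.15], summing to v({0, 1})
```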
**Coefficient of Multiple Determination.** In the Shapley value regression context, Lipovetsky & Conklin (2001) leverage the coefficient of determination (Hoffman, 1960) as a value function, which we decompose as \(R^{2}=\sum_{j=1}^{d}w_{j}r_{j}\). Here, \(w_{j}\) are the learned model weights, and \(r_{j}\coloneqq(X^{\top}y)_{j}\) defines the sample correlation between feature \(x_{j}\) and target \(y\), for standardized features \(x_{j}\). We can directly define \(R^{2}\) for a subset of features as \(g_{S}(\mathbf{x}_{S})\coloneqq R_{S}^{2}=\sum_{j\in S}w_{j}r_{j}\), and utilize it as value function \(v(S)\coloneqq g_{S}(x_{S})\), which can be interpreted as shares of the overall \(R^{2}\). If we recall the data generation process (2) and consider the covariances \(\mathrm{Cov}(Y,X_{1})=1\), and \(\mathrm{Cov}(Y,X_{2})=0\), respectively, we can state the marginal Pearson correlations \(\rho_{Y,X_{1}}=(s_{1}^{2}+1)^{-1/2}\) and \(\rho_{Y,X_{2}}=0\) directly, without relying on the sample correlations \(r_{j}\).
First, we consider the case of calculating the Shapley values \(e_{\{j\}}\) with respect to the \(R_{S}^{2}\) value function, and, as originally intended by Lipovetsky & Conklin (2001), three hypothetically trained models: One bivariate model, here the Bayes rule (5), and two univariate models \(f_{\{1\}}(\mathbf{x})=\hat{w}x_{1}\) and \(f_{\{2\}}(\mathbf{x})=\tilde{w}x_{2}\). We specify \(e_{\{1\}}\), \(e_{\{2\}}\) as
\[e_{\{1\}}(\mathbf{x})=\frac{\alpha+1}{2(s_{1}^{2}+1)^{1/2}}\quad e_{\{2\}}( \mathbf{x})=\frac{\alpha-1}{2(s_{1}^{2}+1)^{1/2}}, \tag{17}\]
where the rules \(f_{\{1\}}(\mathbf{x})=x_{1}\) and \(f_{\{2\}}(\mathbf{x})=x_{2}\), with \(\hat{w}=1\) and \(\tilde{w}=0\) correspond to the optimal decisions for the univariate models. We can observe that the Shapley values are 'governed' by the factor \(\alpha\) of the bivariate model. As long as \(c\neq 0\), it holds that \(\alpha\neq 1\), and this method attributes importance to the suppressor feature \(x_{2}\). Now, we approximate this procedure using only the bivariate model containing all variables - this is the 'common' scenario, as it can be quite computationally expensive to train new models on many feature subsets. Using the Shapley value framework together with the \(R^{2}\) measure, we obtain
\[e_{\{1\}}(\mathbf{x})=\alpha(s_{1}^{2}+1)^{-1/2},\quad e_{\{2\}}(\mathbf{x})=0. \tag{18}\]
Since \(e_{\{2\}}=0\), we can conclude that the \(R^{2}\) measure in combination with Shapley values is an appropriate value function for assessing feature importance for our linear data generation process (2). This, and the work of the following section, are expanded upon in Appendix F.
**SHAP.** Lundberg & Lee (2017) propose the conditional expectation as a suitable approximation of \(f\), but for computational reasons the authors decided to approximate it with the non-conditional expectation, assuming feature independence. This is called the SHAP (Shapley additive explanations) approach. Later, Aas et al. (2021) suggested an estimation method for the conditional expectation, extending SHAP by actively incorporating potential dependencies among features. We start by defining the value function via the marginal expectation \(g_{S}(\mathbf{x}_{S})\coloneqq\mathbb{E}_{x_{C}}\big{(}f(x_{S},x_{C})\big{)}\), and with the results of Section 4.5, we obtain the Shapley values
\[e_{\{1\}}(\mathbf{x})=\alpha x_{1},\quad e_{\{2\}}(\mathbf{x})=-\alpha cs_{1}s_{ 2}^{-1}x_{2}. \tag{19}\]
This, in essence, resembles the partial dependency functions (13). In a similar way, we calculate the Shapley values for the set function defined via the conditional expectation \(g_{S}(\mathbf{x}_{S})\coloneqq\mathbb{E}\big{(}f(x_{S},x_{C})|X_{S}=x_{S}\big{)}\) as
\[\begin{split}& e_{\{1\}}(\mathbf{x})=\alpha x_{1}-\frac{\alpha c^{2}}{ 2}h(x_{1})-\frac{\alpha cs_{1}}{2s_{2}}x_{2}\\ & e_{\{2\}}(\mathbf{x})=\frac{\alpha c^{2}}{2}h(x_{1})-\frac{ \alpha cs_{1}}{2s_{2}}x_{2}\,,\end{split} \tag{20}\]
where \(h\) is defined in (14). Thus, the Shapley value \(e_{\{2\}}\) does not just reflect an attribution of importance to the suppressor variable \(x_{2}\) but is also affected by feature \(x_{1}\) if \(c\neq 0\).
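For \(d=2\), the marginal-expectation value function can be estimated by Monte Carlo over a background sample, which makes the corresponding Shapley values easy to check numerically. The sketch below uses the same illustrative assumptions as the earlier snippets (\(\alpha=1\), our reading of the generative process) and reproduces the structure of Eq. (19): the attribution for \(x_{2}\) scales with \(-cs_{1}s_{2}^{-1}x_{2}\) and is therefore non-zero for \(c\neq 0\).

```python
import math
from itertools import combinations
import numpy as np

rng = np.random.default_rng(6)
n, c, s1, s2 = 100_000, 0.8, np.sqrt(0.8), np.sqrt(0.5)
y = rng.choice([-1.0, 1.0], size=n)
cov = np.array([[s1**2, c * s1 * s2], [c * s1 * s2, s2**2]])
X = np.column_stack([y, np.zeros(n)]) + rng.multivariate_normal(np.zeros(2), cov, size=n)
w = np.array([1.0, -c * s1 / s2])
f = lambda Z: Z @ w

def v_marginal(S, x):
    """Value function g_S(x_S) = E_{x_C}[f(x_S, x_C)], with x_C drawn from the background data."""
    Z = X.copy()
    for j in S:
        Z[:, j] = x[j]
    return f(Z).mean()

def shapley(x, d=2):
    phi = []
    for j in range(d):
        others = [k for k in range(d) if k != j]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = math.factorial(len(S)) * math.factorial(d - len(S) - 1) / math.factorial(d)
                total += weight * (v_marginal(set(S) | {j}, x) - v_marginal(set(S), x))
        phi.append(total)
    return phi

x = np.array([0.5, -1.0])
print(shapley(x))   # approximately [x1, -c*s1/s2 * x2], i.e. Eq. (19) with alpha = 1
```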
### Counterfactual Explanations
Wachter et al. (2017) propose an explanation framework based on counterfactual explanations, which we can think of as statements depicting an "alternative world". Formally, we have a given instance \(\xi\in\mathbb{R}^{d}\) and the desired outcome \(y^{*}\), and try to find a minimizer
\[\mathbf{x}^{*}=\arg\min_{\mathbf{x}}\,\max_{\lambda}\,\lambda(f(\mathbf{x})- y^{*})^{2}+\delta(\mathbf{x},\xi)\, \tag{21}\]
for \(\lambda\in\mathbb{R}\) and a suitable distance function \(\delta\)(Wachter et al., 2017). To find a counterfactual sample according to (21) for our linear model \(f(\mathbf{x})=w^{\top}\mathbf{x}\), it is sufficient to consider points that are located on the linear decision boundary \(f(\mathbf{x}^{*})=0\) of the Bayes optimal classifier (5), since the decision can be flipped in any epsilon-neighborhood around any such point. The closest such counterfactual \(\mathbf{x}^{*}\) for a given instance \(\xi\) is the point that has minimal distance to \(\xi\) in the Euclidean sense. We can also think of that point as the orthogonal projection of \(\xi\) onto the decision hyperplane via its orthogonal subspace
\[\langle\xi-au,u\rangle=0\quad\text{with}\quad\mathbf{x}^{*}\coloneqq\xi-au\, \tag{22}\]
where \(u\) is an element of the orthogonal complement of \(w\), and \(a\in\mathbb{R}\). Then, with \(u=(cs_{1}/s_{2},1)^{\top}\) and \(a=\langle\xi,u\rangle/\|u\|_{2}^{2}\), the counterfactual explanation \(\mathbf{x}^{*}\) results in
\[\begin{split}& x_{1}^{*}=\beta(\xi_{1}-\xi_{2}cs_{1}s_{2}^{-1})\\ & x_{2}^{*}=\beta cs_{1}s_{2}^{-1}(\xi_{2}cs_{1}s_{2}^{-1}+\xi_{ 1})\,\end{split} \tag{23}\]
with \(\beta\coloneqq((cs_{1}s_{2}^{-1})^{2}+1)^{-1}\). Thus, to change the decision of the Bayes optimal classifier with minimal interventions, a shift from \(\xi\) to \(\mathbf{x}^{*}\) would be required, and this shift would not only involve a change in the informative feature \(x_{1}\) but also in the suppressor feature \(x_{2}\) (see also Figure 5 for a graphical depiction). Based on this counterfactual explanation, it may erroneously be concluded that feature \(x_{2}\) is correlated with, or even causally influences, the classifier decision.
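For a homogeneous linear decision function, the minimal-change counterfactual can be computed as the orthogonal projection of \(\xi\) onto the hyperplane \(w^{\top}\mathbf{x}=0\). The following sketch uses the standard projection formula (our own illustration; the scale of \(w\) and the chosen instance are arbitrary): for \(c\neq 0\) both coordinates change, consistent with the discussion above.

```python
import numpy as np

c, s1, s2 = 0.8, np.sqrt(0.8), np.sqrt(0.5)
w = np.array([1.0, -c * s1 / s2])          # Bayes-optimal direction; the scale is irrelevant here

def closest_counterfactual(xi, w):
    """Orthogonal projection of xi onto the decision hyperplane {x : w^T x = 0}."""
    return xi - (w @ xi) / (w @ w) * w

xi = np.array([1.2, 0.3])
x_star = closest_counterfactual(xi, w)
print("counterfactual:", x_star, " f(x*) =", w @ x_star)
# f(x*) is ~ 0 and, for c != 0, both coordinates move, including the suppressor x2.
```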
### Firm
Another post-hoc method to assess the importance of features of an arbitrary function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is the feature importance ranking measure (FIRM) proposed by Zien et al. (2009). Inspired by the feature sensitivity measure of Friedman (2001), the authors utilize the conditional expectation \(\mathbb{E}\big{(}f(\mathbf{x})|X_{S}=x_{S}\big{)}\) and define the importance ranking measure as
\[e_{S}(\mathbf{x})\coloneqq\mathrm{Var}(\mathbb{E}\big{(}f(\mathbf{x})|X_{S} \big{)})^{\frac{1}{2}}. \tag{24}\]
Computing this expression, in general, is infeasible since we need access to the data distribution. For the generative model (2) it is possible to prove that
\[\begin{split} e_{\{1\}}(\mathbf{x})&=\alpha \mathrm{Var}(X_{1}-c^{2}h(X_{1}))^{\frac{1}{2}}\\ &\geq\frac{\alpha}{2}\left(2\vartheta(2/s_{1}^{2})-1\right)\\ e_{\{2\}}(\mathbf{x})&=0\.\end{split} \tag{25}\]
A derivation of the lower bound is provided in Appendix G. As also noted in Haufe et al. (2014), the variability of \(e_{\{2\}}\) is zero, indicating that FIRM does not assign importance to suppressor features.
### Integrated Gradients
Integrated gradients (Sundararajan et al., 2017) belongs to the family of path methods (Friedman, 2004), which aggregate a model's gradients along a predefined path or curve \(\gamma:[0,1]\to\mathbb{R}^{d}\) with \(\gamma(0)=\mathbf{x}^{\prime}\) and \(\gamma(1)=\mathbf{x}\). If we think of images, then \(\mathbf{x}\in\mathbb{R}^{d}\) can be an image we seek an explanation for, and \(\mathbf{x}^{\prime}\) represents a corresponding baseline image, where a black image \(\mathbf{x}^{\prime}\equiv 0\) is a common choice. For the curve \(\gamma:t\mapsto\mathbf{x}^{\prime}+t(\mathbf{x}-\mathbf{x}^{\prime})\), a general baseline \(\mathbf{x}^{\prime}\), and a model \(f\), the integrated gradient importance map is given by (Sundararajan et al., 2017)
\[e_{\{j\}}(\mathbf{x})\coloneqq(x_{j}-x_{j}^{\prime})\int_{[0,1]}\frac{ \partial f(\mathbf{x}^{\prime}+t(\mathbf{x}-\mathbf{x}^{\prime}))}{\partial x_ {j}}\,\mathrm{d}t. \tag{26}\]
Figure 5: Counterfactual \(\mathbf{x}^{*}\) for a given instance of interest \(\xi\) in the generative setting \(c=0.8\), \(s_{1}^{2}=0.8\), and \(s_{2}^{2}=0.5\). As can be seen, for \(|c|>0\), reaching a counterfactual decision always involves a manipulation of the suppressor feature \(x_{2}\).
For the Bayes optimal linear classifier (5), the importance scores for features \(x_{1}\) and \(x_{2}\) are given by
\[\begin{split} e_{\{1\}}(\mathbf{x})&=\frac{\alpha}{2}(x_{1}^{2}-(x_{1}^{\prime})^{2}),\\ e_{\{2\}}(\mathbf{x})&=-\frac{\alpha cs_{1}}{2s_{2}}(x_{2}^{2}-(x_{2}^{\prime})^{2})\,\end{split} \tag{27}\]
respectively. Thus, independent of the baseline \(\mathbf{x}^{\prime}\) (provided that \(\mathbf{x}^{\prime}\neq\mathbf{x}\)), the integrated gradients for the suppressor feature \(x_{2}\) are non-zero for \(|c|>0\).
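The path integral in Eq. (26) is easy to approximate numerically with a Riemann sum over finite-difference gradients. The sketch below is generic and model-agnostic; the linear stand-in model, the instance and the zero baseline are our own illustrative assumptions. For a model with constant gradient, the sum simply recovers the gradient times \((\mathbf{x}-\mathbf{x}^{\prime})\).

```python
import numpy as np

c, s1, s2 = 0.8, np.sqrt(0.8), np.sqrt(0.5)
w = np.array([1.0, -c * s1 / s2])
f = lambda x: w @ x                         # stand-in model to be attributed

def integrated_gradients(x, baseline, steps=200, eps=1e-5):
    """Eq. (26): average finite-difference gradients along the straight path, times (x - x')."""
    d = len(x)
    avg_grad = np.zeros(d)
    for t in np.linspace(0.0, 1.0, steps):
        point = baseline + t * (x - baseline)
        grad = np.array([(f(point + eps * np.eye(d)[j]) - f(point - eps * np.eye(d)[j])) / (2 * eps)
                         for j in range(d)])
        avg_grad += grad / steps
    return (x - baseline) * avg_grad

x, baseline = np.array([0.5, -1.0]), np.zeros(2)
print(integrated_gradients(x, baseline))
# For a model with constant gradient, the Riemann sum returns (x - baseline) * gradient,
# so the suppressor x2 receives a non-zero attribution whenever its weight is non-zero.
```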
### Lime
The idea of LIME (Ribeiro et al., 2016) is to 'explain' a model's decision for a given instance \(\mathbf{x}\) by sampling data points in the vicinity of \(\mathbf{x}\) and using these samples to build a 'glass-box' model, which is assumed to be more easily interpretable. Typically, a linear model is chosen as a surrogate model. In the scenario studied here, the Bayes optimal model (5) is already linear with non-zero weight \(w_{2}\). Thus, we would expect that a local linear approximation would show the same behavior. Indeed, Garreau & von Luxburg (2020) show that for a 'linear black-box' model and a Gaussian _i.i.d._ sampling procedure from \(N(\mu,\sigma^{2}\mathbf{I}_{d})\), the local weights \(\hat{w}_{j}\) estimated by LIME are approximately proportional to the partial derivatives of \(f\). Since these derivatives reduce to the weights (5) of the Bayes optimal linear classifier in the studied setting, we have \(w_{j}\propto\hat{w}_{j}\). Therefore, LIME resembles the global model and attributes non-zero importance to the suppressor variable \(x_{2}\).
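The argument above can be checked with a bare-bones LIME-style surrogate: sample Gaussian perturbations around the instance, evaluate the black-box model, and fit a linear model by least squares. The sketch below is our own simplification, without proximity weighting or the full sampling scheme of Ribeiro et al. (2016), and the sampling width and instance are arbitrary choices; it recovers weights proportional to the global ones, including a non-zero weight for \(x_{2}\).

```python
import numpy as np

rng = np.random.default_rng(7)
c, s1, s2 = 0.8, np.sqrt(0.8), np.sqrt(0.5)
w = np.array([1.0, -c * s1 / s2])
f = lambda Z: Z @ w                          # the 'black box' to be explained locally

def lime_like(x, n_samples=5000, sigma=0.5):
    """Fit an unweighted local linear surrogate on Gaussian perturbations around x."""
    Z = x + sigma * rng.standard_normal((n_samples, 2))
    A = np.column_stack([np.ones(n_samples), Z])          # intercept + features
    coef, *_ = np.linalg.lstsq(A, f(Z), rcond=None)
    return coef[1:]                                       # local feature weights

x = np.array([0.5, -1.0])
print("local surrogate weights:", lime_like(x), " global weights:", w)
# The surrogate recovers the global weights, so the suppressor x2 again receives non-zero weight.
```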
### Saliency Maps, LRP and DTD
Saliency map explanations estimate how a prediction \(f(\mathbf{x})\) is influenced when moving along a specific direction in the input space. If the direction is along the model's gradient, this is known as sensitivity analysis (Baehrens et al., 2010; Simonyan et al., 2014). Several explanation techniques for neural networks are based on this approach (e.g. DeConvNet and Guided BackProp), primarily distinguishing themselves by their treatment of rectifiers (Kindermans et al., 2018; Zeiler and Fergus, 2014; Springenberg et al., 2015). For single-layer neural networks without rectifiers, that is, linear models, the saliency maps of these explanation methods reduce to the gradient itself (cf. Section 4.1). Layerwise relevance propagation (LRP, Bach et al., 2015) and its generalization Deep Taylor Decomposition (DTD, Montavon et al., 2017) are methods that propagate a quantity termed'relevance' from output to input neurons backwards through a neural network, following a set of rules. The DTD approach develops, for each layer \(l\) of a neural network, a first-order Taylor expansion around a root point \(\mathbf{x}_{0}\), which gives rise to a relevance score for each neuron \(j\) with the propagation rule \(e_{\{j\}}(\mathbf{x})\coloneqq R_{j}^{l-1}=w\odot(\mathbf{x}-\mathbf{x}_{0})(w ^{\top}\mathbf{x})^{-1}R_{j}^{l}\), where \(\odot\) is the Hadamard product. Choosing an appropriate root point is essential in the DTD framework, and Kindermans et al. (2018) notice that by estimating the distractor \(\eta\) and understanding it as root point \(x_{0}=\eta\), DTD recovers the pattern estimator for linear models proposed by Haufe et al. (2014). Kindermans et al. (2018) derive the signal estimator \(S_{\mathbf{a}}=\operatorname{Cov}(\mathbf{x},y)w^{\top}\mathbf{x}\), yielding the DTD propagation rule
\[e_{\{j\}}(\mathbf{x})=(w\odot\mathbf{a})_{j} \tag{28}\]
for \(j=1,2\). Since the corresponding entry of \(\mathbf{a}\) vanishes for the suppressor feature (both its covariance with the target and with the model output are zero, cf. Section 4.2), this DTD propagation rule assigns zero relevance to \(x_{2}\).
## 5 Discussion

In this work, we have adopted a data-driven definition of feature importance (1), according to which a feature is considered important only if it carries a univariate statistical relation to the
target. We use this definition to construct a standard binary classification problem with Gaussian class-conditional distributions. By introducing noise correlations within this model, we create a suppressor variable, which has no statistical relation to the target but whose inclusion in any model will lead to better predictions (Haufe et al., 2014).
We view this simple, yet very insightful, classification problem primarily as a minimal counterexample, where the existence of suppressor variables challenges the assumptions of many XAI methods as well as the assumptions underlying metrics such as faithfulness, which are often considered a gold-standard for quantitative evaluation and an appropriate surrogate for 'correctness'. Indeed, previous work has shown empirically that XAI methods can lead to suboptimal 'explanation performance' even when applied to linear data with suppressor variables (Wilming et al., 2022). Here, we complement the study of Wilming et al. (2022) by deriving analytical expressions for the outputs of popular XAI methods, employing a two-dimensional linear binary classification problem that has the same problem structure as the 64-dimensional problem presented by Wilming et al. (2022). These analytical expressions allow us to study the factors that lead to non-zero importance attribution, and to expose the mathematical mechanism by which different properties of the data distribution influence XAI methods. Our results demonstrate that outputs of explanation methods must be interpreted in combination with knowledge about the underlying data distribution. Conversely, it may be possible that XAI methods with improved behavior could be designed by reverse-engineering the analytical importance functions \(e_{S}\).
We found that several XAI methods are incapable of nullifying the suppressor feature, i.e., they assign non-zero importance to it when correlations between features are present. This is the case for the naive pixel flipping and the PFI methods, which represent operationalizations of faithfulness, but also for actively researched methods like SHAP, LIME, and counterfactuals, as well as partial dependency plots. Note that these methods can typically also not be 'fixed' by just ranking features according to their importance scores and considering only the top features 'important'. In fact, we can devise scenarios where the weight \(w_{2}\) corresponding to the suppressor variable \(x_{2}\) is more than twice as high as the weight \(w_{1}\) (see Appendix B and Haufe et al. (2014)), which may lead to the misconception that the feature \(x_{2}\) is 'twice' as important as feature \(x_{1}\). XAI methods based on the Shapley value framework yield particularly diverging results, as the strong influence of the value function is reflected in the diversity of analytical solutions. SHAP-like approaches, based on the conditional or marginal expectations (Section 4.6), show how heavily dependent such methods are on the correlation structure of the dataset. In contrast, the M-Plot approach, FIRM, PATTERN, and the Shapley value approach using the \(R^{2}\) value function deliver promising results by assigning exactly zero importance to the suppressor variable. This positive result can be attributed to the fact that all of these methods make explicit use of the statistics of the training data, including the correlation structure of the data. This stands in contrast to methods using only the model itself to assign importance to a test sample.
### Limitations
Here we studied a linear generative model and used a univariate data-driven definition of feature importance to design our ground truth data. In real-world scenarios, we do not expect that suppressor variables are always perfectly uncorrelated with the target. In Appendix A we provide deliberations for the case where the suppressor variable \(x_{2}=\varepsilon z+\eta_{2}\) consists of a small portion \(\varepsilon\in\mathbb{R}\) of the signal \(z\) as well. However, in this case, it is not exactly clear what numerical value for the importance we can assume as ground-truth, other than zero. Furthermore, modern machine learning model architectures excel in dealing with highly complex non-linear data involving, among other characteristics, feature interactions. Most XAI methods have been designed to 'explain' the predictions of such complex models. To better understand the behavior of both machine learning models and XAI methods in such complex settings, future work needs to focus on non-linear cases, and develop clear definitions of feature importance in complex settings.
## 6 Conclusion
We study a two-dimensional linear binary classification problem, where only one feature carries class-specific information. The other feature is a suppressor variable carrying no such information yet improving the performance of the Bayes optimal classifier. Analytically, we derive closed-form solutions for the outputs of popular XAI methods, demonstrating that a considerable number of these methods attribute non-zero importance to the suppressor feature, even though it is independent of the class label. We also find that a number of methods do assign zero importance to that feature by accounting for correlations between the two features. This signifies that even the simplest multivariate models cannot be understood without knowing essential properties of the distribution of the data they were trained on.
## Acknowledgements
This result is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 758985), the German Federal Ministry for Economy and Climate Action (BMWK) in the frame of the QI-Digital Initiative, and the Heidenhain Foundation. We thank Jakob Runge for a fruitful discussion. |
2307.16231 | Strategies for targeting chondrosarcomas in vivo and molecular
dissection of oncogenic events in chondrosarcomas: is epigenetics the
culprit? | It is obvious that both epigenetic and non-epigenetic actors contribute to
tumorigenesis in chondrosarcomas and more generally in other cancers. Thus, the
main altered pathways in chondrosarcomas are now well established and include
both epigenetic and non-epigenetic pathways such as the PI3K-AKT signaling,
EGFR overexpression, SPARC overexpression, c-myc overexpression, IHH/GLI1 axis,
loss of Rb function, HIF1-alpha stabilization, IDH1 mutations, hypermethylation
and SIRT1. This review aims to provide a detailed analysis of these pathways
and highlights recurrent interactions between non-epigenetic and epigenetic
actors in chondrosarcomas, raising the intriguing possibility of developing
therapeutics targeting both epigenetic and non-epigenetic actors and supporting
data from previous studies. Finally, we propose some strategies for targeting
chondrosarcomas in vivo based on properties of this tumor. | Rédoane Daoudi | 2023-07-30T13:43:33Z | http://arxiv.org/abs/2307.16231v1 | **Strategies for targeting chondrosarcomas _in vivo_ and molecular dissection of oncogenic events in chondrosarcomas: is epigenetics the culprit?**
## Abstract
It is obvious that both epigenetic and non-epigenetic actors contribute to tumorigenesis in chondrosarcomas and more generally in other cancers. Thus, the main altered pathways in chondrosarcomas are now well established and include both epigenetic and non-epigenetic pathways such as the PI3K-AKT signaling, EGFR overexpression, SPARC overexpression, c-myc overexpression, IHH/GLI1 axis, loss of Rb function, HIF1-alpha stabilization, IDH1 mutations, hypermethylation and SIRT1. This review aims to provide a detailed analysis of these pathways and highlights recurrent interactions between non-epigenetic and epigenetic actors in chondrosarcomas, raising the intriguing possibility of developing therapeutics targeting both epigenetic and non-epigenetic actors and supporting data from previous studies. Finally, we propose some strategies for targeting chondrosarcomas in vivo based on properties of this tumor.
Keywords: chondrosarcomas; chondrosarcoma; epigenetics; epigenetic; non-epigenetic; AKT; methylation; acetylation; IDH1; pathways
## Background
Chondrosarcomas are malignant cartilaginous sarcomas that are chemoresistant and radioresistant. Although chondrosarcomas are rare, they are potentially lethal tumors because they can spread to other parts of the body such as the lungs. Epigenetic and non-epigenetic pathways are involved in the tumorigenesis of chondrosarcomas. Oncogenic events in chondrosarcomas mainly include cell proliferation, cell cycle progression, cell migration, cell survival, chemoresistance, radioresistance, angiogenesis and epithelial to mesenchymal transition. The latter appears paradoxical because sarcomas are, by definition, mesenchymal _ab initio_. However, it is thought that sarcomas can undergo EMT-related processes and display a biphenotypic morphology with properties of both mesenchymal (vimentin) and epithelial tumors (E-cadherin expression)[1]. Hence, sarcomas can become either more mesenchymal (epithelial to mesenchymal transition) or epithelial (mesenchymal to epithelial transition). Here we describe the main epigenetic and non-epigenetic pathways involved in these oncogenic events in chondrosarcomas with regard to their interactions. As expected, we show that both epigenetic and non-epigenetic actors could constitute an interesting therapeutic target in
chondrosarcomas and we summarize the involved pathways in this tumor. In the second part of this review, we describe some strategies for targeting chondrosarcomas _in vivo_ based on properties of this tumor.
## 2 Oncogenic pathways in chondrosarcoma: the role of epigenetics
### AKT-related pathways and oncogenic events in chondrosarcoma
The class I phosphoinositide-3-kinase (PI3K) produces PtdIns (3,4,5)P3 (PIP3) from PtdIns (4,5)P2 (PIP2) by phosphorylating PIP2. PIP3 is a phospholipid that resides on the plasma membrane of cells and it activates downstream targets like the kinase Akt for example. PIP3 can be dephosphorylated by the tumor suppressor PTEN, frequently deregulated in human cancers. The pleckstrin homology domain (PH) is a protein domain of approximately 120 amino acids found in several proteins such as Akt or PDK1. This domain is able to bind phosphatidylinositol lipids within membranes like PIP3 or PIP2. When PIP3 is produced by the class I PI3K, PH domains of Akt and PDK1 bind to PIP3. PDK1 can activate Akt by phosphorylating it on threonine 308[2]. Another kinase called PDK2 can activate Akt by phosphorylating it on serine 473[2]. Then Akt (Akt1) regulates several downstream targets in cells and the expression of these targets depends on both cell types and cell context. By regulating its downstream targets Akt has been shown to be involved in many different cellular processes, such as regulation of glucose metabolism in insulin-response tissues, cell proliferation, cell survival, protein synthesis (via mTOR1), angiogenesis and tumorigenesis. In chondrosarcoma AKT appears to be a key master regulator of a wide range of oncogenic events including EMT, migration, proliferation, survival and angiogenesis. In this part, we explore the involvement of the AKT pathway in tumorigenesis in chondrosarcoma by describing the main AKT-related pathways implicated in chondrosarcoma. Moreover, we assess whether epigenetics plays a significant role in tumorigenesis depending on these different pathways in chondrosarcoma.
### The double face of EGFR in chondrosarcoma: epigenetic and genetic alterations
EGFR is highly expressed in a wide range of solid tumors, including chondrosarcoma[3, 4, 5], thereby contributing to tumor aggressiveness. However, there are multiple molecular mechanisms underlying this overexpression in chondrosarcoma. EGFR acts as an upstream activator of the AKT pathway by activating the Ras pathway. In turn, Ras is able to activate PI3K[6]. PI3K can also be activated through interaction with phosphorylated tyrosine residues on EGFR. Because AKT seems to be a major regulator of oncogenic events in chondrosarcoma and because several EGFR inhibitors are already in clinical trials, EGFR could be a relevant therapeutic target in chondrosarcoma. Here, we explore the main pathways that may lead to EGFR overexpression in chondrosarcoma since it is important to elucidate these pathways in order to propose adequate treatments.
Firstly, EGFR amplification occurs in a subset of chondrosarcomas[5], leading to EGFR overexpression and activation of AKT-related pathways. Secondly, DZNep, an S-adenosyl-L-homocysteine hydrolase (SAHH) inhibitor leading to global methyltransferase inhibition, decreases EGFR expression in various cancer cell types[7, 8]. Moreover, the activating histone modifications H3K27Ac and H3K4me3 are associated with EGFR expression in gliomas[9]. This result suggests that DZNep could decrease EGFR expression by preventing H3K4 trimethylation. In chondrosarcoma, our preliminary results show that DZNep reduces EGFR expression. It may reduce EGFR expression either indirectly or directly. DZNep acts directly if it prevents H3K4 trimethylation of the EGFR promoter. DZNep may act indirectly if it decreases the
expression of an activator of the EGFR promoter and/or enhances the expression of an inhibitor of the EGFR promoter. Additionally, a study shows that HDAC inhibitors like vorinostat decrease EGFR expression in colorectal cancer cells [10]. Surprisingly, even in the presence of HDAC inhibitors, the histones on the EGFR promoter are hypoacetylated. This result can be explained by the fact that HDAC inhibitors prevent the association of CBP, a histone acetyltransferase, with the EGFR promoter. EGFR expression is reduced because SP1, a transcription factor that enhances EGFR expression, is acetylated in the presence of HDAC inhibitors and the acetylation reduces the activity of SP1. These results indicate that HDAC1/3 (HDAC I), which are the histone deacetylases interacting with SP1, deacetylate SP1 to enhance its transcriptional activity and subsequent EGFR expression in colorectal cancer cells. Moreover, HDAC I possess not only an epigenetic activity through histone deacetylation but also a non-epigenetic activity through interacting with other proteins like SP1 in colorectal cancer cells. This non-epigenetic activity of HDAC I on EGFR expression remains to be explored in chondrosarcoma. Because DZNep decreases EGFR expression in chondrosarcoma, EGFR is at least in part under epigenetic regulation. The role of acetylation in EGFR expression in chondrosarcoma is still unclear. EGFR also seems to depend on genetic factors because EGFR amplification is found in chondrosarcoma [5].
Furthermore, gefitinib, an EGFR tyrosine kinase inhibitor, induces growth arrest and inhibition of metastasis in JJ012, SW1353 and OUMS27 chondrosarcoma cell lines [4]. Gefitinib also decreases the expression of MMP2 and MMP9, which are known to be beta-catenin target genes. This result shows that the AKT pathway could be involved in MMP2 and MMP9 expression in chondrosarcoma. In fact, GSK3b is a direct downstream target of AKT and GSK3b inhibits beta-catenin by phosphorylating it and promoting its degradation via the E3 ubiquitin ligase component beta-TrCP [11]. AKT indirectly activates the beta-catenin pathway by inhibiting GSK3b [11]. As described previously, another piece of evidence that the AKT pathway could be activated in chondrosarcoma is the fact that EGFR, highly expressed in chondrosarcoma, is an upstream activator of the AKT pathway. These data suggest that EGFR may be involved in EMT in chondrosarcoma because beta-catenin has been shown to be involved in both transcriptional repression of E-cadherin and transcriptional induction of EMT-related genes [12]. DZNep induces apoptosis and reduces cell migration in chondrosarcoma [13] but it remains unclear whether the cell death is caused by the decreased EGFR expression. In fact, gefitinib doesn't induce cell death in the previous study but it is possible that the authors didn't assess this phenomenon. The effect of DZNep on EMT remains to be elucidated in chondrosarcoma but we speculate that DZNep may prevent EMT in chondrosarcoma because it decreases EGFR expression and EGFR may be involved in EMT in chondrosarcoma.
These findings reveal that EGFR could be an interesting therapeutic target in chondrosarcoma because its pharmacological inhibition reduces growth, metastasis and MMP expression. Even if an epigenetic modulation (HDAC inhibitors or methyltransferase inhibitors) seems to be sufficient to decrease EGFR expression, genetic factors like EGFR amplification should be taken into consideration when designing therapeutic strategies targeting EGFR in chondrosarcoma. Furthermore, HDAC I may possess a non-epigenetic activity through interacting with other proteins like SP1 to promote EGFR expression in chondrosarcoma. This result suggests that both epigenetic and non-epigenetic inhibitors may represent an interesting approach to treat chondrosarcoma because a close relationship between epigenetic and non-epigenetic actors may exist in chondrosarcoma. For instance, SP1 inhibition may alter SP1 activity and may prevent the interaction between the non-epigenetic actor SP1 and the epigenetic actor HDAC I in chondrosarcoma, inhibiting the non-epigenetic activity of HDAC I. Conversely, HDAC I inhibition may not only alter its epigenetic activity through histone deacetylation but may also inhibit SP1 activity in chondrosarcoma.
### SPARC as an inducer of the AKT signaling in chondrosarcoma?
In addition to EGFR-induced AKT activation, SPARC may be involved in AKT activation in chondrosarcoma. SPARC is overexpressed in chondrosarcoma[5] and it has been shown that SPARC mediates cell survival of gliomas by activating the AKT pathway[14]. Furthermore, SPARC also activates the AKT pathway in melanoma[15]. These data suggest that SPARC overexpression may be involved in AKT activation in chondrosarcoma and that it may work in concert with EGFR overexpression to induce oncogenic events related to AKT activation. At first glance, AKT appears to be a non-epigenetic actor that induces non-epigenetic pathways like cyclin D1 activation, protein synthesis via mTOR1 activation, and c-myc activation. However, emerging evidence indicates that AKT is able to induce epigenetic pathways in chondrosarcoma. For example, the AKT pathway could induce EMT in human sacral chondrosarcoma by recruiting Snail[16]. In turn, Snail interacts with the histone demethylase LSD1 to demethylate H3K4me3 at the E-cadherin gene, resulting in loss of E-cadherin expression and subsequent EMT[17, 18, 19]. In breast cancer cells, AKT induces slug expression via HSF-1[20] but this pathway is not elucidated in chondrosarcoma. These results suggest that SPARC may induce not only non-epigenetic pathways, but also epigenetic pathways in chondrosarcoma.
These data provide a better understanding of the role of SPARC in chondrosarcoma and shows, once again, that epigenetic and non-epigenetic pathways appear to be closely linked, impacting targeted therapies.
### C-myc in chondrosarcoma
AKT may be able to induce proliferation in chondrosarcoma by activating the proto-oncogene c-myc. In fact, AKT phosphorylates and inactivates GSK3b[11] and in turn GSK3b phosphorylates c-myc on threonine 58, resulting in its ubiquitin-mediated proteasomal degradation[21]. Additionally, Ras-activated Erks stabilize c-myc by phosphorylating it on serine 62[22] and the Ras pathway can be induced by EGFR activation. Interestingly, EGFR is highly expressed in chondrosarcoma (see above). Furthermore, it is known that both c-myc amplification and c-myc overexpression occur in chondrosarcoma[23], a phenomenon that likely contributes to cell proliferation in this tumor.
C-myc may also be involved in the Warburg effect in chondrosarcoma. The Warburg effect is the phenomenon in which cancer cells perform lactate fermentation even in the presence of oxygen. Multiple actors are thought to be implicated in this phenomenon, including c-myc, lactate dehydrogenase, HIF1-alpha and PDK1. C-myc induces LDH-A expression[24] and stabilizes HIF1-alpha[25]. In turn, HIF1-alpha is able to activate PDK1 and PDK1 inhibits pyruvate dehydrogenase[26]. Additionally, HIF1-alpha induces LDH-A expression in cancer cells[27]. Snail is a target gene for HIF1-alpha[28], suggesting that c-myc may also induce EMT in chondrosarcoma by activating Snail via HIF1-alpha.
These findings also indicate that c-myc may act via non-epigenetic pathways and epigenetic pathways in chondrosarcoma.
### The hedgehog signaling pathway in chondrosarcoma
The IHH pathway is active in some chondrosarcomas. An inactivating SUFU mutation and a gli1 amplification have been identified in some chondrosarcomas[29, 30], whereas PTCH1 or SMO gene mutations are infrequent in chondrosarcoma[31]. Gli1 inhibition suppresses cell growth and cell cycle progression and induces apoptosis in human chondrosarcoma cells[32]. A cross-talk between the AKT and IHH pathways may exist in chondrosarcoma. Indeed, AKT inhibits PKA and in turn PKA can inactivate Gli1 by phosphorylating it on threonine 374[33]. Because c-myc is a target gene for gli1[34], the activation of the IHH pathway could be associated with an epigenetic regulation depending on snail activation.
Consequently, the IHH pathway may induce EMT and other related events in chondrosarcoma. This hypothesis is consistent with the fact that the HH pathway promotes EMT in lung squamous cell carcinomas[35]. In this study, gli1 is inversely associated with E-cadherin expression, suggesting that gli1 may induce c-myc expression and then c-myc may activate snail via HIF1-alpha. Finally, snail may recruit LSD1 to repress E-cadherin expression. However, as described previously, the IHH pathway is active in only some chondrosarcomas like central chondrosarcomas. In high-grade peripheral chondrosarcoma, IHH signaling is decreased[36, 37].
### The Rb pathway in chondrosarcoma: a role for Rb in suppressing EMT?
Rb alterations are found in chondrosarcoma[38, 39, 29]. Rb loss of heterozygosity (Rb-LOH) occurs in high-grade chondrosarcomas[38]. However, Rb gene mutations are rare in chondrosarcoma[39]. Thus, chondrosarcomas express low levels of Rb. Because Rb binds to the E2F1 transcription factor, involved in the expression of DNA replication-related genes, it is considered a tumor suppressor. The cyclin D1-cdk4 and cyclin D1-cdk6 complexes phosphorylate Rb during G1 phase. Then the cyclin E-cdk2 complex also phosphorylates Rb to allow S-phase cell cycle progression by disrupting the association between Rb and E2F1. Because the AKT pathway is active in chondrosarcoma and AKT induces cyclin D1 activation, it also inhibits Rb and induces S-phase cell cycle progression in chondrosarcoma. Moreover, Rb-LOH is responsible for the cell cycle progression in chondrosarcoma.
Interestingly, another tumor suppressor function of Rb is proposed in a study that shows that Rb inhibits EMT in MCF10A human mammary epithelial cells[40]. In this work, knockdown of Rb decreases E-cadherin expression but significantly induces slug and ZEB-1 expression. Moreover, Rb binds to the E-cadherin promoter. Because Rb prevents Slug and ZEB-1 expression, two transcription factors that play a major role in the induction of EMT by interacting with epigenetic factors such as LSD1, Rb indirectly regulates downstream epigenetic pathways in MCF10A human mammary epithelial cells. A non-epigenetic pathway regulated by Rb includes E2F1 sequestration for example.
Nevertheless, it remains unclear whether Rb inhibits epigenetic pathways and EMT in chondrosarcoma. Reexpression of Rb or cyclin D1 inhibitors could be proposed as a therapeutic strategy in chondrosarcoma.
### Deacetylation pathways in chondrosarcoma: more than just an epigenetic regulation?
SIRT1 (HDAC III) is a NAD-dependent protein deacetylase that plays an important role in the deacetylation of histone and nonhistone proteins. SIRT1 induces EMT in the human chondrosarcoma cell lines SW1353 and HS.819.T[41]. In addition, SIRT1 induces the expression of Twist, a transcription factor involved in EMT. However, the mechanisms linking SIRT1 to Twist expression are not described in this study. Interestingly, another study shows that ET-1 promotes EMT in chondrosarcomas by inhibiting miR-300, a microRNA targeting Twist, via the AMPK pathway[42]. Other studies report that SIRT1 deacetylates LKB1 and induces translocation of LKB1 from the nucleus to the cytoplasm where it activates AMPK[43, 44]. Taken together, these results indicate that SIRT1 deacetylates LKB1 in chondrosarcoma. In turn, LKB1 activates AMPK and AMPK induces Twist expression by repressing miR-300 in chondrosarcoma. Then, Twist induces EMT by recruiting the methyltransferase SET8, which mediates H4K20 monomethylation[12], a histone mark that is associated with repression at E-cadherin promoters. These results don't contradict the tumor suppressor role of AMPK. In fact, although AMPK is often classified as a tumor suppressor since it inhibits mTOR1 and protein synthesis, it can also promote tumor progression through non-epigenetic mechanisms, depending on both cell context and tumor type[45, 46, 47]. For example, another work shows that AMPK is responsible for metastasis in human chondrosarcoma
cells[48]. Finally, a study indicates that SIRT1 is able to induce AKT activation[49] but it is unclear whether this phenomenon occurs in chondrosarcoma.
Therefore, SIRT1 has not only an epigenetic activity by deacetylating histones, but also a non-epigenetic activity by interacting with the LKB1-AMPK axis in chondrosarcoma. Additionally, SIRT1 may have a non-epigenetic activity by activating the AKT pathway in chondrosarcoma but further studies are required to address the question.
## IDH1 mutations in chondrosarcoma: beyond epigenetics?
Isocitrate dehydrogenase (IDH) exists in three isoforms and catalyzes the oxidative decarboxylation of isocitrate, producing alpha-ketoglutarate and carbon dioxide. Since metabolic alterations constitute a hallmark of several cancers, IDH mutations are involved in tumorigenesis. Interestingly, IDH mutations occur in a wide range of malignancies, including chondrosarcoma (IDH1 R132H, IDH1 R132G, IDH1 R132C). The purpose of this part is to describe the role of mutated IDH1 in chondrosarcoma. When IDH is mutated (i.e IDH1 and/or IDH2), it has a gain of function to produce the oncometabolite 2-HG, which is structurally similar to alpha-ketoglutarate and acts as a potent inhibitor of alpha-ketoglutarate-dependent reactions[50], including histone demethylation (JmjC domain-containing histone demethylases), the DNA demethylation process (TET enzymes[51]) and HIF1-alpha degradation. The first remark is that IDH mutations can be responsible for non-epigenetic events in cancer cells through HIF1-alpha stabilization (angiogenesis for example).
In HCT116 and MCF-10A cells, IDH-related mutations are responsible for EMT[52]. Importantly, 2-HG accumulation produces the same effect. In these cells, this result indicates that EMT may be induced in an epigenetic manner through hypermethylation of both DNA and histones. The global hypermethylation may induce the EMT phenotype in these cells by silencing tumor suppressors (H3K9me3, H3K27me3) and/or by activating oncogenes (H3K4me1, H3K4me2, H3K4me3, H3K36me3, H3K79me3)[53, 54]. Moreover, HIF1-alpha could contribute to the EMT phenotype in these cells. In fact, 2-HG accumulation prevents the degradation of HIF1-alpha and snail is a target gene for HIF1-alpha[28]. In turn, the transcription factor snail recruits the histone demethylase LSD1[17], which is not inhibited by 2-HG accumulation, to repress E-cadherin expression, an epigenetic modulation that is believed to contribute largely to the EMT process in cancer cells.
For these reasons, mutated IDH inhibition appears to be a good strategy in order to suppress both EMT and mutated IDH-related oncogenic processes in mutated IDH tumors. AGI-5198, a selective mutant IDH1 inhibitor, causes demethylation of histone H3K9me3 and growth inhibition of IDH1-mutant glioma cells _in vitro_ as well as of IDH1-mutant glioma xenografts in mice[55]. In the same way, in the chondrosarcoma cell lines JJ012 and HT1080, AGI-5198 prevents colony formation (inhibition of cell proliferation) and migration[56]. Furthermore, AGI-5198 induces apoptosis and cell cycle arrest and decreases 2-HG levels in a dose-dependent manner in JJ012 cells. Another study shows that mutated IDH and wild-type IDH primary chondrosarcomas have no difference in the levels of H3K4me3, H3K9me3, H3K27me3, 5-hydroxymethylcytosine (5-hmC) or 5-methylcytosine (5-mC)[57]. At first glance, these results seem to contradict the fact that IDH mutants lead to a global hypermethylation in cells. However, they can be explained because chondrosarcomas are known to have a hypoxic microenvironment. Because histone demethylases and TET enzymes need molecular oxygen to perform demethylation, it is not surprising that wild-type IDH primary chondrosarcomas also exhibit hypermethylation of histones and high levels of 5-mC.
In another study, AGI-5198 has no effect on cell viability, colony formation, cell migration and global methylation (DNA and histones) in JJ012 cells [58]. This is in contrast to results of the previous study using the same cell line [56]. Although tumorigenic properties of JJ012 cells are not affected by AGI-5198, it is interesting to note that mutated IDH1 inhibition reduces 2-HG levels in a dose-dependent manner in JJ012 cells, similarly to the previous study. Taken together, the results suggest that the differences observed between the two studies may be due to the use of different techniques (for migration for example) or different experimental conditions. Note that these two papers were almost simultaneously published in two different journals. In this study [58], cell lines with IDH1 mutations have hypermethylation of CpG islands, a phenotype known as the CpG island methylator phenotype (CIMP), and IDH1-mutated enchondromas also exhibit this phenotype [59, 60]. Although some exceptions exist [61], the methylation of DNA represses transcription and could lead to cancer progression in chondrosarcomas with the CIMP. The wild-type cell lines lack this phenotype [58], suggesting that hypoxia doesn't occur in cell lines _in vitro_ in contrast to primary chondrosarcomas. A study shows that T-cell acute lymphoblastic leukemia associated with good prognosis lacks the CIMP [62]. This result indicates that chondrosarcomas without IDH mutations may define a similar subtype of tumors associated with good prognosis. In the previous study, mutated IDH1 inhibition fails to reduce the global methylation in JJ012 cell lines [58]. This result is very interesting and strongly suggests that the global hypermethylation (CIMP and hypermethylation of histones) occurs through mechanisms other than the inhibition of demethylases in JJ012 cells. We postulate that both histone methyltransferases and DNA methyltransferases have a high methylation activity and/or their recruitment is enhanced at several loci in JJ012 cells with IDH1 mutations. Because the hypermethylation is not observed in cells without IDH1 mutations [58], we hypothesize that this high methylation activity and/or enhanced recruitment is selective to chondrosarcoma cell lines with IDH1 mutations. Indeed, in a previous study, mutated IDH1 inhibition in glioma cells is sufficient to reduce the methylation of histones [55], indicating that a high methylation activity doesn't occur in these cells. Taken together, these conclusions suggest that mutated IDH1 acts in concert with other epigenetic mechanisms to establish a global hypermethylation in mutated IDH1 chondrosarcoma cells. Furthermore, these results suggest that mutations in IDH1 are not essential for tumor progression in chondrosarcomas but only in enchondromas, wherein IDH1 mutations seem to be involved in the initiation of enchondromas. This is consistent with the fact that IDH1 mutations are often early events in the development of other cancers [63]. In enchondromas, we propose that the other epigenetic mechanisms (i.e those that don't depend on mutated IDH1, like methyltransferases) responsible for the high levels of methylation in chondrosarcomas would be negligible.
According to the previous study [55], we hypothesize that both mutated IDH1 glioma cells and mutated IDH1 enchondroma exhibit a relatively similar methylation pattern (i.e cells wherein only mutated IDH1 induces hypermethylation without the help of other epigenetic mechanisms like methyltransferases) because mutated IDH1 inhibition in glioma dramatically reduces the hypermethylation. Additionally, we propose that the epigenetic mechanisms leading to the hypermethylation in mutated IDH1 chondrosarcoma cells consist in an increased methylation activity and/or in an increased recruitment of methyltransferases at several loci. This hypothesis is consistent with the fact that chondrosarcoma cell lines express high levels of histone methyltransferases like EZH [13]. However, both wild-type IDH1 and mutated IDH1 chondrosarcoma cell lines express high levels of histone methyltransferases. For this reason, the global hypermethylation in mutated IDH1 chondrosarcoma cell lines is likely due to an increased recruitment of methyltransferases at several loci rather than an increased methylation activity.
In a previous study, mutated IDH1 inhibition reduces colony formation, migration and 2-HG levels whereas it induces apoptosis and cell cycle arrest in JJ012 cells [56]. This is in contrast with the second study in which mutated IDH1 inhibition has no effect on colony formation, migration and global
methylation [58]. According to our previous conclusions and because this is the same cell line in which IDH1 is mutated, we propose that the methyltransferases are responsible for the hypermethylation in JJ012 cells in the two studies. Therefore, we anticipate that mutated IDH1 inhibition with AGI-5198 doesn't affect the hypermethylation in JJ012 cells in the first study[56]. The authors say that this investigation is ongoing in their laboratory. Now the question is "if AGI-5198 doesn't affect the hypermethylation in JJ012 cells in the first study, why does it reduce colony formation and migration, and induce apoptosis and cell cycle arrest?" and the second question is "why does AGI-5198 not affect these oncogenic events in the second study using the same cell line?" A moderate dose of AGI-5198 (i.e 10 \(\upmu\)M) is not sufficient to impair colony formation in both JJ012 and HT1080 chondrosarcoma cell lines while 2-HG levels are drastically decreased[56]. However, a high dose of AGI-5198 (i.e 20 \(\upmu\)M) effectively prevents colony formation in the cells. This result indicates that the effect of AGI-5198 on colony formation doesn't depend on demethylases reactivation in these cells. Rather, it could depend on either a non-epigenetic activity of mutated IDH1 or an epigenetic activity of mutated IDH1 that differs from demethylases inhibition and that would be selectively inhibited by high doses of AGI-5198. A non-epigenetic activity for mutated IDH1 has been proposed in two studies[55, 64] but if it exists, it remains unknown. This non-epigenetic activity differs from HIF1-alpha stabilization induced by mutated IDH1 since 2-HG levels are dramatically decreased by 10 \(\upmu\)M of AGI-5198 and colony formation (proliferation) is unchanged. At first, this result seems to be in contrast with the fact that hypoxia induces proliferation via Erk activation in several cancer cells[65]. Moreover, our preliminary results suggest that hypoxia induces proliferation in chondrosarcoma cell lines by activating the Erk pathway. Note that Erk is also able to activate HIF1-alpha[66] to mediate a positive feedback loop that drives cell proliferation under hypoxic conditions and/or mutated IDH1 context. Nevertheless, the previous result is not in contrast with these studies if we consider that mutated IDH1 inhibition and subsequent HIF1-alpha-induced proliferation suppression is replaced by other proliferation-related pathways in chondrosarcoma cell lines, like the Akt pathway previously described in this review. A second possible explanation is the Warburg effect. For example, HIF1-alpha may be overexpressed in chondrosarcoma cell lines so that HIF1-alpha can't be totally degraded by prolyl-hydroxylases resulting in HIF1-alpha accumulation and lactate fermentation under normoxic condition (Warburg effect). Consequently, even if 2-HG levels are decreased, HIF1-alpha is not totally degraded and the Erk pathway remains activated in chondrosarcoma cell lines. This hypothesis seems consistent with a study showing that HIF1-alpha is overexpressed in chondrosarcoma tissues compared with normal tissues[67]. However, HIF1-alpha overexpression in this study may be due to hypoxia and not to a mutation. In fact, another study reveals the presence of a HIF1 binding site in the HIF1-alpha promoter[68], suggesting a positive autoregulation by HIF1-alpha itself when it is stabilized under hypoxic condition.
For this reason, the mechanism responsible for HIF1-alpha stabilization upon mutated IDH1 inhibition could be different from mutation-induced HIF1-alpha overexpression. We speculate that c-myc may induce Warburg effect by stabilizing HIF1-alpha in chondrosarcoma since a study shows that c-myc is responsible for HIF1-alpha stabilization under normoxic condition in MCF7 and T47D cells[25]. Moreover, as described previously, c-myc amplification is found in chondrosarcoma[23]. Note that a study indicates that SW1353 chondrosarcoma cells don't exhibit a metabolic profile consistent with the Warburg effect but authors didn't work with other chondrosarcoma cell lines[69].
In the first study, AGI-5198 (20 \(\upmu\)M) also induces cell cycle arrest and apoptosis in JJ012 cells but not in HT1080 cells[56]. However, low doses of AGI-5198 have no effect on these events in the two cell lines. This result strongly suggests that a non-epigenetic activity of mutated IDH1 and/or an epigenetic activity not related to demethylases inhibition prevents cell cycle arrest and apoptosis in JJ012 cells but not in HT1080 cells, highlighting the differences between the chondrosarcoma cell lines and the importance of using different cell lines in order to better mimic the tumor context observed _in vivo_. Finally, in the
first study, both low and high doses of AGI-5198 (from 1 uM to 20 uM) reduce cell migration in the two cell lines. This result indicates that decreased 2-HG levels are sufficient to reduce cell migration. Consequently, demethylases reactivation may cause an epigenetic modulation leading to a reduced migration in the two cell lines. Our previous hypothesis stating that methyltransferases occur in mutated IDH1 chondrosarcoma cell lines to establish the hypermethylation doesn't contradict the fact that demethylases reactivation is sufficient to reduce migration in these cells. Indeed, although we observe a global hypermethylation in mutated IDH1 chondrosarcoma cell lines, it is possible that certain loci (migration-related genes) are weakly methylated and that mutated IDH1 inhibition is sufficient to completely demethylate these genes and to inhibit the expression of these genes. This implies that local demethylation exists in chondrosarcoma cell lines and that these migration-related genes are initially activated by histone methylation (H3K4me3, H3K36me3 or H3K79me3 for example). Additionally, AGI-5198 (20 uM) reduces migration more effectively than AGI-5198 (10 uM)[56]. At these two doses, 2-HG levels are drastically decreased and similar. So, the observed difference could be due to a non-epigenetic activity of mutated IDH1 in these cells.
Now we describe the second study in which the previous oncogenic events are unchanged upon mutated IDH1 inhibition and using the same cell line as the first study: JJ012 cells[58]. The colony formation is not altered by the treatment (AGI-5198). However, authors use a maximum concentration of 10 uM AGI-5198. According to the previous study, this concentration appears to be insufficient to prevent colony formation in JJ012 cells. Cell migration is unchanged with AGI-5198 and the concentration should be sufficient to reduce cell migration. We propose that the observed differences in cell migration between the two studies are caused by the different techniques used to assess migration in JJ012 cells.
EMT is not described in these two studies and it is unclear whether mutated IDH1 inhibition is able to prevent EMT in chondrosarcoma cell lines. AGI-5198 could constitute an interesting approach to inhibit EMT in chondrosarcoma cell lines and further studies are needed. Interestingly, mutated IDH1 can be associated with favourable prognosis in glioblastoma[70, 71, 72]. This result shows that mutated IDH1 inhibition is not always a good therapeutic strategy and that the role of mutated IDH1 in cancer cells merits deeper investigation. It may depend on both cell lines and cell context (i.e _in vivo, in vitro or the species_). We propose two hypotheses to explain this favourable prognosis. The first is that methylation may lead to tumor suppressors inhibition (H3K9me3, H3K27me3, DNA methylation) and oncogenes activation (H3K4me3, H3K36me3, H3K79me3) in mutated IDH1 cells of poor prognosis patients whereas it may lead to tumor suppressors activation and oncogenes inhibition in mutated IDH1 cells of favourable prognosis patients. This "epigenetic switch" may depend on both cell lines and cell context. However, this first hypothesis seems inconsistent with the fact that DZNep impairs glioblastoma cancer stem cell self-renewal _in vitro_ and _in vivo[73]_. In fact, this result suggests that methylation initially activates oncogenes in mutated IDH1 glioblastoma cells of favourable prognosis. The second hypothesis is that methylation is unchanged between mutated IDH1 cells of favourable prognosis patients and mutated IDH1 cells of poor prognosis patients (the same group of genes is methylated); it is the recruitment of demethylases that may be different. Demethylases may be recruited to tumor suppressors loci (that are initially inactivated by methylation) and oncogenes loci (that are initially activated by methylation) in mutated IDH1 cells of poor prognosis patients. In mutated IDH1 cells of favourable prognosis patients, demethylases may be recruited to other tumor suppressors loci (that are initially activated by methylation) and to other oncogenes loci (that are initially inactivated by methylation). Although certain tumor suppressors loci are activated by methylation and certain oncogenes loci are inactivated by methylation, this second hypothesis doesn't contradict the previous anti-cancer role of DZNep in glioblastoma because it is possible that the majority of methylation leads to the activation of oncogenes and the inactivation of tumor
suppressors. However, long-term survival in glioblastoma patients is weakly correlated to IDH1 mutation [74]. This can be explained by the fact that, even if mutated IDH1 prevents both tumor suppressors and oncogenes demethylation leading to the favourable prognosis, it also prevents HIF1-alpha degradation resulting in enhanced tumorigenesis. Moreover, a non-epigenetic activity of mutated IDH1 responsible for proliferation, as previously described in this part, may still exist in cells of favourable prognosis patients. Finally, mechanisms other than mutated IDH1 (and previously mentioned) are responsible for tumorigenesis in glioblastoma and more generally in cancer. Therefore, AGI-5198 should be used only in mutated IDH1 cells of poor prognosis patients in order to selectively reactivate demethylases in these cells. These conclusions suggest that further studies are required to determine the role of mutated IDH1 in chondrosarcoma _in vivo_, which could differ from _in vitro_ studies. Nonetheless, mutated IDH1 inhibition could be insufficient in chondrosarcoma _in vivo_ since hypoxia strongly occurs in a wide range of solid tumors, including chondrosarcoma. In fact, even if mutated IDH1 inhibition is thought to reactivate demethylases and to allow degradation of HIF1-alpha _in vitro_, hypoxia inhibits demethylases and allows stabilization of HIF1-alpha. Additionally, high doses of AGI-5198 (20 \(\upmu\)M) are required to suppress the eventual non-epigenetic activity of mutated IDH1 responsible for proliferation in chondrosarcoma _in vitro_ and a high dose could be toxic to patients _in vivo_. Because methyltransferases-induced hypermethylation may occur in mutated IDH1 chondrosarcoma cells, DZNep may be exploited for therapeutic applications for chondrosarcoma. It inhibits methyltransferases and may improve the efficiency of AGI-5198 by preventing methyltransferases from counteracting the demethylation. Thus, even if DZNep alone is capable of triggering apoptosis in chondrosarcoma [13], a co-treatment with DZNep and low doses of AGI-5198 may constitute an alternative treatment for mutated IDH1 chondrosarcoma of poor prognosis patients _in vivo_.
To conclude, mutated IDH1 may act not only via its well-known epigenetic effects in chondrosarcoma, but also by interfering with either non-epigenetic pathways or epigenetic pathways not related to demethylases inhibition. Furthermore, mutated IDH1 inhibition appears to be insufficient to treat chondrosarcoma that harbor IDH1 mutations linked to poor prognosis _in vivo_. Instead, a co-treatment should be used. Unfortunately, both the chemoresistance and hypoxia limit the efficiency of treatments in chondrosarcoma.
## Conclusion
We note that the AKT pathway is intimately linked to tumor progression in chondrosarcoma. AKT is a central oncogenic signaling pathway that orchestrates tumor progression through other pathways such as hedgehog, Rb or c-myc. As expected, non-epigenetic actors can indirectly modulate epigenetic pathways. For example, AKT, a non-epigenetic actor, induces an epigenetic regulation of E cadherin expression by activating snail, a transcription factor that interacts with LSD1 to repress E cadherin expression in chondrosarcoma. Therefore, several epigenetic and non-epigenetic pathways are closely linked in chondrosarcoma. Surprisingly, several epigenetic actors (SIRT1, mutated IDH1) also have a non-epigenetic activity by interacting with non-epigenetic actors (AMPK, HIF1-alpha and other unidentified pathways) in chondrosarcoma, raising the possibility of developing therapeutics targeting these different non-epigenetic actors in order to prevent the non-epigenetic activity of epigenetic actors in chondrosarcoma. For instance, AMPK inhibition may prevent SIRT1-induced AMPK activation, Twist activation and oncogenic pathways related to AMPK activation. Another example is the inhibition of HIF1-alpha that may prevent mutated IDH1-induced HIF1-alpha activation, HIF1-alpha-induced snail activation and oncogenic pathways related to HIF1-alpha like angiogenesis. Conversely, SIRT1 inhibition inhibits its epigenetic activity and SIRT1-induced AMPK activation. Mutated IDH1 inhibition prevents its epigenetic activity (demethylases inhibition), its association with HIF1-alpha and its potential non-epigenetic activities. The interplay between the epigenetic and non-epigenetic activities of epigenetic actors in chondrosarcoma suggests that epigenetics is not the culprit but a culprit, and that non-epigenetic actors may play a more significant role than previously thought and therefore constitute rational therapeutic targets in chondrosarcoma.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Target & Role & Drug & Mechanism \\ \hline \hline \multicolumn{4}{|c|}{**Epigenetic actors**} \\ \hline
**Mutated IDH1** & Demethylases inhibition (except LSD1 family) & AG-120 (Ivosidenib) & IDH1 mutant inhibitor \\ \hline
**SIRT1** & Deacetylation of histones and non-histone proteins & Selisistat & SIRT1 inhibitor \\ \hline
**Methyltransferases** & Hypermethylation & DZNep & Competitive inhibitor of SAH \\ \hline
**EGFR** & Participates in tumorigenesis by being an upstream activator of several pathways such as Ras or PI3K/AKT pathways & Gefitinib & EGFR inhibitor \\ \hline
**PI3K/AKT signaling** & Proliferation, migration, survival, angiogenesis, EMT, activation of several downstream pathways involved in tumorigenesis & Perifosine & AKT inhibitor \\ \hline
**C-myc** & Proliferation, Warburg effect & 10058-F4 & Inhibition of the c-myc/max heterodimerization \\ \hline
**IHH/GLI1 axis** & Proliferation, Warburg effect via c-myc activation & GANT-61 & GLI1 and 2 inhibitor \\ \hline
**Rb-LOH** & Proliferation & Palbociclib & CDK4 and 6 inhibitor \\ \hline \end{tabular}
\end{table}
Table 1: An overview of possible epigenetic and non-epigenetic targets in chondrosarcomas.
## 3 Strategies for targeting chondrosarcoma: _in vivo_ applications
Because chondrosarcomas exhibit specific tumor characteristics, it is possible to imagine drug delivery systems that can selectively target chondrosarcomas _in vivo_. Here we present strategies based on
Figure 1: **The epigenetic and non-epigenetic actors involved in chondrosarcomas are closely linked.** The dashed lines represent pathways that remain to be elucidated in chondrosarcomas.
features of chondrosarcomas: hypoxia, acidosis, EGFR and survivin overexpression. However, note that most cancers share these characteristics with chondrosarcomas and that there are differences among cancer cells within a tumor, called intra-tumor heterogeneity. Consequently, some chondrosarcoma cells produce lactic acid (hypoxia and/or Warburg effect) while others undergo aerobic metabolism through the Krebs cycle (normoxia).
### An orthotopic mouse model for human chondrosarcoma
Firstly, it is important to work with a functional mouse model. A study shows that mice injected with the chondrosarcoma cell line JJ012 develop tumors and lung micro-metastases whereas mice injected with FS090 cells don't develop tumors [75]. This mouse model replicates the site, morphology and many characteristics of human chondrosarcoma and can be used to test different strategies targeting chondrosarcoma _in vivo_.
### Acid sensitive linkers and hypoxia-activated prodrugs
Acidic extracellular pH is a property of tumors. The tumor microenvironment is generally more acidic than the normal tissues because hypoxia is frequently seen in solid tumors, including chondrosarcoma. As described previously, HIF1-alpha is induced and stabilized under hypoxic condition. It can also be stabilized under normoxic condition (Warburg effect). Then, cancer cells produce lactic acid and an acidic microenvironment is formed. Acid sensitive linkers and hypoxia-activated prodrugs are useful to selectively deliver anticancer agents to chondrosarcoma and more generally to the cancer site [76]. The linker is attached to the prodrug and the prodrug is released at the cellular target site. The selection of a suitable linker is important since the linker may sometimes be inadequate: the linker and the prodrug may fail to combine because of the steric hindrance of the prodrug.
### Temperature-sensitive systems
_Background._ Another feature of tumors is that cancer cells have a higher temperature than the surrounding normal cells because their high metabolism generates heat. In fact, heat energy is released from cells during aerobic metabolism. At first, a thermosensitive polymer such as the poly(N-isopropylacrylamide) (poly-(NIPAAm)) appears to be a good strategy to improve drug delivery to chondrosarcoma. These polymers have a lower critical solution temperature (LCST), defined as the temperature at which the light transmission of the polymer solution drops to 90% of the original value. Below the LCST, the polymer is hydrophilic. Above the LCST, the polymer undergoes a phase transition to a hydrophobic state, generating turbidity and releasing the drug. For pure P-NIPAAm, the LCST is about 32\({}^{\circ}\)C but it can be tuned to anywhere in the range of 32\({}^{\circ}\)C to 50\({}^{\circ}\)C by incorporating acrylamide into the polymer chain. Moreover, any drug can be trapped in the polymer in contrast to the previous linker systems. Thus, thermosensitive polymers are interesting and function as on-off switches for drug release _in vivo_. However, other organs like the brain have a high metabolism and a higher temperature than other parts of the body. Additionally, moderate fever (39\({}^{\circ}\)C) is frequently seen in patients with cancer, including chondrosarcoma [77]. For these reasons, the LCST should be tuned to a value greater than 39-40\({}^{\circ}\)C for _in vivo_ applications. Here we present one method for reaching this temperature within tumor: the technique is well documented and uses photothermal therapy.
_Photothermal therapy and chondrosarcoma._ It is possible to use gold nanocages covered with a thermosensitive polymer such as the P-NIPAAm (with a LCST tuned to a temperature greater than 40\({}^{\circ}\)C) [78, 79]. Gold nanocages have a strong absorption in the near-infrared. When gold nanocages are
exposed to near-infrared light, the light is absorbed and converted into heat through the photothermal effect. Gold nanocages containing the anticancer agent and covered with a thermosensitive polymer are injected intravenously. Then, gold nanocages are illuminated, generating local heating to directly kill cancer cells or indirectly by releasing the anticancer agent from the gold nanocages. Because EGFR is overexpressed in chondrosarcoma[3], it constitutes an interesting cell surface target and the previous system is improvable. Gold nanocages can be conjugated with anti-EGFR antibodies. The binding between EGFR and anti-EGFR antibodies initiates receptor-mediated endocytosis and leads to an increased concentration of gold nanocages inside the tumor.
However, although the near-infrared window defines the range of wavelengths where light has its maximum depth of penetration in tissue, the penetration depth of near-infrared light in tumors remains limited, reducing the effectiveness of photothermal therapy. Because chondrosarcoma metastasizes most frequently to the lungs, it is important that near-infrared light reaches metastases or micro-metastases _in vivo_. For this reason, the treatment of metastases with photothermal therapy appears to be ineffective and further studies are needed in order to know whether photothermal therapy can be used in chondrosarcoma, especially before metastases have developed.
|
2307.11238 | The clique graphs of the hexagonal lattice -- an explicit construction
and a short proof of divergence | We present a new, explicit and very geometric construction for the iterated
clique graphs of the hexagonal lattice $\mathrm{Hex}$ which makes apparent its
clique-divergence and sheds light on some previous observations, such as the
boundedness of the degrees and clique sizes of $k^n \mathrm{Hex}$ as
$n\to\infty$. | Martin Winter | 2023-07-20T21:17:59Z | http://arxiv.org/abs/2307.11238v1 | The clique graphs of the hexagonal lattice - an explicit construction and a short proof of divergence
###### Abstract.
We present a new, explicit and very geometric construction for the iterated clique graphs of the hexagonal lattice Hex which makes apparent its clique-divergence and sheds light on some previous observations, such as the boundedness of the degrees and clique sizes of \(k^{n}\operatorname{Hex}\) as \(n\to\infty\).
Key words and phrases: clique graphs, hexagonal lattice, clique convergence, clique divergence, clique dynamics, lattice graphs. 2010 Mathematics Subject Classification: 05C69, 05C76, 05C63, 37E15.
## 1. Introduction
Given a (potentially infinite) simple graph \(G\), its _clique graph_\(kG\) is the intersection graph of the cliques in \(G\). More precisely, \(kG\) has as its vertices the _cliques_ of \(G\) (_i.e.,_ the inclusion maximal complete subgraphs), two of which are adjacent in \(kG\) if they have non-empty intersection in \(G\). A graph is said to be _clique divergent_ if its _iterated clique graphs_\(kG,k^{2}G,k^{3}G,...\) are pairwise non-isomorphic. It is called _clique convergent_ otherwise.
The _hexagonal lattice_ Hex (shown in Figure 1) is known to be clique divergent. This was first hinted at by the findings of [2, 3], which proved the clique divergence of its finite quotient graphs (\(6\)-regular triangulations of the torus and Klein bottle). Later, divergence was proven directly in [4], building on an explicit construction of the clique graphs of Hex (and of other graphs) introduced in [1].
The article at hand presents a new explicit, and as we find, rather neat construction of the clique graphs of the hexagonal lattice, that makes its clique-divergence completely apparent. Our construction is noteworthy in that, once the idea is presented, the proofs require little more than some \(3\)-dimensional intuition. Moreover,
Figure 1. The hexagonal lattice.
this new perspective sheds light on several previous observations, such as the boundedness of the degrees and clique sizes of \(k^{n}\operatorname{Hex}\) as \(n\to\infty\).
Even though the result applies to a single object only, we do believe that it is of interest: the hexagonal lattice itself and its quotients have received notable attention in the literature on clique dynamics, some results of which we were able to reproduce using more compact arguments.
In Section 2 we introduce the construction and the main result, which is proven in Section 3. In Section 4 we make some more observations regarding the construction and explain further relation to the literature.
## 2. Construction and statement of main result
In the following let \(G_{d}\) denote the _\(\ell^{\infty}\)-unit distance graph_ of the \(\mathbb{Z}^{d}\) lattice. That is, \(G_{d}\) has vertex set \(\mathbb{Z}^{d}\), with \(x,y\in\mathbb{Z}^{d}\) being adjacent in \(G_{d}\) if and only if their \(\ell^{\infty}\)-distance equals \(1\), that is, if
\[\|x-y\|_{\ell^{\infty}}:=\max_{i}|x_{i}-y_{i}|=1.\]
To establish our main result about the hexagonal lattice it is completely sufficient to restrict to \(d=3\), on which we shall focus in the following. The general definition is however still useful: the case \(d=2\) is especially suited for visualizations that provide intuition (_e.g._ see Figure 2). Moreover, our result generalizes in some form to \(d\in\{1,2,3\}\), fails however for \(d\geq 4\). We discuss this further in Section 4.4.
The relevance of \(d=3\) is as follows: the hexagonal lattice can be obtained as the subgraph of \(G_{3}\) induced on points with coordinate sum zero:
\[\operatorname{Hex}:=G_{3}\bigl{[}x\in\mathbb{Z}^{3}\,\big{|}\,x_{1}+x_{2}+x_{ 3}=0\bigr{]}.\]
The points with a given coordinate sum we shall call a _layer_ of \(G_{d}\). Our main observation is then that _all_ (even) clique graphs \(k^{2n}\operatorname{Hex}\) can be interpreted as subgraphs of \(G_{3}\) induced on one or more such layers.
For general \(d\geq 1\) and \(n\geq 0\) we introduce the _layered graph_
\[G_{d}(n):=G_{d}\Bigl{[}x\in\mathbb{Z}^{d}\,\big{|}\,\bigl{|}\sum_{i}x_{i} \bigr{|}\leq n\Bigr{]}.\]
In particular, \(\operatorname{Hex}=G_{3}(0)\). Our core result for the hexagonal lattice reads
\[k^{2n}\operatorname{Hex}\cong G_{3}(n),\]
from which clique-divergence is apparent (see also Section 4.2).
To also state the result for odd clique graphs \(k^{2n+1}\operatorname{Hex}\), we introduce the "dual" graph \(G_{d}^{*}\), _i.e.,_ the \(\ell^{\infty}\)-unit distance graph of the half-integer lattice \(\mathbb{Z}^{d}\,+\,\nicefrac{{1}}{{2}}\) (that is,
Figure 2. The graph \(G_{2}\) with a highlighted clique (a “\(2\times 2\) square”). All cliques are of this form.
all coordinates are half-integers). It is clearly isomorphic to \(G_{d}\). The corresponding layered graph \(G_{d}^{*}(n)\) is defined analogously:
\[G_{d}^{*}(n):=G_{d}^{*}\Big{[}x\in\mathbb{Z}^{d}+\nicefrac{{1}}{{2}}\,\Big{|}\, \big{|}\sum_{i}x_{i}\big{|}\leq n\Big{]}.\]
The main result can now be stated in full:
**Theorem 2.1**.: _There are natural isomorphisms_
\[kG_{3}(n)\cong G_{3}^{*}(n+\nicefrac{{1}}{{2}})\quad\text{and}\quad kG_{3}^{* }(n)\cong G_{3}(n+\nicefrac{{1}}{{2}}).\]
_Combining these yields \(k^{2}G_{3}(n)\cong G_{3}(n+1)\), and in particular,_
\[k^{n}\operatorname{Hex}\cong\begin{cases}G_{3}\big{(}\nicefrac{{n}}{{2}} \big{)}&\text{if $n$ is even}\\ G_{3}^{*}\big{(}\nicefrac{{n}}{{2}}\big{)}&\text{if $n$ is odd}\end{cases}.\]
## 3. Proof of main result
We believe that, once stated, verifying Theorem 2.1 is fairly straightforward. The proof below will contain no real surprises. It does however require us to verify some technical points that are best dealt with using some \(3\)-dimensional intuition.
We first present the main argument as a sequence of simple observations. Many of them are at least plausible from "visual inspection". For some of them we provide more detailed arguments further below:
1. The cliques of \(G_{3}\) are exactly the "\(2\times 2\times 2\) cubes" in \(G_{3}\), that is, they are of the form \(x+\{0,1\}^{3}\) with \(x\in\mathbb{Z}^{3}\) (Figure 2 shows the analogue situation in \(G_{2}\), where the cliques are "\(2\times 2\) squares").
2. A cube \(x+\{0,1\}^{3}\) in \(G_{3}\) has its centroid at \(x+\{\nicefrac{{1}}{{2}}\}^{3}\), which is a vertex of \(G_{3}^{*}\). This correspondence yields an isomorphism \(kG_{3}\cong G_{3}^{*}\).
3. Since \(G_{3}(n)\) is an induced subgraph of \(G_{3}\), each clique \(Q\) in \(G_{3}(n)\) extends to a clique \(\bar{Q}\) in \(G_{3}\), that is, \(Q=\bar{Q}\cap G_{3}(n)\). In fact, more is true: 1. the extension \(\bar{Q}\) is unique. 2. cliques \(Q_{1},Q_{2}\) in \(G_{3}(n)\) intersect if and only if their extensions \(\bar{Q}_{1},\bar{Q}_{2}\) intersect in \(G_{3}\). We will provide justification for a. and b. below.
The discussion so far allows us to define a graph embedding
\[\iota\colon kG_{3}(n)\hookrightarrow kG_{3}\stackrel{{\sim}}{{ \rightarrow}}G_{3}^{*},\quad Q\stackrel{{(iii)\text{ a.}}}{{ \longmapsto}}\bar{Q}\stackrel{{(i)}}{{=}}x+\{0,1\}^{3} \stackrel{{(ii)}}{{\longmapsto}}x+\{\nicefrac{{1}}{{2}}\}^{3},\]
and we can consider \(kG_{3}(n)\) as a subgraph \(kG_{3}(n)\cong\operatorname{im}(\iota)\subseteq G_{3}^{*}\). By \((iii)\) b. we can consider \(kG_{3}(n)\) even as an _induced_ subgraph of \(G_{3}^{*}\). It therefore remains to determine the vertices of \(G_{3}^{*}\) in the image of \(\iota\). We need two more observations:
Figure 3. The layered graphs \(G_{2}(0),G_{2}(1)\) and \(G_{2}(3)\).
* A clique \(\tilde{Q}\) in \(G_{3}\) is an extension of a clique in \(G_{3}(n)\) if and only if \(\tilde{Q}\) intersects \(G_{3}(n)\) in _at least two_ vertices (see Figure 4 for the analogous situation within \(G_{2}\))
* A \(2\times 2\times 2\) cube \(x+\{0,1\}^{3}\) intersects \(G_{3}(n)\) in at least two vertices if and only if its centroid \(x+\{\nicefrac{{1}}{{2}}\}^{3}\) has coordinate sum of absolute value \(\leq n+\nicefrac{{1}}{{2}}\).
This shows that \(kG_{3}(n)\cong\operatorname{im}(\iota)=G_{3}^{*}(n+\nicefrac{{1}}{{2}})\) and concludes this part of the proof. By swapping \(G_{3}\) and \(G_{3}^{*}\) we obtain an analogous proof for the other isomorphism.
We now provide arguments for (_iii_) a., b., as well as (_iv_) and (_v_).
**Claim (_iii_), a.** _Every clique \(Q\) in \(G_{3}(n)\) has a unique extension \(\tilde{Q}\) in \(G_{3}\)._
Suppose that there are two distinct cliques (aka. cubes) \(\tilde{Q},\tilde{Q}^{\prime}\) in \(G_{3}\) that extend the clique \(Q\) of \(G_{3}(n)\). Then \(Q\subseteq\tilde{Q}\cap\tilde{Q}^{\prime}\), which is a shared face of the two cubes. However, considering Figure 5 we see that an intersection of a \(2\times 2\times 2\) cube with \(G_{3}(n)\) that has at least two vertices (such as \(Q\)) never lies completely inside a face.
Figure 4. The “squares” A, B and C intersect \(G_{2}(1)\), but only B and C intersect in more than one vertex and therefore correspond to cliques in \(G_{2}(1)\). Those squares then yield vertices of \(kG_{2}(1)\). Note that \(kG_{2}(1)\cong G_{2}^{*}(1)\) in contrast to \(d=3\), where \(kG_{3}(1)\cong G_{3}^{*}(1+\nicefrac{{1}}{{2}})\).
Figure 5. The seven ways in which a \(2\times 2\times 2\) cube can intersect \(G_{3}(n)\) in at least two vertices. The dashed lines represent the layers of \(G_{3}\) that intersect the cube (the numbers are for reference and do not necessarily indicate the coordinate sum of the layer). The line is black if the layer is in \(G_{3}(n)\).
**Claim (_iii_), b.** _Cliques \(Q_{1},Q_{2}\) in \(G_{3}(n)\) intersect in \(G_{3}(n)\) if and only if their extensions \(\tilde{Q}_{1},\tilde{Q}_{2}\) intersect in \(G_{3}\)._
One direction is obvious. For the other direction consider Figure 6: it shows the five ways in which two distinct \(2\times 2\times 2\) cubes in \(G_{3}\) can intersect (up to symmetry of \(G_{3}(n)\)). The figure also highlights layers of \(G_{3}(n)\) that must necessarily intersect the cubes in order for \(G_{3}(n)\) to intersect each cube in at least two vertices. It is evident from the figure that these intersections necessarily contain vertices that lie in both cubes. In other words, these cubes also intersect when restricted to \(G_{3}(n)\).
**Claim (_iv_).** _A clique \(\tilde{Q}\) in \(G_{3}\) is an extension of a clique in \(G_{3}(n)\) if and only if \(\tilde{Q}\) intersects \(G_{3}(n)\) in at least two vertices._
If \(\tilde{Q}\cap G_{3}(n)\) has a single vertex, then it is not a clique of \(G_{3}(n)\), since \(G_{3}(n)\) has no isolated vertices. Conversely, suppose \(\tilde{Q}\) intersects \(G_{3}(n)\) in at least two vertices. Let \(Q\) be a clique of \(G_{3}(n)\) that contains \(\tilde{Q}\cap G_{3}(n)\), and let \(\tilde{Q}^{\prime}\) be its extension. If \(\tilde{Q}\neq\tilde{Q}^{\prime}\) then \(\tilde{Q}\cap G_{3}(n)\subseteq\tilde{Q}\cap\tilde{Q}^{\prime}\) must be a shared face of the cubes. But considering once more the possible ways in which a cube can intersect \(G_{3}(n)\) in at least two vertices in Figure 5, we see that this is not possible. Thus \(\tilde{Q}\cap G_{3}(n)=\tilde{Q}^{\prime}\cap G_{3}(n)=Q\) is a clique.
**Claim (_v_).** _A cube \(\tilde{Q}:=x+\{0,1\}^{3}\) intersects \(G_{3}(n)\) in at least two vertices if and only if its centroid \(x+\{\nicefrac{{1}}{{2}}\}^{3}\) has coordinate sum of absolute value \(\leq n+\nicefrac{{1}}{{2}}\)._
Let \(s\) be the coordinate sum of \(x\). The layers of \(G_{3}\) that intersect \(\tilde{Q}\) in at least two vertices have coordinate sum \(s+1\) and \(s+2\). Thus, for \(G_{3}(n)\) to intersect \(\tilde{Q}\) we require \(|s+1|\leq n\) or \(|s+2|\leq n\). Elementary computation shows that this is equivalent to \(|s+\nicefrac{{3}}{{2}}|\leq n+\nicefrac{{1}}{{2}}\). Since \(s+\nicefrac{{3}}{{2}}\) is the coordinate sum of the centroid of \(\tilde{Q}\), the claim follows.
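Spelled out, the elementary computation uses that \(s\) is an integer:
\[|s+1|\leq n\ \text{ or }\ |s+2|\leq n\iff s\in\{-n-2,\dots,n-1\}\iff\big{|}s+\nicefrac{{3}}{{2}}\big{|}\leq n+\nicefrac{{1}}{{2}}.\]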
## 4. Further comments
### Bounded degree and clique number
The vertex degree of \(G_{3}\) is \(3^{3}-1=26\). As we have seen, \(k^{n}\operatorname{Hex}\) appears as a subgraph of \(G_{3}\), which shows that the vertex degrees in \(k^{n}\operatorname{Hex}\) stay bounded as \(n\to\infty\). The number \(26\) also played a major role in the proofs of [4], where it was considered a curiosity. Our construction provides an explanation for the appearance of this peculiar number.
We also note that the clique number of \(G_{3}\) is \(8\). This fact gives a concise explanation for the observation that clique numbers of \(k^{n}\operatorname{Hex}\) stay bounded as \(n\to\infty\). This has previously been observed for the finite quotients of Hex in [2, 3].
Figure 6. The five configurations (up to symmetries of \(G_{3}(n)\)) in which two distinct \(2\times 2\times 2\) cubes in \(G_{3}\) can intersect. Each figure highlights the minimal amount of consecutive layers that intersects each cube in at least two vertices (strictly speaking, in the second case from the left it must be at least one of the layers \(1\) and \(2\)).
### Clique-divergence of Hex
The explicit form \(k^{2n}\operatorname{Hex}\cong G_{3}(n)\) makes apparent the clique-divergence of Hex. Here is a more explicit argument: consider the subgraph \(H_{n}\) of \(G_{3}(n)\) induced on all vertices of degree \(<26\). One can show that for \(n\) sufficiently large \(H_{n}\) has two connected components, and the graph-theoretic distance between those components diverges as \(n\to\infty\).
### Clique-convergence of \(G_{d}\)
As noted in Section 3 (_ii_), we have \(kG_{3}\cong G_{3}^{*}\), and by symmetry, \(kG_{3}^{*}\cong G_{3}\). Thus \(k^{2}G_{3}\cong G_{3}\), and \(G_{3}\) is clique-convergent.
The argument applies completely analogously to \(G_{d}\) for general \(d\geq 1\): observe first that the cliques in \(G_{d}\) are exactly the \(2\times\dots\times 2\) cubes \(x+\{0,1\}^{d}\). The isomorphism \(kG_{d}\cong G_{d}^{*}\) is then given by \(x+\{0,1\}^{d}\mapsto x+\left\{\nicefrac{{1}}{{2}}\right\}^{d}\).
For example, for \(d=1\) we have that \(G_{1}\) is the infinite path graph, which indeed is clique-convergent.
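As a quick numerical sanity check (our own illustration, not part of the argument), the \(d=2\) case can be verified on a finite window using the `networkx` package; the window sizes and the helper below are choices made purely for illustration.

```python
# Check on a finite patch: maximal cliques of the l^infty unit-distance graph on
# {0,...,n}^2 are exactly the 2x2 squares, and their intersection graph is again
# a window of the same kind (the finite analogue of kG_2 = G_2^*).
import itertools
import networkx as nx

def window_graph(n):
    """l^infty unit-distance graph on the vertex set {0,...,n}^2."""
    G = nx.Graph()
    pts = list(itertools.product(range(n + 1), repeat=2))
    G.add_nodes_from(pts)
    G.add_edges_from((p, q) for p, q in itertools.combinations(pts, 2)
                     if max(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1)
    return G

G = window_graph(4)
cliques = [frozenset(c) for c in nx.find_cliques(G)]
for Q in cliques:                      # every maximal clique is a square x + {0,1}^2
    xs, ys = sorted({p[0] for p in Q}), sorted({p[1] for p in Q})
    assert len(Q) == 4 and len(xs) == len(ys) == 2
    assert xs[1] - xs[0] == 1 and ys[1] - ys[0] == 1

K = nx.Graph()                         # intersection graph of the cliques
K.add_nodes_from(cliques)
K.add_edges_from((a, b) for a, b in itertools.combinations(cliques, 2) if a & b)
assert nx.is_isomorphic(K, window_graph(3))
```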
### Other values for \(d\)
In this article we have been motivated mainly by the clique dynamics of the hexagonal lattice, and therefore, the case \(d=3\). As it turns out, the statement of Theorem 2.1 and its proof given in Section 3 can be easily adjusted to also work with a few other values of \(d\), though \(d=3\) remains the most interesting one of them:
**Theorem 4.1**.: _If \(d\in\{1,2,3\}\) and \(n\geq 0\), but \((d,n)\neq(1,0)\), then there are natural isomorphisms_
\[kG_{d}(n)\cong G_{d}^{*}(n+d/2-1)\quad\text{and}\quad kG_{d}^{*}(n)\cong G_{d} (n+d/2-1).\]
_Combining these yields \(k^{2}G_{d}(n)\cong G_{d}(n+d-2)\)._
Let us consider the values \(d\in\{1,2\}\) in some more detail, and also explain where the proof fails for \(d\geq 4\).
For \(\boldsymbol{d=1}\) the graph \(G_{1}(0)\) is a single vertex and must be excluded from Theorem 4.1 (in this case the proof in Section 3 fails in step \((\mathit{iv})\), where we require that \(G_{1}(0)\) has no isolated vertices). For general \(n\geq 1\), the graph \(G_{1}(n)\) is a path of length \(2n+1\). The peculiarity of the case \(d=1\) is that \(k^{2}G_{1}(n)\cong G_{1}(n-1)\), that is, the clique graphs are shrinking, completely in agreement with what we expect from the finite path graph.
For \(\boldsymbol{d=2}\) we have \(k^{2}G_{2}(n)\cong G_{2}(n)\), and so the clique graphs are "stable". A special case is \(G_{2}(0)\), which is the infinite path. See Figure 3 for other examples.
For \(\boldsymbol{d\geq 4}\) there is no direct analogue of Theorem 4.1. The proof of Section 3 fails in step \((\mathit{iii})\) b.: two cliques \(Q_{1},Q_{2}\) in \(G_{4}(n)\) can be disjoint, while their extensions in \(G_{4}\) intersect. Here is an example: the \(2\times 2\times 2\times 2\) cubes
\[(1,-1,0,0)+\{0,1\}^{4}\quad\text{and}\quad(0,0,1,-1)+\{0,1\}^{4}\]
intersect only in the point \((1,0,1,0)\). Yet their intersections with \(G_{4}(1)\) are disjoint, even though each cube intersects \(G_{4}(1)\) in at least two vertices. We can however still find \(kG_{d}(n)\) as a spanning subgraph of \(G_{d}^{*}(n+d/2-1)\), and vice versa.
### Triangulations of the torus and the Klein bottle
Any group action \(\Gamma\curvearrowright\) Hex extends uniquely to actions \(\Gamma\curvearrowright G_{3}\) and \(\Gamma\curvearrowright G_{3}(n)\) that preserve coordinate sums. Taking the quotient of \(G_{3}(n)\) by such an action yields an explicit description for the clique graphs of the quotient \(T:=\operatorname{Hex}/\Gamma\), which is a \(6\)-regular triangulation
of an unbounded surface (_i.e.,_ the torus, the Klein bottle, the infinite cylinder, the infinite Mobius strip or the plane):
\[k^{2n}T=k^{2n}(\operatorname{Hex}/\Gamma)\cong(k^{2n}\operatorname{Hex})/\Gamma \cong G_{3}(n)/\Gamma.\]
Some more technicalities are involved in verifying the two isomorphisms (see also [4, Lemma 4.4]), but all in all, we obtain a concise description of the clique graphs first mentioned in [3], which also makes transparent their linear growth as \(n\to\infty\).
### Relation to the geometric clique graph
The geometric clique graphs \(\mathcal{G}_{n}\) (introduced in [1]) provide an alternative description for the clique graphs of the hexagonal lattice (and more generally, of all "locally cyclic graphs of minimum degree \(\delta\geq 6\)"). The vertices of \(\mathcal{G}_{n}\) are the triangular-shaped subgraphs of \(\operatorname{Hex}\) (shown in Figure 7) of side length \(m\), where \(m\leq n\) and \(m\equiv n\pmod{2}\), subject to a non-trivial set of rules for adjacency (see [1, Definition 4.1] or [4, Definition 2.1]). It was proven in [1, Theorem 6.8 + Corollary 7.8] that \(k^{n}\operatorname{Hex}\cong\mathcal{G}_{n}\).
Our description of \(k^{n}\operatorname{Hex}\) allows for an alternative interpretation of \(\mathcal{G}_{n}\) and yields a natural explanation for the otherwise ad hoc adjacency rules. Define the positive resp. negative orthant:
\[O^{+}:=\{x\in\mathbb{Z}^{3}\mid x_{1},x_{2},x_{3}\geq 0\}\quad\text{and}\quad O ^{-}:=\{x\in\mathbb{Z}^{3}\mid x_{1},x_{2},x_{3}\leq 0\}\]
and set \(O^{\pm}:=O^{+}\cup O^{-}\). To each vertex \(x\in G_{3}\,\cup\,G_{3}^{*}\) (which is a point with integer or half-integer coordinates) we associate a triangular-shaped subgraph \(T_{x}\subset\operatorname{Hex}=G_{3}(0)\) as follows (_cf._ Figure 8):
\[T\colon\,x\ \longmapsto\ T_{x}:=G_{3}(0)\cap(2x+O^{\pm}).\]
This yields an interpretation for the vertices of \(k^{2n}\operatorname{Hex}\cong G_{3}(n)\) resp. \(k^{2n-1}\operatorname{Hex}\cong\)
Figure 8. The intersection of \(\operatorname{Hex}=G_{3}(0)\) with \(x+O^{\pm}\), yields a triangular-shaped subgraph \(T_{x}\).
Figure 7. The “triangular-shaped subgraphs” of \(\operatorname{Hex}\) of side length \(0\),..., \(4\), as used in the construction of the geometric clique graph \(\mathcal{G}_{n}\)
\(G_{3}^{*}(n)\) as triangular-shaped subgraphs of Hex which is in accordance with the interpretation from \(\mathcal{G}_{n}\). In fact, \(x,y\in G_{3}\,\cup\,G_{3}^{*}\) are adjacent if and only if \(T_{x}\) and \(T_{y}\) are adjacent in \(\mathcal{G}_{n}\), providing a new interpretation for the adjacency rules in \(\mathcal{G}_{n}\).
|
2303.05962 | Entropy Coding Improvement for Low-complexity Compressive Auto-encoders | End-to-end image and video compression using auto-encoders (AE) offers new
appealing perspectives in terms of rate-distortion gains and applications.
While most complex models are on par with the latest compression standard like
VVC/H.266 on objective metrics, practical implementation and complexity remain
strong issues for real-world applications. In this paper, we propose a
practical implementation suitable for realistic applications, leading to a
low-complexity model. We demonstrate that some gains can be achieved on top of
a state-of-the-art low-complexity AE, even when using simpler implementation.
Improvements include off-training entropy coding improvement and encoder side
Rate Distortion Optimized Quantization. Results show a 19% improvement in
BDrate on basic implementation of fully-factorized model, and 15.3% improvement
compared to the original implementation. The proposed implementation also
allows a direct integration of such approaches on a variety of platforms. | Franck Galpin, Muhammet Balcilar, Frédéric Lefebvre, Fabien Racapé, Pierre Hellier | 2023-03-10T14:50:18Z | http://arxiv.org/abs/2303.05962v2 | # Entropy Coding Improvement for Low-complexity Compressive Auto-encoders
###### Abstract
End-to-end image and video compression using auto-encoders (AE) offers new appealing perspectives in terms of rate-distortion gains and applications. While most complex models are on par with the latest compression standard like VVC/H.266 on objective metrics, practical implementation and complexity remain strong issues for real-world applications. In this paper, we propose a practical implementation suitable for realistic applications, leading to a low-complexity model. We demonstrate that some gains can be achieved on top of a state-of-the-art low-complexity AE, even when using simpler implementation. Improvements include off-training entropy coding improvement and encoder side Rate Distortion Optimized Quantization. Results show a 19% improvement in BDrate on basic implementation of fully-factorized model, and 15.3% improvement compared to the original implementation. The proposed implementation also allows a direct integration of such approaches on a variety of platforms.
## 1 Introduction
Rate-distortion autoencoders [1] form the backbone of modern neural codecs, where the latent representation is conditioned to optimize a rate-distortion loss function. These methods are a special case of the Variational Autoencoder (VAE) [2] with three key differences: (i) the posterior is a uniform distribution centered on the encoder's outputs (latents) at training time, (ii) the output distribution has a fixed variance and (iii) the priors are trainable [3, 4]. It was shown that minimizing the evidence lower bound (ELBO) of this special VAE is equivalent to jointly minimizing the mean square error (MSE) of the reconstruction and the entropy of the latents w.r.t. the priors [5]. Existing models differ mainly by the modelling of the priors and the choice of encoder/decoder architecture. The simplest model, called the fully-factorized model, uses a small number of convolutional layers and learns the priors in a non-parametric way [4]. This model was then improved by the hyperprior model, where the parametric prior is a function of side information and modelled by a zero-mean Gaussian [5], a Gaussian [6, 7] or a mixture of Gaussians [8]. These models are deeper and use an additional encoder-decoder pair for the side information. From an architectural perspective, attention layers [8, 9], frequency decomposition layers [9] and invertible layers [10] are recent developments to improve the codec's performance. All these codecs improve performance on top of the original design, but they increase the complexity and reduce the practicality of the decoder for two main reasons. Firstly, the complexity (measured for example in MAC/sample) is increased because of the added networks and layers.
Secondly, the parallelism potential is reduced and the latency increased because of the addition of the hyper-prior and the use of an autoregressive scheme for context-adaptive coding.
The best performing neural codecs are almost on par with the latest compression standard on generic image compression and offer many advantages over standard codecs, such as easy adaptation to perceptual distortion metrics and high performance on specific domains thanks to their learning ability. However, they are currently impractical on small devices because of their computational complexity and energy consumption. It has been reported that a neural decoder needs more than 1000 times the number of MACs/pixel compared to standard codecs [11][12]. Even with network quantization and distillation as in [13], the necessary number of MACs can be around 500 times higher than for standard codecs. This complexity issue is one of the most important problems making neural codecs impractical for many applications.
Since learning-based models are trained on large datasets, they may be optimal on average over the training set, but they are likely to be suboptimal for any single image; this problem is called the amortization gap [14]. In the image compression context, this problem can be solved by improving the rate-distortion objective at encoding time for a given single image. In the literature, the proposed methods to reduce this gap are threefold: the first ones finetune the latents for the given image [15, 16, 17, 18, 19], the second ones finetune the model parameters [20, 21, 22, 23, 24] for better performance on a given single image, and the third ones re-parameterize the entropy model in order to better fit the latents without fine-tuning the model [25]. In practice, only the first class of methods keeps the same decoder and does not increase the decoder complexity.
In this paper, we propose a practical implementation of a low-complexity AE. More specifically, we demonstrate a practical implementation on a simple model, derived from a fully-factorized model [4] using the entropy parametrization found in [5] as implemented in [26]:
* we replace the greedy GDN/IGDN activations by ReLU activations to obtain a more hardware-friendly implementation.
* we distill the neural codec into a full 16-bit integer network using simplified integerized operations with the SADL framework [27].
* we learn a new context-switching-based conditional entropy model on top of the latent representation initially learned with the factorized entropy model.
* we propose a new rate-distortion optimization process at encoding time in order to decrease the amortization gap, without the need for an external framework, using only the distilled decoder.
We first present the general process to train a compressive auto-encoder. We then present the process to derive an integerized version of the model. A conditional entropy model is then computed from the extracted model. The latent optimization stage is then presented. Results on the Kodak dataset are finally presented.
## 2 Compressive Auto-encoders
### Overview
In this section, we introduce the basic principles of the fully-factorized compressive auto-encoder including the training, inference and quantization steps. Let \(\mathbf{x},\mathbf{\hat{x}}\in\mathbf{R}^{n\times n\times 3}\) be respectively an RGB image and its reconstruction, with an \(n\times n\) size (the image is considered square without any loss of generality). Let \(\mathbf{y},\mathbf{\hat{y}}\in\mathbf{R}^{m\times m\times s}\) be respectively the continuous latent and the quantized latent (or its noise-added relaxation at training time), where \(m\times m\) is the spatial resolution and \(s\) is the number of channels. \(\mathbf{Q}(.)\) is an element-wise function that applies nearest integer quantization at test time (or its continuous relaxation at train time as \(\mathbf{Q}(x)=x+\epsilon\) with \(\epsilon\sim U(-0.5,0.5)\)).
Let us now summarize all the steps of the compressive autoencoder at test time. The sender inputs the image \(\mathbf{x}\) to obtain the continuous latent \(\mathbf{y}=g_{a}(\mathbf{x};\phi)\) and its quantized version \(\mathbf{\hat{y}}=\mathbf{Q}(\mathbf{y})\). The quantized latents \(\mathbf{\hat{y}}\) are then converted into a bitstream using the learned entropy model \(p_{f}(.|\Psi)\). The receiver decodes the quantized latents \(\mathbf{\hat{y}}\) from the bitstream using the shared entropy model \(p_{f}(.|\Psi)\) and the reconstructed image is obtained as \(\mathbf{\hat{x}}=g_{s}(\mathbf{\hat{y}};\theta)\). The encoder block \(g_{a}\), decoder block \(g_{s}\) and entropy model \(p_{f}\) are trainable models implemented with neural networks, and parameterized by \(\phi\), \(\theta\) and \(\Psi\) respectively.
The compressive auto-encoder optimizes \(\phi\), \(\theta\) and \(\Psi\) by minimizing two objectives simultaneously. The first one is any differentiable distortion loss between \(\mathbf{x}\) and \(\mathbf{\hat{x}}\), while the second one is the length of the bitstream encoding \(\mathbf{\hat{y}}\). Since \(\mathbf{\hat{y}}\) is losslessly encoded using an entropy encoder such as range asymmetric numeral systems (RANS) [28], and because RANS is an asymptotically optimal encoder, according to Shannon's source coding theorem the lower bound on the bitlength can be used as an objective function. In contrast to the experimental bitlength from RANS, this lower bound is differentiable. As a result, the loss function of the fully-factorized compressive auto-encoder can be written as follows:
\[\mathcal{L}=\mathop{\mathbb{E}}_{\begin{subarray}{c}\mathbf{x}\sim p_{x}\\ \epsilon\sim U\end{subarray}}\left[-log(p_{f}(\mathbf{\hat{y}}|\Psi))+ \lambda.d(\mathbf{x},\mathbf{\hat{x}})\right]. \tag{1}\]
Here, \(d(.,.)\) is any distortion loss such as the MSE for the PSNR metric, \(\lambda\) is a trade-off hyperparameter between compression ratio and quality, and \(-log(p_{f}(\mathbf{\hat{y}}|\Psi))\) is the lower bound on the bitlength of the encoded \(\mathbf{\hat{y}}\). In order to entropy encode/decode the quantized latents, RANS needs the probability mass function (PMF) of each quantized latent in \(\mathbf{\hat{y}}\). The fully-factorized entropy model provides it by learning \(s\) cumulative distribution functions (CDFs), denoted \(\bar{p}_{\Psi}^{(c)}(.):\mathbf{R}\rightarrow\mathbf{R}\) for \(c=1\dots s\) and implemented by neural networks. Under nearest integer quantization, the necessary PMFs can be derived from the CDFs by \(\hat{p}_{\Psi}^{(c)}(x)=\bar{p}_{\Psi}^{(c)}(x+0.5)-\bar{p}_{\Psi}^{(c)}(x-0.5)\). Since each PMF is dedicated to a single \(m\times m\) slice of the latent (one per feature channel) and the slices are treated independently, the entropy model factorizes as follows [4]:
\[p_{f}(\mathbf{\hat{y}}|\Psi)=\prod_{c=1}^{s}\prod_{i,j=1}^{m,m}\hat{p}_{\Psi}^ {(c)}(\mathbf{\hat{y}}_{i,j,c}). \tag{2}\]
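As an illustration (not code from the paper), the rate term can be evaluated directly from the learned per-channel CDFs; the callable `cdf(c, x)` standing in for \(\bar{p}_{\Psi}^{(c)}\), the tensor shape and the clipping constant are assumptions made here.

```python
import numpy as np

def rate_lower_bound(y_hat, cdf):
    """Bitlength lower bound (in bits) of a quantized latent y_hat of shape (m, m, s).
    `cdf(c, x)` is assumed to evaluate the learned CDF of channel c element-wise at x."""
    bits = 0.0
    for c in range(y_hat.shape[-1]):
        v = y_hat[..., c]
        pmf = cdf(c, v + 0.5) - cdf(c, v - 0.5)         # PMF from the CDF under integer quantization
        bits -= np.sum(np.log2(np.maximum(pmf, 1e-9)))  # clip to avoid log(0)
    return bits
```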
### Architecture
In our work we propose to improve over the baseline [4] with the entropy bottleneck proposed in [5], where
* \(g_{a}\) employs 4 convolutional layers (and subsample) and 3 nonlinear Generalized Divisive Normalizations (GDN)
* \(g_{s}\) employs 4 deconvolutional layers (convolutional and upsample) and 3 nonlinear inverse Generalized Divisive Normalizations (iGDN)
The GDN is a composition of linear transformations, followed by a generalized form of divisive normalization. This activation/normalization layer includes a division and a square root, which are not suitable for practical hardware implementation. As in [29], the GDN operation is replaced by ReLU, which is a more implementation-friendly activation.
### Quantization
To avoid floating point operations when running the neural network at inference, the network uses only 16-bit integer arithmetic. This means that both the network parameters and the latent must be quantized, and all operations are replaced by integer equivalents.
This proposal uses a static quantization approach. However, to further reduce the complexity, the integerized network uses a simplified quantization scheme (see [27]):
* the quantized parameters or latent do not use zero point, avoiding some additional operations,
* the quantizers are restricted to powers of 2, allowing operations to be performed using bit shifting only (instead of costly multiplications/divisions); a sketch of such a power-of-two quantizer is given below.
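In this sketch (ours, not the actual SADL code), the 16-bit range handling and the helper name are assumptions:

```python
import numpy as np

def quantize_pow2(w, num_bits=16):
    """Static power-of-two quantization without zero point: w ~= q * 2**(-shift),
    so that rescaling in the integer network reduces to bit shifts."""
    max_abs = float(np.max(np.abs(w))) + 1e-12
    qmax = 2 ** (num_bits - 1) - 1
    shift = int(np.floor(np.log2(qmax / max_abs)))      # largest power-of-two scale that fits
    q = np.clip(np.round(w * 2.0 ** shift), -qmax - 1, qmax).astype(np.int16)
    return q, shift
```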
## 3 Entropy coding
### Post-training conditional entropy
#### 3.1.1 Context modeling
Starting from a fully-factorized model with a basic entropy model, we enhance it by computing a conditional entropy model post-training. The method can be applied to any entropy model. We apply it here to a non-parametric entropy model using Cumulative Distribution Functions (CDFs).
To limit the latency and the computational complexity of the entropy decoder, a simple context modeling is performed to select the distribution used for decoding a latent value which can be deduced using additions and comparisons only. In detail, for each value of the latent \(v_{i,j,k}\) where \(i,j\) are the spatial components and \(k\) the
channel component, its context is selected as
\[C=\begin{cases}0,\text{ if }v_{i-1,j,k}<\epsilon_{k}\wedge v_{i,j-1,k}<\epsilon_{k} \wedge v_{i,j,k-1}<\epsilon_{k-1}\\ 1,\text{ if }v_{i-1,j,k}>=\epsilon_{k}\oplus v_{i,j-1,k}>=\epsilon_{k}\oplus v_{i,j, k-1}>=\epsilon_{k-1}\\ 3,\text{ if }v_{i-1,j,k}>=\epsilon_{k}\wedge v_{i,j-1,k}>=\epsilon_{k}\wedge v_{i,j, k-1}>=\epsilon_{k-1}\\ 2,\text{ otherwise}\end{cases} \tag{3}\]
where \(\epsilon_{k}\) is a threshold defined per channel. The context \(C\) is then used to choose among the 4 CDFs for a particular channel \(k\).
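In code, the context selection of Eq. (3) amounts to counting how many of the three causal neighbours exceed their channel threshold (reading the \(\oplus\) case as "exactly one"); the border handling below, where missing neighbours are treated as below threshold, is an assumption since the text does not specify it.

```python
def compute_context(v, i, j, k, eps):
    """Context index in {0, 1, 2, 3} for latent position (i, j, k), following Eq. (3).
    v is the latent tensor of shape (m, m, s); eps[k] is the per-channel threshold."""
    def above(ii, jj, kk):
        if min(ii, jj, kk) < 0:          # out-of-range neighbour: assumed below threshold
            return 0
        return int(v[ii, jj, kk] >= eps[kk])
    # 0: no neighbour above threshold, 1: exactly one, 2: exactly two, 3: all three
    return above(i - 1, j, k) + above(i, j - 1, k) + above(i, j, k - 1)
```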
#### 3.1.2 Channel activation
Another proposed improvement of the channels coding is the use of an activation bit for each channel. A channel is considered activated when at least one value is different from the most probable value of the channel (as deduced from the channel distribution).
In order to encode the activation bit of the channel, the probability of activation of a channel is computed over a large dataset. The activation bit is then entropy coded using this probability.
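As a small illustration (ours, not from the paper), the ideal cost in bits of this flag under entropy coding with the estimated probability is:

```python
import math

def activation_bit_cost(active, p_active):
    """Ideal bit cost of the per-channel activation flag, entropy coded with the
    probability p_active estimated over a large dataset."""
    return -math.log2(p_active if active else 1.0 - p_active)
```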
#### 3.1.3 CDF computation
Statistics on latent variables are gathered over a large dataset to compute the CDFs for each channel. In practice, the CDFs are extracted as the normalized, strictly monotonic, cumulative histograms for each channel and each context in [0..3] as shown in Listing 1.
```python
import numpy as np

# Accumulate per-context, per-channel histograms of the quantized latent values over a
# large dataset, then normalize them into cumulative histograms (CDFs).
# `dataset`, `num_channels`, `num_bins`, `eps` and `v_min` are placeholders, as in the
# original pseudocode.
H = np.zeros((4, num_channels, num_bins))              # one histogram per context and channel
for latents in dataset:                                # latents: integer array of shape (m, m, s)
    m = latents.shape[0]
    for c in range(num_channels):
        for i in range(m):
            for j in range(m):
                ctx = compute_context(latents, i, j, c, eps)   # context in {0,1,2,3}, Eq. (3)
                H[ctx, c, latents[i, j, c] - v_min] += 1
CDF = np.cumsum(H, axis=-1)
CDF = CDF / np.maximum(CDF[..., -1:], 1)               # normalize each cumulative histogram
```
Listing 1: Extraction of contextualized CDF
#### 3.1.4 Channel ordering
In the above context modeling, it is assumed that the conditional entropy between 2 consecutive channels is low. As the training stage does not enforce such a constraint, as opposed to a fully auto-regressive model, we perform a reordering of the channels for encoding.
We first define two entropy coding methods:
* K1 uses a spatial only conditional entropy coding, using 3 CDFs per channel to encode each variable of the latent, conditioned by the values of its top and left neighbors.
* K2 uses the full conditional entropy coding, conditioned by the values of both the spatial neighbors and the collocated variable in the previous channel.
First, the channel with the highest K1 entropy, measured over a dataset of latents, is selected as the first channel to encode. The remaining channels are then selected iteratively: for each remaining channel, a full encoding using the K2 model is performed to compute its entropy, and the difference with respect to its entropy under the K1 model is computed. The channel with the best entropy gain is selected as the next channel to encode. The process is repeated until all channels have been sorted, as sketched below.
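A compact sketch of this greedy ordering follows. The helper names `entropy_K1` and `entropy_K2` are our placeholders for the (not shown) rate estimates of the two coding modes measured over a dataset of latents:

```
def order_channels(channels, entropy_K1, entropy_K2):
    """Greedy channel ordering for conditional entropy coding.

    channels:   list of channel indices
    entropy_K1: entropy_K1(c) -> rate of channel c with spatial-only contexts
    entropy_K2: entropy_K2(c, prev) -> rate of c when also conditioned on prev
    Returns the encoding order (a permutation of `channels`).
    """
    remaining = list(channels)
    # start with the channel of highest spatial-only (K1) entropy
    first = max(remaining, key=entropy_K1)
    order = [first]
    remaining.remove(first)
    while remaining:
        prev = order[-1]
        # pick the channel whose K2 coding gains the most over its K1 coding
        best = max(remaining, key=lambda c: entropy_K1(c) - entropy_K2(c, prev))
        order.append(best)
        remaining.remove(best)
    return order
```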
#### 3.1.5 Thresholds computation
The thresholds used in eq. 3 are computed per channel. For each channel independently, the threshold that minimizes the entropy of the channel under the K1 model is selected. Each threshold is then associated with its channel and used to derive the spatial and inter-channel contexts.
### Inference based RDOQ
Finally, a Rate-Distortion Optimized Quantization (RDOQ) is performed on the latent. In the proposed method, we do not rely on a backward pass of the decoder, which would assume a differentiable loss as in [17]. Instead, we directly use the decoder inference to optimize the latent, as in traditional codecs. The overall naive process is to modify each value of the latent and to test whether the RD-cost \(D+\lambda R\) decreases at each step. The advantages of this approach are the following:
* it only relies on the availability of the decoder inside the encoder,
* no gradient computations are needed, which would require the original floating point model,
* no differentiable loss, especially for the rate term, is required, which would not be possible when using simplified context switching entropy models.
On the other hand, this approach can be computationally expensive since it requires an iterative update of the latent variables. In order to reduce the complexity and speed up the process, the following improvements are added (a sketch of the resulting loop follows the list):
* for each variable, only a limited variation of the value is considered, typically \([-1,1]\). Moreover, the variation is only tested for values that are not already optimally entropy coded, i.e., for values which are not already the most probable value in the distribution.
* as the modification of a particular latent value only has an impact inside the receptive field centered around this value, only a subset of the latent centered around the value under test is used, in order to reduce the complexity. Moreover, the impact of the value modification near the border of the receptive field is negligible in the distortion change. In practice, using a subset of the tensor with a size of half the receptive field, centered on the value under test, is enough.
* to speed up the process on multi-core architectures, the latent optimization is done in parallel over several channels. As the latent updates may then not be deterministic, the process is repeated several times until convergence. In practice, we found that 3 passes are enough to converge towards a minimum of the rate-distortion cost.
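The sketch below spells out the naive loop with the local-window cost evaluation; the function signatures are ours, and the "skip already most-probable values" and channel-parallel refinements from the list above are omitted for brevity:

```
import itertools

def rdoq(latent, decode, rate, distortion, lam, window=8, passes=3):
    """Naive inference-based RDOQ sketch (names and signatures are illustrative).

    latent:     integer latent array of shape (H, W, C), modified in place
    decode:     decode(patch) -> reconstruction of the region covered by `patch`
    rate:       rate(patch) -> estimated bits to encode `patch`
    distortion: distortion(recon) -> distortion of `recon` against the source
    lam:        Lagrange multiplier of the RD cost D + lam * R
    window:     half size of the local region used to approximate the cost
    passes:     number of sweeps over the latent (3 is usually enough)
    """
    h, w, c = latent.shape

    def local_cost(patch):
        return distortion(decode(patch)) + lam * rate(patch)

    for _ in range(passes):
        for i, j, k in itertools.product(range(h), range(w), range(c)):
            i0, i1 = max(0, i - window), min(h, i + window + 1)
            j0, j1 = max(0, j - window), min(w, j + window + 1)
            patch = latent[i0:i1, j0:j1, :]        # view: edits below are visible here
            orig = int(latent[i, j, k])
            best_v, best = orig, local_cost(patch)
            for dv in (-1, +1):                    # only a limited variation is tested
                latent[i, j, k] = orig + dv
                cost = local_cost(patch)
                if cost < best:
                    best, best_v = cost, orig + dv
            latent[i, j, k] = best_v
    return latent
```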
## 4 Results
### Description
The performance is assessed on the Kodak dataset for 7 bitrates in the range \([0,1.5]\) bpp. This corresponds to the following lambdas used during training: \(\{0.0018,0.0035,0.0067,0.02,0.04,0.08,0.0130\}\).
The test conditions are presented in Table 1. All autoencoders used the CompressAI framework and training conditions, with 200 epochs. The two proposed methods are implemented using the lightweight framework SADL [27] as a pair of standalone encoder/decoder programs in pure C++. It should be pointed out that no additional frameworks are necessary: only the encoder, the decoder and the CDFs for each channel are needed.
The last two anchors are traditional codecs used for reference, using the default all-intra (AI) configuration for testing. Performance is evaluated by computing the BD-rate gains over the base anchor.
### Performance
Figure 1 shows the RD curves of the different methods over the bitrate range. The results using the context-switching entropy model and RDOQ allow recovering the loss
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Name & Activation (enc/dec) & Entropy model & Latent Optimization \\ \hline Balle2018 & GDN/iGDN [4] & CDFs simple [5] & none \\ Base & ReLU/ReLU & CDFs simple [5] & none \\ LatentTune & ReLU/ReLU & CDFs simple [5] & optimized latent [17] \\ FillGap & ReLU/ReLU & Re-parameterized CDF [25] & none \\ LatentTune+FillGap & ReLU/ReLU & Re-parameterized CDF [25] & optimized latent [17] \\ Contexts (ours) & ReLU/ReLU & contexts switching & none \\ Contexts+RDOQ (ours) & ReLU/ReLU & contexts switching & inference based RDOQ \\ HM 16.22 [30] & traditional codec & & \\ VTM 11.0 [31] & traditional codec & & \\ \hline \end{tabular}
\end{table}
Table 1: Tests list.
from the original Balle2018 model (which uses GDN, a float model and a pytorch implementation), yielding a simple model with about -15% gains on top of this reference. Table 2 reports the results in BD-rate, also compared to similar methods using optimized entropy coding or latent optimization.
### Code
All source code, training scripts, results and resulting codecs of the proposed methods are available at [26]. The full codecs are standalone, without any library dependency, including for the model inference. This allows easy integration into existing code. Moreover, it makes complexity comparisons easier with public implementations of traditional codecs, which usually use single-threaded CPU implementations (like those of HM or VTM).
## 5 Conclusion
We have proposed in this paper a practical implementation of a low complexity end-to-end auto-encoder. Even though our distilled decoder has some loss compared to
\begin{table}
\begin{tabular}{|c|c|} \hline Method & BDrate (\%) \\ \hline Balle2018 & -3.15\% \\ FillGap & -6.49\% \\ LatentTune & -10.86\% \\ Contexts & -11.07\% \\ LatentTune+FillGap & -15.09\% \\ Contexts+RDOQ & -18.56\% \\ HM 16.22 & -30.93\% \\ VTM 11.0 & -46.99\% \\ \hline \end{tabular}
\end{table}
Table 2: BDrate gains over base anchor.
Figure 1: RD curves of the different methods.
the original base, the proposed encoder-side optimizations and context-based entropy coding allow saving 19% bitrate compared to the distilled model, and 15.3% compared to the original model. We believe these optimizations are building blocks towards a neural codec that could be deployed.
|
2306.11111 | Local automorphisms of $p$-filiform Leibniz algebras | This paper is devoted to study local automorphisms of $p$-filiform Leibniz
algebras. We prove that $p$-filiform Leibniz algebras as a rule admit local
automorphisms which are not automorphisms. | Bakhtiyor Yusupov | 2023-06-19T18:25:23Z | http://arxiv.org/abs/2306.11111v1 | # Local automorphisms of \(p\)-filiform Leibniz algebras
###### Abstract.
This paper is devoted to studying local automorphisms of \(p\)-filiform Leibniz algebras. We prove that \(p\)-filiform Leibniz algebras, as a rule, admit local automorphisms which are not automorphisms.
_Keywords:_ Leibniz algebra, \(p\)-filiform Leibniz algebras, automorphism, local automorphism.
_AMS Subject Classification:_ 17A36, 17B20, 17B40.
## 1. Introduction
In recent years, non-associative analogues of classical constructions have become of interest in connection with their applications in many branches of mathematics and physics. The notions of local and \(2\)-local derivations (automorphisms) are also popular for some non-associative algebras such as Lie and Leibniz algebras.
R.Kadison [18] introduced the concept of a local derivation and proved that each continuous local derivation from a von Neumann algebra into its dual Banach bimodule is a derivation. B.Johnson [17] extended the above result by proving that every local derivation from a C*-algebra into its Banach bimodule is a derivation. In particular, Johnson gave an automatic continuity result by proving that local derivations of a C*-algebra \(A\) into a Banach \(A\)-bimodule \(X\) are continuous even if not assumed a priori to be so (cf. [17, Theorem 7.5]). Based on these results, many authors have studied local derivations on operator algebras.
A similar notion, which characterizes non-linear generalizations of automorphisms, was introduced by P.Semrl in [22] as \(2\)-local automorphisms. He described such maps on the algebra \(B(H)\) of all bounded linear operators on an infinite dimensional separable Hilbert space \(H\). After P.Semrl's work, numerous new results related to the description of local and \(2\)-local derivation of some varieties have been appeared (see, for example, [9, 14, 23]).
The first results concerning to local and 2-local derivations and automorphisms on finite-dimensional Lie algebras over algebraically closed field of zero characteristic were obtained in [7, 10]. Namely, in [10] it is proved that every 2-local derivation on a semi-simple Lie algebra \(\mathcal{L}\) is a derivation and that each finite-dimensional nilpotent Lie algebra with dimension larger than two admits 2-local derivation which is not a derivation. In [7] the authors have proved that every local derivation on semi-simple Lie algebras is a derivation and gave examples of nilpotent finite-dimensional Lie algebras with local derivations which are not derivations. Sh.Ayupov, K.Kudaybergenov, B.Omirov proved similar results concerning local and 2-local derivations and automorphisms on simple Leibniz algebras in their recent paper [9]. Local automorphisms of certain finite-dimensional simple Lie and Leibniz algebras are investigated in [8]. Concerning local automorphism, T.Becker, J.Escobar, C.Salas and R.Turdibaev in [13] established that the
set of local automorphisms \(LAut(sl_{2})\) coincides with the group \(Aut^{\pm}(sl_{2})\) of all automorphisms and anti-automorphisms. Later, in [15], M.Costantini proved that a linear map on a simple Lie algebra is a local automorphism if and only if it is either an automorphism or an anti-automorphism. Similar results concerning local and 2-local derivations and automorphisms on Lie superalgebras were obtained in [14, 23] and [24]. In [12] local derivations of solvable Leibniz algebras are investigated and it is shown that in the class of solvable Leibniz algebras there exist algebras which admit local derivations which are not derivations, and also algebras for which every local derivation is a derivation. Moreover, it is proved that every local derivation on a finite-dimensional solvable Leibniz algebra with model nilradical and maximal dimension of the complementary space is a derivation. The results of the paper [11] show that p-filiform Leibniz algebras as a rule admit local derivations which are not derivations. In [5] the authors proved a similar result concerning local automorphisms: every local automorphism of the solvable Leibniz algebras with null-filiform and naturally graded non-Lie filiform nilradicals, whose complementary space has maximal dimension, is an automorphism. J.Adashev and B.Yusupov proved similar results concerning local derivations of naturally graded quasi-filiform Leibniz algebras in their recent paper [2]. J.Adashev and B.Yusupov also proved that direct sums of null-filiform nilpotent Leibniz algebras as a rule admit local derivations which are not derivations [3], and that quasi-filiform Leibniz algebras of type I, as a rule, admit local automorphisms which are not automorphisms [4]. The first example of a simple (ternary) algebra with nontrivial local derivations was constructed by B.Ferreira, I.Kaygorodov and K.Kudaybergenov in [16]. After that, in [6] Sh.Ayupov, A.Elduque and K.Kudaybergenov constructed an example of a simple (binary) algebra with non-trivial local derivations.
In the paper [20], I.A.Karimjanov, S.M.Umrzaqov, and B.B.Yusupov describe automorphisms, local and 2-local automorphisms of solvable Leibniz algebras with a model or abelian null-radical. They show that any local automorphisms on solvable Leibniz algebras with a model nilradical, the dimension of the complementary space of which is maximal, is an automorphism. But solvable Leibniz algebras with an abelian nilradical with a \(1\)-dimensional complementary space admit local automorphisms which are not automorphisms.
In the present paper we study automorphisms and local automorphisms of \(p\)-filiform Leibniz algebras. In Section 3 we describe the automorphisms of \(p\)-filiform Leibniz algebras. In Section 4 we describe the local automorphisms of \(p\)-filiform Leibniz algebras and show that \(p\)-filiform Leibniz algebras, as a rule, admit local automorphisms which are not automorphisms.
## 2. Preliminaries
In this section we give some necessary definitions and preliminary results.
**Definition 2.1**.: A vector space with bilinear bracket \((\mathcal{L},[\cdot,\cdot])\) is called a Leibniz algebra if for any \(x,y,z\in\mathcal{L}\) the so-called Leibniz identity
\[\big{[}x,[y,z]\big{]}=\big{[}[x,y],z\big{]}-\big{[}[x,z],y\big{]},\]
holds.
For a Leibniz algebra \(\mathcal{L}\), consider the following lower central and derived sequences:
\[\mathcal{L}^{1}=\mathcal{L},\quad\mathcal{L}^{k+1}=[\mathcal{L}^{k}, \mathcal{L}^{1}],\quad k\geq 1,\]
\[\mathcal{L}^{[1]}=\mathcal{L},\quad\mathcal{L}^{[s+1]}=[\mathcal{L}^{[s]}, \mathcal{L}^{[s]}],\quad s\geq 1.\]
**Definition 2.2**.: A Leibniz algebra \(\mathcal{L}\) is called nilpotent (respectively, solvable), if there exists \(p\in\mathbb{N}\ (q\in\mathbb{N})\) such that \(\mathcal{L}^{p}=0\) (respectively, \(\mathcal{L}^{[q]}=0\)).The minimal number \(p\) (respectively, \(q\)) with such property is said to be the index of nilpotency (respectively, of solvability) of the algebra \(\mathcal{L}\).
Now let us define a natural graduation for a nilpotent Leibniz algebra.
**Definition 2.3**.: Given a nilpotent Leibniz algebra \(\mathcal{L}\), put \(\mathcal{L}_{i}=\mathcal{L}^{i}/\mathcal{L}^{i+1},\ 1\leq i\leq n-1\), and \(gr(\mathcal{L})=\mathcal{L}_{1}\oplus\mathcal{L}_{2}\oplus\cdots\oplus\mathcal{L}_{n-1}\). Then \([\mathcal{L}_{i},\mathcal{L}_{j}]\subseteq\mathcal{L}_{i+j}\) and we obtain the graded algebra \(gr(\mathcal{L})\). If \(gr(\mathcal{L})\) and \(\mathcal{L}\) are isomorphic, then we say that the algebra \(\mathcal{L}\) is naturally graded.
Now we define the notion of characteristic sequence, which is one of the important invariants. For a finite-dimensional nilpotent Leibniz algebra \(N\) and for the matrix of the linear operator \(R_{x}\) denote by \(C(x)\) the descending sequence of its Jordan blocks' dimensions. Consider the lexicographical order on the set \(C(N)=\{C(x)\mid x\in N\}\).
**Definition 2.4**.: The sequence
\[\left(\max_{x\in N\setminus N^{2}}C(x)\right)\]
is said to be the characteristic sequence of the nilpotent Leibniz algebra \(N\).
**Definition 2.5**.: A Leibniz algebra \(\mathcal{L}\) is called \(p\)-filiform, if the characteristic sequence is \(C(\mathcal{L})=(n-p,\underbrace{1,\ldots,1}_{p})\).
Now we give the definitions of automorphisms and local automorphisms.
**Definition 2.6**.: A linear bijective map \(\varphi:\mathcal{L}\to\mathcal{L}\) is called an automorphism, if it satisfies \(\varphi([x,y])=[\varphi(x),\varphi(y)]\) for all \(x,y\in\mathcal{L}\).
**Definition 2.7**.: Let \(\mathcal{L}\) be an algebra. A linear map \(\Delta:\mathcal{L}\to\mathcal{L}\) is called a local automorphism, if for any element \(x\in\mathcal{L}\) there exists an automorphism \(\varphi_{x}:\mathcal{L}\to\mathcal{L}\) such that \(\Delta(x)=\varphi_{x}(x)\).
It was proved in [1, Theorem 2.9] that any naturally graded indecomposable non-Lie \(p\)-filiform Leibniz algebra is isomorphic to one of the following pairwise non-isomorphic algebras \((n-p\geq 4)\):
if \(p=2k\), then
\[\mu_{1}: \left\{\begin{array}{ll}[e_{i},e_{1}]=e_{i+1},&1\leq i\leq n-2k-1,\\ [e_{1},f_{j}]=f_{k+j},&1\leq j\leq k,\end{array}\right.\] \[\mu_{2}: \left\{\begin{array}{ll}[e_{i},e_{1}]=e_{i+1},&1\leq i\leq n-2k -1,\\ [e_{1},f_{1}]=e_{2}+f_{k+1},&\\ [e_{i},f_{1}]=e_{i+1},&2\leq i\leq n-2k-1,\\ [e_{1},f_{j}]=f_{k+j},&2\leq j\leq k,\end{array}\right.\]
if \(p=2k+1\), then
\[\mu_{3}: \left\{\begin{array}{ll}[e_{1},e_{1}]=e_{3},\\ [e_{i},e_{1}]=e_{i+1},&2\leq i\leq n-2k-1,\\ [e_{1},f_{j}]=f_{k+j},&1\leq j\leq k,\\ [e_{2},f_{j}]=f_{k+j},&1\leq j\leq k,\end{array}\right.\]
where \(\{e_{1},e_{2},\ldots,e_{n-p},f_{1},f_{2},\ldots,f_{p}\}\) is the basis of the algebra and the omitted products are equal to zero.
## 3. Automorphisms of naturally graded non-Lie \(p\)-filiform Leibniz algebras
In order to start the description we need to know the automorphisms of naturally graded non-Lie \(p\)-filiform Leibniz algebras.
**Proposition 3.1**.: _Any automorphism of the algebra \(\mu_{1}\) has the following matrix form:_
\[\Phi =\begin{pmatrix}\varphi_{1,1}&\varphi_{1,2}\\ \varphi_{2,1}&\varphi_{2,2}\end{pmatrix},\]
_where_
\[\varphi_{1,1}=\sum_{i=1}^{n-2k}a_{1}^{i}e_{i,i}+\sum_{i=1}^{n-2k-1}a_{1}^{i-1 }\sum_{j=i+1}^{n-2k}a_{j-i+1}e_{j,i},\]
\[\varphi_{2,1}=\sum_{i=1}^{2k}b_{i}e_{i,1}+a_{1}\sum_{i=1}^{k}b_{i}e_{k+i,2}, \quad\varphi_{1,2}=\sum_{i=1}^{k}c_{i}e_{n-2k,i},\]
\[\varphi_{2,2}=\begin{pmatrix}\varphi_{2,2}^{(1)}&0\\ \varphi_{2,2}^{(2)}&a_{1}\varphi_{2,2}^{(1)}\end{pmatrix},\]
\[\varphi_{1,1}\in M_{n-2k,n-2k},\;\varphi_{2,1}\in M_{2k,n-2k},\;\varphi_{1,2}\in M_{n-2k,2k},\;\varphi_{2,2}^{(1)},\varphi_{2,2}^{(2)}\in M_{k,k}.\]
_Let \(\{e_{i,j}:1\leq i,j\leq n\}\) be the system of matrix units, i.e., the \((n\times n)\)-matrix \(e_{i,j}\) is such that the \((i,j)\)th component is equal to 1 and all other components are zeros._
Proof.: Let \(\{e_{1},f_{1},f_{2},\ldots,f_{k}\}\) be a generator basis elements of the algebra \(\mu_{1}\).
We put
\[\varphi(e_{1})=\sum_{i=1}^{n-2k}a_{i}e_{i}+\sum_{i=1}^{2k}b_{i}f_{i},\qquad \varphi(f_{i})=\sum_{j=1}^{n-2k}c_{j,i}e_{j}+\sum_{j=1}^{2k}d_{j,i}f_{j},\quad 1 \leq i\leq k.\]
From the automorphism property (2.6) we have
\[\varphi(e_{2}) =\varphi([e_{1},e_{1}])=[\varphi(e_{1}),\varphi(e_{1})]=\left[ \sum_{i=1}^{n-2k}a_{i}e_{i}+\sum_{i=1}^{2k}b_{i}f_{i},\sum_{i=1}^{n-2k}a_{i}e_ {i}+\sum_{i=1}^{2k}b_{i}f_{i}\right]=\] \[=a_{1}\sum_{i=2}^{n-2k}a_{i-1}e_{i}+a_{1}\sum_{i=1}^{k}b_{i}f_{k+i}.\]
By applying induction and the automorphism property (2.6) we derive
\[\varphi(e_{i})=a_{1}^{i-1}\sum_{t=i}^{n-2k}a_{t-i+1}e_{t},\quad 3\leq i\leq n-2k.\]
Consider
\[0 =\varphi([f_{i},e_{1}])=[\varphi(f_{i}),\varphi(e_{1})]=\left[ \sum_{j=1}^{n-2k}c_{j,i}e_{j}+\sum_{j=1}^{2k}d_{j,i}f_{j},\sum_{j=1}^{n-2k}a_{ j}e_{j}+\sum_{j=1}^{2k}b_{j}f_{j}\right]=\] \[=a_{1}\sum_{j=1}^{n-2k-1}c_{j,i}e_{j+1}+c_{1,i}\sum_{j=1}^{k}b_{ j}f_{j},\quad 1\leq i\leq k.\]
Consequently,
\[a_{1}\neq 0,\ c_{j,i}=0,\quad 1\leq i\leq k,\quad 1\leq j\leq n-2k-1.\]
Similarly, from \(\varphi(f_{k+i})=\varphi([e_{1},f_{i}]),\ 1\leq i\leq k\), we deduce
\[\varphi(f_{k+i})=a_{1}\sum_{j=1}^{k}d_{j,i}f_{k+j},\qquad 1\leq i\leq k.\]
**Proposition 3.2**.: _Any automorphism of the algebra \(\mu_{2}\) has the following matrix form:_
\[\Phi\ \ =\begin{pmatrix}\varphi_{1,1}&\varphi_{1,2}\\ \varphi_{2,1}&\varphi_{2,2}\end{pmatrix},\]
_where_
\[\varphi_{1,1}=\sum_{i=1}^{n-2k}(a_{1}+b_{1})^{i-1}a_{1}e_{i,i}+\sum_{i=1}^{n- 2k-1}\sum_{j=i+1}^{n-2k}(a_{1}+b_{1})^{i-1}a_{j-i+1}e_{j,i},\]
\[\varphi_{2,1}=\sum_{i=1}^{2k}b_{i}e_{i,1}+a_{1}\sum_{i=1}^{k}b_{i}e_{k+i,2},\quad\varphi_{1,2}=\sum_{i=1}^{k}c_{i}e_{n-2k,i},\]
\[\varphi_{2,2}=\begin{pmatrix}\varphi_{2,2}^{(1)}&0\\ \varphi_{2,2}^{(2)}&\varphi_{2,2}^{(3)}\end{pmatrix},\]
\[\varphi_{2,2}^{(1)}=\sum_{i=1}^{k}\sum_{j=2}^{k}d_{j,i}e_{j,i}+(a_{1}+b_{1})e_ {1,1},\quad\varphi_{2,2}^{(3)}=a_{1}\varphi_{2,2}^{(1)}-a_{1}\sum_{j=1}^{k}b_{ j}e_{j,1},\]
\[\varphi_{1,1}\in M_{n-2k,n-2k},\ \varphi_{2,1}\in M_{2k,n-2k},\ \varphi_{1,2}\in M _{n-2k,2k},\ \varphi_{2,2}^{(1)},\varphi_{2,2}^{(2)},\varphi_{2,2}^{(3)},\mathbb{E}\in M_{ k,k}.\]
_Let \(\{e_{i,j}:1\leq i,j\leq n\}\) be the system of matrix units, i.e., the \((n\times n)\)-matrix \(e_{i,j}\) is such that the \((i,j)\)th component is equal to 1 and all other components are zeros._
Proof.: The proof follows by straightforward calculations similarly to the proof of Proposition 3.1.
**Proposition 3.3**.: _Any automorphism of the algebra \(\mu_{3}\) has the following matrix form:_
\[\Phi\ \ =\begin{pmatrix}\varphi_{1,1}&\varphi_{1,2}\\ \varphi_{2,1}&\varphi_{2,2}\end{pmatrix},\]
_where_
\[\varphi_{1,1}= a_{1}e_{1,1}+\sum_{i=2}^{n-2k}a_{1}^{i-2}(a_{1}+a_{2})e_{i,i}+\sum_ {i=2}^{n-2k}a_{i}e_{i,1}+\sum_{i=3}^{n-2k-1}a_{i}e_{i,2}+\] \[+\beta e_{n-2k,2}+\sum_{i=3}^{n-2k-1}\sum_{j=i+1}^{n-2k}a_{1}^{j-3 }a_{j-i+2}e_{j,i},\] \[\varphi_{2,1}= \sum_{i=1}^{2k}b_{i,1}e_{i,1}+\sum_{i=1}^{k}b_{i,2}e_{k+i,2}+(a_{1 }+a_{2})\sum_{i=1}^{k}b_{i,1}e_{k+i,3},\quad\varphi_{1,2}=\sum_{i=1}^{k}c_{i}e _{n-2k,i},\] \[\varphi_{2,2}= \begin{pmatrix}\varphi_{2,2}^{(1)}&0\\ \varphi_{2,2}^{(2)}&(a_{1}+a_{2})\varphi_{2,2}^{(1)}\end{pmatrix}\]
\(\varphi_{1,1}\in M_{n-2k,n-2k},\;\varphi_{2,1}\in M_{2k,n-2k},\;\varphi_{1,2} \in M_{n-2k,2k},\;\varphi_{2,2}^{(1)},\varphi_{2,2}^{(2)}\in M_{k,k}.\)
_Let \(\{e_{i,j}:1\leq i,j\leq n\}\) be the system of matrix units, i.e., the \((n\times n)\)-matrix \(e_{i,j}\) is such that the \((i,j)\)th component is equal to 1 and all other components are zeros._
Proof.: The proof follows by straightforward calculations similarly to the proof of Proposition 3.1.
**Remark 3.4**.: The dimensions of the space of automorphisms of the algebras \(\mu_{1},\mu_{2}\) and \(\mu_{3}\) are
\[\dim Aut(\mu_{1})= n+2k^{2}+k,\] \[\dim Aut(\mu_{2})= n+2k^{2}+1,\] \[\dim Aut(\mu_{3})= n+2k^{2}+2k+1,\]
where \(k\in\mathbb{N}\) and \(n\geq 2k+4\).
## 4. Local automorphisms of naturally graded non-Lie \(p\)-filiform Leibniz algebras
In the following theorem we give the description of local automorphisms of the algebra \(\mu_{1}\).
**Theorem 4.1**.: _Let \(\Delta\) be a linear operator on \(\mu_{1}\). Then \(\Delta\) is a local automorphism if and only if its matrix has the form:_
\[\Delta=\left(\begin{array}{cc}\Delta_{1,1}&\Delta_{1,2}\\ \Delta_{2,1}&\Delta_{2,2}\end{array}\right), \tag{1}\]
_where_
\[\Delta_{1,1}=\sum_{j=1}^{n-2k}\sum_{i=j}^{n-2k}\gamma_{i,j}e_{i,j},\quad\Delta_{2,1}=\sum_{i=n-2k+1}^{n}\gamma_{i,1}e_{i,1}+\sum_{i=n-k+1}^{n}\gamma_{i,2}e_{i,2},\]
\[\Delta_{1,2}=\sum_{i=n-2k+1}^{n-k}\gamma_{n-2k,i}e_{n-2k,i},\]
\[\Delta_{2,2}=\begin{pmatrix}\Delta_{2,2}^{(1)}&0\\ \Delta_{2,2}^{(2)}&\Delta_{2,2}^{(3)}\end{pmatrix},\]
\[\Delta_{2,2}^{(1)}=\sum_{i=n-2k+1}^{n-k}\sum_{j=n-2k+1}^{n-k}\gamma_{i,j}e_{i,j},\quad\Delta_{2,2}^{(2)}=\sum_{i=n-k+1}^{n}\sum_{j=n-2k+1}^{n-k}\gamma_{i,j}e_{i,j},\]
\[\Delta_{2,2}^{(3)}=\sum_{i=n-k+1}^{n}\sum_{j=n-k+1}^{n}\gamma_{i,j}e_{i,j}.\]
Proof.: \((\Rightarrow)\) Assume that \(\Delta\) is a local automorphism of \(\mu_{1}:\)
\[\Delta=\left(\begin{array}{cc}\Delta_{1,1}&\Delta_{1,2}\\ \Delta_{2,1}&\Delta_{2,2}\end{array}\right),\]
where
\[\Delta_{1,1}=\sum_{j=1}^{n-2k}\sum_{i=1}^{n-2k}\gamma_{i,j}e_{i,j},\quad\Delta_{2,1}=\sum_{i=n-2k+1}^{n}\sum_{j=1}^{n-2k}\gamma_{i,j}e_{i,j},\] \[\Delta_{1,2}=\sum_{j=n-2k+1}^{n}\sum_{i=1}^{n-2k}\gamma_{i,j}e_{i,j},\quad\Delta_{2,2}=\sum_{i=n-2k+1}^{n}\sum_{j=n-2k+1}^{n}\gamma_{i,j}e_{i,j}.\]
Take an automorphism \(\varphi_{e_{2}}\) such that \(\Delta(e_{2})=\varphi_{e_{2}}(e_{2}).\) Then
\[\Delta(e_{2})= \sum_{j=1}^{n-2k}\gamma_{j,2}e_{2}+\sum_{j=n-2k+1}^{n}\gamma_{j,2 }f_{j-n+2k},\] \[\varphi_{e_{2}}(e_{2})= a_{1}\sum_{i=2}^{n-2k}a_{i-1}e_{i}+a_{1}\sum_{i=1}^{k}b_{i}f_{k+i}.\]
Comparing the coefficients, we conclude that \(\gamma_{1,2}=\gamma_{n-2k+i,2}=0\) for \(1\leq i\leq k\).
We take an automorphism \(\varphi_{e_{i}}\) such that \(\Delta(e_{i})=\varphi_{e_{i}}(e_{i}),\) where \(3\leq i\leq n-2k.\) Then
\[\Delta(e_{i})= \sum_{j=1}^{n-2k}\gamma_{j,i}e_{j}+\sum_{j=n-2k+1}^{n}\gamma_{j,i }f_{j-n+2k},\] \[\varphi_{e_{i}}(e_{i})= a_{1}^{i-1}\sum_{j=i}^{n-2k}a_{j-i+1}e_{j}.\]
Comparing the coefficients at the basis elements for \(\Delta(e_{i})\) and \(\varphi_{e_{i}}(e_{i})\), we obtain the identities
\[\gamma_{t,j}=\gamma_{n-2k+i,j}=0,\quad 3\leq j\leq n-2k,\ 1\leq i\leq 2k,\ 1\leq t \leq n-2k-1.\]
We take an automorphism \(\varphi_{f_{i}}\) such that \(\Delta(f_{i})=\varphi_{f_{i}}(f_{i}),\) where \(1\leq i\leq k\). Then
\[\Delta(f_{i})= \sum_{j=1}^{n-2k}\gamma_{j,n-2k+i}e_{j}+\sum_{j=n-2k+1}^{n}\gamma _{j,n-2k+i}f_{j-n+2k},\] \[\varphi_{f_{i}}(f_{i})= c_{n-2k,i}e_{n-2k}+\sum_{j=1}^{2k}d_{j,i}f_{j}.\]
Comparing the coefficients at the basis elements for \(\Delta(f_{i})\) and \(\varphi_{f_{i}}(f_{i})\), we obtain
\[\gamma_{j,i}=\gamma_{j,n-2k+i}=0,\quad 1\leq j\leq n-2k-1,\ 1\leq i\leq k-1.\]
Now, take an automorphism \(\varphi_{f_{i}}\) such that \(\Delta(f_{i})=\varphi_{f_{i}}(f_{i})\), where \(k+1\leq i\leq 2k\). Then
\[\Delta(f_{i})= \sum_{j=1}^{n-2k}\gamma_{j,n-2k+i}e_{j}+\sum_{j=n-2k+1}^{n}\gamma_ {j,n-2k+i}f_{j-n+2k},\] \[\varphi_{f_{i}}(f_{i})= a_{1}\sum_{j=1}^{k}d_{j,i-k}f_{k+j},\]
which implies
\[\gamma_{j,i}=0,\quad 1\leq j\leq n-k,\;n-k+1\leq i\leq n.\]
\((\Leftarrow)\) Assume that the operator \(\Delta\) has the form (1). For an arbitrary element
\[x=\sum_{i=1}^{n-2k}\xi_{i}e_{i}+\sum_{i=1}^{2k}\zeta_{i}f_{i},\]
we have
\[\varphi(x)_{e_{1}}= a_{1}\xi_{1},\] \[\varphi(x)_{e_{i}}= a_{i}\xi_{1}+\sum_{j=1}^{i-2}a_{1}^{j}a_{i-j}\xi_{j+1}+a_{1}^{i} \xi_{i},\quad 2\leq i\leq n-2k-1,\] \[\varphi(x)_{e_{n-2k}}= a_{n-2k}\xi_{1}+\sum_{j=1}^{n-2k-2}a_{n-2k-j}\xi_{j+1}+(n-2k)a_{1} \xi_{n-2k}+\sum_{j=1}^{k}c_{j,n-2k}\zeta_{j},\] \[\varphi(x)_{f_{i}}= b_{i}\xi_{1}+\sum_{j=1}^{k}d_{j,i}\zeta_{j},\quad\;\;1\leq i \leq k,\] \[\varphi(x)_{f_{i}}= b_{i}\xi_{1}+b_{i-k}\xi_{2}+\sum_{j=1}^{k}d_{j,i}\zeta_{j}+\] \[+a_{1}\sum_{j=1}^{k}d_{j,i-k}\zeta_{k+j},\;\;k+1\leq i\leq 2k.\]
The coordinates of \(\Delta(x)\) are
\[\Delta(x)_{e_{1}}= \gamma_{11}\xi_{1}, \tag{2}\] \[\Delta(x)_{e_{i}}= \sum\limits_{j=1}^{i}\gamma_{i,j}\xi_{j},\quad 2\leq i\leq n-2k-1,\] \[\Delta(x)_{e_{n-2k}}= \sum\limits_{j=1}^{n-2k}\gamma_{n-2k,j}\xi_{j}+\sum\limits_{j=1}^{k }\gamma_{n-2k,n-2k+j}\zeta_{j},\] \[\Delta(x)_{f_{i}}= \gamma_{n-2k+i,1}\xi_{1}+\sum\limits_{j=1}^{k}\gamma_{n-2k+i,n-2k+ j}\zeta_{j},\quad\ \ 1\leq i\leq k,\] \[\Delta(x)_{f_{i}}= \gamma_{n-2k+i,1}\xi_{1}+\gamma_{n-2k+i,2}\xi_{2}+\sum\limits_{j= 1}^{2k}\gamma_{n-2k+i,n-2k+j}\zeta_{j},\ k+1\leq i\leq 2k.\]
Comparing the coordinates of \(\Delta(x)\) and \(\varphi(x),\) we obtain
\[\left(\begin{array}{ll}a_{1}\xi_{1}&=\gamma_{11}\xi_{1}\\ a_{i}\xi_{1}+\sum\limits_{j=1}^{i-2}a_{1}^{j}a_{i-j}\xi_{j+1}+a_{1}^{i}\xi_{i}& =\sum\limits_{j=1}^{i}\gamma_{i,j}\xi_{j},\ \ 2\leq i\leq n-2k-1,\\ a_{n-2k}\xi_{1}+\sum\limits_{j=1}^{n-2k-2}a_{n-2k-j}\xi_{j+1}+(n-2k)a_{1}\xi_{ n-2k}+&\\ +\sum\limits_{j=1}^{k}c_{j,n-2k}\zeta_{j}&=\sum\limits_{j=1}^{n-2k}\gamma_{n-2 k,j}\xi_{j}+\sum\limits_{j=1}^{k}\gamma_{n-2k,n-2k+j}\zeta_{j},\\ b_{i}\xi_{1}+\sum\limits_{j=1}^{k}d_{j,i}\zeta_{j}&=\gamma_{n-2k+i,1}\xi_{1}+\\ &+\sum\limits_{j=1}^{k}\gamma_{n-2k+i,n-2k+j}\zeta_{j},\ \ 1\leq i\leq k,\\ b_{i}\xi_{1}+b_{i-k}\xi_{2}+\sum\limits_{j=1}^{k}d_{j,i}\zeta_{j}+&\\ +a_{1}\sum\limits_{j=1}^{k}d_{j,i-k}\zeta_{k+j}&=\gamma_{n-2k+i,1}\xi_{1}+ \gamma_{n-2k+i,2}\xi_{2}+\\ &+\sum\limits_{j=1}^{2k}\gamma_{n-2k+i,n-2k+j}\zeta_{j},\ \ k+1\leq i\leq 2k.\end{array}\right.\]
We show the solvability of this system of equations with respect to \(a_{i},b_{i},c_{i}\) and \(d_{i,j}\). For this purpose we consider the following possible cases.
**Case 1.** Let \(\xi_{1}\neq 0\), then putting \(c_{i}=d_{i,j}=0,\ 1\leq i\leq k,\ 1\leq j\leq k\) from (2) we uniquely determine \(a_{1},a_{2},\ldots,a_{n-2k},b_{1},b_{2},\ldots,b_{2k}\).
**Case 2.** Let \(\xi_{1}=0\) and \(\xi_{2}\neq 0\), then putting \(c_{i}=d_{i,j}=0,\ 1\leq i\leq k,\ 1\leq j\leq k\) we uniquely determine remaining unknowns \(a_{1},a_{2},\ldots,a_{n-2k},b_{1},b_{2},\ldots,b_{k}\).
**Case 3.** Let \(\xi_{1}=\xi_{2}=\cdots=\xi_{r-1}=0\) and \(\xi_{r}\neq 0,\ 3\leq r\leq n-2k\). Then putting
\[a_{2}=\ldots=a_{n-2k-m}=b_{1}=\ldots=c_{1}=\ldots=d_{t,j}=0,\ 1\leq t\leq k,\ 1\leq j\leq k.\]
we determine unknowns \(a_{1},\ a_{n-2k-m+1},\ i\leq m\leq n-2k\).
**Case 4.** Let \(\xi_{1}=\ldots=\xi_{n-2k}=\zeta_{1}=\ldots=\zeta_{r-1}=0\) and \(\zeta_{r}\neq 0,\ 1\leq r\leq k\). Then setting
\[a_{2}=\ldots=b_{1}=\ldots=0,\ c_{i}=0,\ i\neq r,\ d_{j,i}=0,\ j\neq r,\]
we determine \(a_{1},\ c_{r},d_{r,i},\ 1\leq i\leq k.\)
**Case 5.** Let \(\xi_{1}=\ldots=\xi_{n-2k}=\zeta_{1}=\ldots=\zeta_{k+r-1}=0\) and \(\zeta_{k+r}\neq 0,\ 1\leq r\leq k.\) Then setting
\[a_{2}=\ldots=b_{1}=\ldots=c_{1}=\ldots=0,\ d_{j,i}=0,\ r\neq k+r,\]
we obtain that the unknowns \(a_{1},\ d_{k+r,i},k+1\leq i\leq 2k,\) are uniquely determined from (2).
In the following theorems we obtain the descriptions of local automorphisms of the algebras \(\mu_{2}\) and \(\mu_{3}\).
**Theorem 4.2**.: _Let \(\Delta\) be a linear operator on \(\mu_{2}\). Then \(\Delta\) is a local automorphism if and only if its matrix has the form:_
\[\Delta=\left(\begin{array}{cc}\Delta_{1,1}&\Delta_{1,2}\\ \Delta_{2,1}&\Delta_{2,2}\end{array}\right),\]
_where_
\[\Delta_{1,1}=\sum_{j=1}^{n-2k}\sum_{i=j}^{n-2k}\alpha_{j,i}e_{j,i},\quad\Delta _{2,1}=\sum_{i=1}^{2k}\beta_{n-2k+i,1}e_{n-2k+i,1}+\sum_{i=1}^{k}b_{n-2k+i,2}e _{n-2k+i,2},\]
\[\Delta_{1,2}=\sum_{i=1}^{k}\gamma_{n-2k,n-2k+i}e_{n-2k,n-2k+i},\]
\[\Delta_{2,2}=\begin{pmatrix}\Delta_{2,2}^{(1)}&0\\ \Delta_{2,2}^{(2)}&\Delta_{2,2}^{(3)}\end{pmatrix},\]
\[\Delta_{2,2}^{(1)}=\delta_{n-2k+1,n-2k+1}e_{n-2k+1,n-2k+1}+\sum_{i=n-2k+1}^{n- k}\sum_{j=n-2k+2}^{n-k}\delta_{j,i}e_{j,i},\]
\[\Delta_{2,2}^{(2)}=\sum_{i=n-2k+1}^{n-k}\sum_{j=n-k+1}^{n}\delta_{j,i}e_{j,i},\]
\[\Delta_{2,2}^{(3)}=\delta_{n-k+1,n-k+1}e_{n-k+1,n-k+1}+\sum_{i=n-k+1}^{n}\sum_ {j=n-k+2}^{n}\delta_{j,i}e_{j,i}.\]
Proof.: The proof is similar to the proof of Theorem 4.1
**Theorem 4.3**.: _Let \(\Delta\) be a linear operator on \(\mu_{3}\). Then \(\Delta\) is a local automorphism if and only if its matrix has the form:_
\[\Delta=\left(\begin{array}{cc}\Delta_{1,1}&\Delta_{1,2}\\ \Delta_{2,1}&\Delta_{2,2}\end{array}\right),\]
_where_
\[\Delta_{1,1}=\sum_{j=1}^{n-2k}\sum_{i=j}^{n-2k}\alpha_{j,i}e_{j,i},\quad\alpha _{i,1}=\alpha_{i,2},\ \alpha_{2,2}=\alpha_{1,1}+\alpha_{2,1},\ 3\leq i\leq n-2k-1\]
\[\Delta_{2,1}=\sum_{i=1}^{2k}\beta_{n-2k+i,1}e_{n-2k+i,1}+\sum_{i=1}^{k}b_{n-2 k+i,2}e_{n-2k+i,2}+\sum_{i=1}^{k}b_{n-2k+i,3}e_{n-2k+i,3},\]
\[\Delta_{1,2}=\sum\limits_{i=1}^{k}\gamma_{n-2k,n-2k+i}e_{n-2k,n-2k+i},\]
\[\Delta_{2,2}=\begin{pmatrix}\Delta_{2,2}^{(1)}&0\\ \Delta_{2,2}^{(2)}&\Delta_{2,2}^{(3)}\end{pmatrix},\]
\[\Delta_{2,2}^{(1)}=\sum\limits_{i=n-2k+1}^{n-k}\sum\limits_{j=n-2k+1}^{n-k}\delta_{j,i}e_{j,i},\quad\Delta_{2,2}^{(2)}=\sum\limits_{i=n-2k+1}^{n-k}\sum\limits_{j=n-k+1}^{n}\delta_{j,i}e_{j,i},\]
\[\Delta_{2,2}^{(3)}=\sum\limits_{i=n-k+1}^{n}\sum\limits_{j=n-k+1}^{n}\delta_{j,i}e_{j,i}\]
Proof.: The proof is similar to the proof of Theorem 4.1
**Example 4.4**.: The Leibniz algebra \(\mu_{1}\) (see Proposition 3.1) admits local automorphisms which are not automorphisms.
Proof.: Let us consider the linear operator \(\Phi\) on \(\mu_{1},\) such that
\[\Phi\left(x\right)=x+x_{2}e_{n-2k},\ \ x=\sum\limits_{i=1}^{n-2k}x_{i}e_{i}+ \sum\limits_{i=1}^{2k}x_{i}f_{i}\]
By Proposition 3.1, it is not difficult to see that \(\Phi\) is not an automorphism. We show that \(\Phi\) is a local automorphism on \(\mu_{1}.\)
Consider the automorphisms \(\varphi_{1}\) and \(\varphi_{2}\) on the algebra \(\mu_{1},\) defined as:
\[\varphi_{1}\left(x\right)=x+x_{1}e_{n-2k-1}+x_{2}e_{n-2k},\]
\[\varphi_{2}\left(x\right)=x+\beta x_{1}e_{n-2k},\ \ x=\sum\limits_{i=1}^{n-2k}x_{i}e_{i}+ \sum\limits_{i=1}^{2k}x_{i}f_{i}.\]
Now, for any \(x=\sum\limits_{i=1}^{n-2k}x_{i}e_{i}+\sum\limits_{i=1}^{2k}x_{i}f_{i},\) we shall find an automorphism \(\varphi,\) such that \(\Phi(x)=\varphi(x).\)
If \(x_{1}=0,\) then
\[\varphi_{1}(x)=x+x_{2}e_{n-2k}=\Phi(x).\]
If \(x_{1}\neq 0,\) then set \(\beta=\frac{x_{2}}{x_{1}},\) we obtain that
\[\varphi_{2}(x)=x+\beta x_{1}e_{n-2k}=x+\frac{x_{2}}{x_{1}}x_{1}e_{n-2k}=x+x_{2}e_{n-2k}=\Phi(x)\]
Hence, \(\Phi\) is a local automorphism.
**Remark 4.5**.: The dimensions of the spaces of local automorphisms of the algebras \(\mu_{1},\mu_{2}\) and \(\mu_{3}\) are
\[\dim LocAut(\mu_{1})= \frac{n^{2}+10k^{2}-4kn+n+6k}{2},\] \[\dim LocAut(\mu_{2})= \frac{n^{2}+10k^{2}-4kn+n+2k+4}{2},\] \[\dim LocAut(\mu_{3})= \frac{n^{2}+10k^{2}-4kn-n+12k+4}{2},\]
where \(k\in\mathbb{N}\) and \(n\geq 2k+4.\)
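For illustration (this worked check is ours, using the smallest admissible case \(k=2\), \(n=8\)), the formulas of Remarks 3.4 and 4.5 give

\[\dim Aut(\mu_{1})=8+2\cdot 2^{2}+2=18,\qquad\dim LocAut(\mu_{1})=\frac{8^{2}+10\cdot 2^{2}-4\cdot 2\cdot 8+8+6\cdot 2}{2}=\frac{60}{2}=30,\]

so already in this smallest case the space of local automorphisms of \(\mu_{1}\) is strictly larger than its automorphism space.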
Remarks 3.4 and 4.5 show that the dimensions of the spaces of all local automorphisms of the algebras \(\mu_{i},\)\(i=1,2,3,\) are strictly greater than the dimensions of the spaces of all automorphisms of \(\mu_{i}.\) Therefore, we have the following result.
**Corollary 4.6**.: _The algebras \(\mu_{1},\mu_{2}\) and \(\mu_{3}\) admit local automorphisms which are not automorphisms._
|
2307.08769 | The impact of electric currents on Majorana dark matter at freeze out | Thermal relics with masses in the GeV to TeV range remain possible candidates
for the Universe's dark matter (DM). These neutral particles are often assumed
to have vanishing electric and magnetic dipole moments so that they do not
interact with single real photons, but the anapole moment can still be nonzero,
permitting interactions with single virtual photons. This anapole moment allows
for p-wave annihilation of DM into standard model particles, and the DM
interacts with external electric currents via the anapole moment. Moving beyond
their static electromagnetic properties, these particles generically have
non-zero polarizabilities which mediate interactions with two real photons; in
particular, spin-dependent polarizibilities admit s-wave annihilation of the DM
into two photons. When the Universe cools from a temperature on the order of
the DM mass to freeze out, the DM is in thermal equilibrium with the background
plasma of particles, but the comoving DM density decreases due to annihilation.
If a collection of initially unpolarized DM particles were subjected to an
electric current, then the DM medium would become partially polarized,
according to the Boltzmann distribution, with a slight excess of anapole
moments aligned with the current, relative to those anti-aligned. For this
region of partially polarized DM particles, the s-wave annihilation mode
becomes partially suppressed because it requires a state of vanishing angular
momentum. As a consequence, the decreased DM annihilation rate in this region
will result in an excess of DM density, relative to an unpolarized region, as
DM drops out of thermal equilibrium. We explored this relative change of DM
density for DM that is subjected to an electric current through freeze out. | Lukas Karoly, David C. Latimer | 2023-07-17T18:29:21Z | http://arxiv.org/abs/2307.08769v1 | # The impact of electric currents on Majorana dark matter at freeze out
###### Abstract
Thermal relics with masses in the GeV to TeV range remain possible candidates for the Universe's dark matter (DM). These neutral particles are often assumed to have vanishing electric and magnetic dipole moments so that they do not interact with single real photons, but the anapole moment can still be nonzero, permitting interactions with single virtual photons. This anapole moment allows for p-wave annihilation of DM into standard model particles, and the DM interacts with external electric currents via the anapole moment. Moving beyond their static electromagnetic properties, these particles generically have non-zero polarizabilities which mediate interactions with two real photons; in particular, spin-dependent polarizibilities admit s-wave annihilation of the DM into two photons. When the Universe cools from a temperature on the order of the DM mass to freeze out, the DM is in thermal equilibrium with the background plasma of particles, but the comoving DM density decreases due to annihilation. If a collection of initially unpolarized DM particles were subjected to an electric current, then the DM medium would become partially polarized, according to the Boltzmann distribution, with a slight excess of anapole moments aligned with the current, relative to those anti-aligned. For this region of partially polarized DM particles, the s-wave annihilation mode becomes partially suppressed because it requires a state of vanishing angular momentum. As a consequence, the decreased DM annihilation rate in this region will result in an excess of DM density, relative to an unpolarized region, as DM drops out of thermal equilibrium. We explored this relative change of DM density for DM that is subjected to an electric current through freeze out.
## I Introduction
A concordance of observations points to a universe whose matter content is overwhelmingly comprised of some new type of particles, outside the standard model [1]. The constraints on this new matter are few: it must be non-relativistic, stable, and relatively weakly interacting. A recent history of dark matter, including a discussion of various dark matter (DM) models, can be found in Ref. [2]. Early on, one class of DM models, weakly interacting massive particles (WIMPs), was theoretically well motivated, in part because of its potential tie-in with supersymmetry [3]. Additional motivation for WIMPs stemmed from a coincidence often called the "WIMP miracle." If DM were a thermal relic, then weak-scale masses and annihilation cross sections for the DM candidate naturally result in the correct relic density of DM observed today. This coincidence is compelling, but we note that a much wider class of WIMPless models can satisfy the same relic density constraint, e.g. Ref. [4].
Despite a multimodal approach to DM detection, no definitive DM signal yet exists, though some observations show tantalizing hints. Perhaps because of theoretical prejudice, many direct DM detection experiments focus upon the WIMP parameter space, broadly construed to include DM masses between the GeV scale up to a few TeV. Aside from a few exceptions [5; 6; 7; 8; 9], decades of direct detection experiments have found no evidence for DM, and as a result, strict limits on the DM-nucleus interaction cross section [10; 11; 12] have ruled out many WIMP DM models. In addition to direct detection experiments, particle colliders also place stringent constraints on DM models. In particular, the LHC has produced no particles outside the standard model, making tenuous the notion that DM is comprised by supersymmetry's neutralino [13]. One other tack to assess the presence of dark matter is via indirect detection experiments in which telescopes search for high energy cosmic rays or photons. If signals cannot be attributable to standard astrophysical sources, then they may be due to DM annihilation. In some instances, this results on constraints on the DM annihilation cross section, as with the observations of dwarf spheroidal galaxies from the Fermi Large Area Telescope (Fermi-LAT) for DM masses below 100 GeV [14], while Fermi-LAT observations of the Galactic Center hint at the possibility of a DM annihilation [15]. Additionally, observations of antiprotons in the AMS-02 detector [16] could also signal DM annihilation for a DM mass around 80 GeV [17].
Because of the severe constraints imposed by direct detection experiments and particle colliders, axion dark matter models are, perhaps, eclipsing WIMP models in terms of their favorability [18], but the WIMP paradigm is not entirely dead because there is still viable parameter space remaining [19; 20; 21]. With a narrowing parameter space, present-day modelers are opting to explore the WIMP paradigm with either simplified models or from the perspective of effective field theory (EFT) in which the modelers remain agnostic to a particular UV completion of the theory [22].
In an EFT analysis of DM, one couples DM directly to standard model (SM) particles at low energies via, often, dimensionful effective couplings that depend on a high-energy scale \(\Lambda\). As long as interaction energies are well below \(\Lambda\), the effective interactions faithfully capture the relevant physics. For neutral dark matter, the leading order electromagnetic interactions in an EFT occur through their static electromagnetic properties. Electric and magnetic dipole moments proceed through mass dimension-5 operators, and several DM modelers have explored the possibility that DM predominantly interacts through such moments [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. If the DM candidate is a Majorana fermion, then both the electric and magnetic dipole moments must vanish identically, and its sole static electromagnetic property is its anapole moment [40; 41]. Anapole interactions of DM have been studied in Refs. [21; 25; 30; 35; 36; 37; 38; 39; 42; 43; 44; 45; 46]. Because the anapole interaction arises from a dimension-6 operator, the anapole moment is suppressed by \(\Lambda^{2}\), which further suppresses these leading order interactions for Majorana fermion DM. Additionally, one can choose model parameters so that annihilation proceeds primarily through p-wave modes [42], and because its annihilation is velocity suppressed, it can more easily evade indirect detection constraints [21].
Moving beyond the static electromagnetic properties of fermions, higher order electromagnetic interactions with Majorana fermions are mediated by operators that are dimension-7 and beyond [22; 25; 34]. In particular, a fermion can interact with two real photons via its polarizabilities. There are six two-photon interactions, two spin independent and four spin dependent, that are separately invariant under charge conjugation (\(\mathcal{C}\)), parity (\(\mathcal{P}\)), and time reversal (\(\mathcal{T}\)) [47], and there are an additional ten polarizability terms that are either \(\mathcal{C}\)-odd and \(\mathcal{T}\)-odd or \(\mathcal{C}\)-odd and \(\mathcal{P}\)-odd [48]. The self-conjugate nature of the Majorana fermion does limit its interaction with two real photons somewhat, requiring the four \(\mathcal{C}\)-odd and \(\mathcal{P}\)-odd polarizabilities to vanish, but that still leaves a dozen modes unconstrained [49].
In this paper, we will focus upon Majorana fermion DM with a non-zero anapole moment and non-zero polarizabilities, but we will restrict our considerations to the six polarizabilities that separately preserve \(\mathcal{C}\), \(\mathcal{P}\), and \(\mathcal{T}\) symmetries. Of these six, the two spin-independent polarizabilities arise from a dimension-7 interaction, \(\mathcal{L}_{\text{SI pol}}\sim\frac{1}{\Lambda^{3}}\chi^{\dagger}\chi F^{\mu \nu}F_{\mu\nu}\), that gives rise to the low-energy interaction Hamiltonian, expressed in terms of the electric and magnetic fields, \(H_{\text{SI pol}}\sim\alpha_{E}E^{2}+\beta_{M}B^{2}\). The spin-independent electric and magnetic polarizabilities, \(\alpha_{E}\) and \(\beta_{M}\), are order \(\mathcal{O}\left(\frac{1}{\Lambda^{3}}\right)\). The four spin-dependent polarizabilities arise from interactions that depend upon _derivatives_ of the electric and magnetic fields. As a consequence, the four spin-dependent polarizabilities, \(\gamma_{j}\), are nominally \(\mathcal{O}\left(\frac{1}{\Lambda^{3}}\right)\), though their precise mass dependence depends on the particular UV completion of the theory [49]. It is these spin-dependent polarizabilities that allow for s-wave annihilation of two Majorana fermions into two photons. Nominally, this s-wave mode is suppressed relative to p-wave annihilation, but depending upon the particular UV completion, s-wave annihilation can be comparable to the p-wave mode [50].
The coupling between an anapole moment and a real photon vanishes, so the leading order electromagnetic interaction for a Majorana fermion is via the exchange of a virtual photon with a charged particle. Thus, at low energies, Majorana fermions do not couple to electric or magnetic fields; they only couple to electric currents. If a spin-\(\frac{1}{2}\) Majorana fermion is immersed in a persistent electric current, there is an energy difference between the two spin states of the fermion, with the lower energy state corresponding to the one in which the anapole moment is aligned with the current. In the presence of a background current, a collection of Majorana fermions can undergo some level of polarization, at least in principle.
This polarization can only be achieved if there are mechanisms that allow the Majorana fermion to change spin states. For particles that are not in thermal equilibrium, such as DM after freeze out, only irreversible processes can allow the higher-energy anti-aligned anapole moments to flip spin to the lower energy aligned state. Spontaneous two-photon emission or single-photon emission via virtual Compton scattering are two such irreversible mechanisms; however, because the photon coupling occurs through the polarizabilities, the rates of these irreversible processes are extremely small [51]. However, before freeze out when the DM is in equilibrium with the thermal bath, _reversible_ spin-flip processes can lead to a partially polarized DM medium in the presence of a background current, assuming the spin-flip interactions happen at a sufficient rate. The Boltzmann distribution would guarantee a slight excess of lower-energy states with spins aligned with the current.
Our focus in this paper will be on DM in the early Universe, specifically when the DM is in thermal equilibrium with the Universe from a temperature around the DM mass, \(T\sim m_{\chi}\), until freeze out. As the Universe cools and expands, whenever its temperature is around the DM mass, most SM particles no longer have sufficient energy to produce DM via annihilation. As a consequence, the comoving DM density decreases as it continues to annihilate into SM particles. This decrease in comoving DM density continues until DM drops out of thermal equilibrium with the rest of the Universe at freeze out. At this point, because DM annihilations become so rare, the comoving DM density becomes constant, yielding the relic density present today. This relic density is largely determined by the DM annihilation cross section; a larger cross section results in a smaller relic density, and vice versa.
In this paper, we explore the consequence that a persistent local background current can have on DM in this time before freeze out. If the DM polarization induced by the current is sufficiently large, then the s-wave mode of DM annihilation can be suppressed because the s-wave mode requires the annihilating particles to have opposite spin states. Overall, the DM annihilation rate will be somewhat smaller in the presence of the current than otherwise, which would result in a local relic density that is higher than regions without a background current.
## II EFT Interactions
Anapole interactions arise from an effective Lagrangian term \(\mathcal{L}_{\rm ana}=\frac{1}{2}\frac{g}{\Lambda^{2}}\bar{\chi}\gamma^{\mu}\gamma^{5}\chi\,\partial^{\nu}F_{\mu\nu}\)[40; 41; 52]. The anapole moment is the dimension-2 coefficient \(f_{a}=\frac{g}{\Lambda^{2}}\) that results from a UV complete theory in which the neutral fermion effectively couples to the photon field through, at least, a one-loop process. In the UV-complete Lagrangian, there must be a parity-violating [52] trilinear term that couples the Majorana fermion to a charged fermion and (vector or scalar) boson. In terms of the mass scale \(\Lambda\) for the anapole moment, it is set by the dominant mass of the charged particle to which the Majorana fermion couples. Examples of how this coefficient depends on the underlying UV physics can be found in Refs. [49; 53; 54].
At tree-level, Majorana fermions scatter charged particles via the exchange of a virtual photon. At low energies, the resulting interaction Hamiltonian is \(H_{\rm ana}=-f_{a}\mathbf{\sigma}\cdot\mathbf{J}\), where \(\mathbf{\sigma}\) are the Pauli spin matrices and \(\mathbf{J}\) is the current density associated with the charged particle [40; 41; 52]. Given this, a background current establishes an energy difference \(\mathcal{E}=2f_{a}J\) between the Majorana fermion states aligned and antialigned with the current.
Similarly, at tree level, Majorana fermion annihilation into a charged SM particle-antiparticle pair proceeds through the coupling of the anapole moment to a virtual photon. This p-wave cross section has been computed previously in Refs. [42; 43], and we quote the results here. The thermally averaged cross section is
\[\langle\sigma_{p}|v|\rangle=16\alpha N_{\rm eff}f_{a}^{2}m_{\chi}^{2}\left( \frac{T}{m_{\chi}}\right). \tag{1}\]
The factor \(N_{\rm eff}\) accounts for all the kinematically available final states, weighted by the square of the particles' charges. For \(m_{\chi}<80\) GeV, all final states are fermionic. Annihilation into an electron-positron pair contributes 1 to \(N_{\rm eff}\); annihilation into a quark-antiquark pair, whose charges are \(\pm qe\), contributes \(3q^{2}\) to \(N_{\rm eff}\), where the factor of 3 accounts for color degrees of freedom. For \(m_{\chi}>80\) GeV, we must include the possibility that the Majorana fermions can annihilate into \(W\) bosons. We can accommodate this in \(N_{\rm eff}\) with the term \(\frac{3}{4}m_{\chi}^{2}/m_{W}^{2}\) if we employ the approximation, as in Ref. [43], that \(m_{\chi}\gg m_{W}\).
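As a simple illustration of how \(N_{\rm eff}\) builds up with the DM mass, the sketch below sums the charge-squared-weighted open channels. This is our own illustration: the fermion masses are approximate values, and the \(W\) term uses the approximation just described.

```
# Approximate masses [GeV], charges (in units of e) and colour factors of the
# charged SM fermions used to build N_eff.
CHARGED_FERMIONS = [
    (0.000511, 1.0, 1), (0.1057, 1.0, 1), (1.777, 1.0, 1),     # e, mu, tau
    (0.0022, 2/3, 3), (0.0047, 1/3, 3), (0.095, 1/3, 3),       # u, d, s
    (1.27, 2/3, 3), (4.18, 1/3, 3), (173.0, 2/3, 3),           # c, b, t
]
M_W = 80.4  # GeV

def n_eff(m_chi):
    """Charge-squared-weighted count of kinematically open annihilation channels."""
    total = sum(colour * q**2 for mass, q, colour in CHARGED_FERMIONS if m_chi > mass)
    if m_chi > M_W:
        total += 0.75 * m_chi**2 / M_W**2       # longitudinal W contribution
    return total

print(round(n_eff(10.0), 2))    # 6.67 for 5 GeV < m_chi < 80 GeV
```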
Moving beyond the anapole moment, we consider two photon interactions with a Majorana fermion. Spin-independent interactions arise from the dimension-7 effective Lagrangian discussed above. We are interested in the spin-dependent two-photon interactions because s-wave annihilation proceeds through these channels. The spin-dependent terms that couple the Majorana fermion to two photons involve derivatives of the electromagnetic field, and they have been characterized in Ref. [55] and elsewhere. The four coefficients of these dimension-8 terms, the spin-dependent polarizabilities \(\gamma_{j}\), carry mass dimension \([M]^{-4}\). Ostensibly, these polarizabilities would seem to scale as \(\Lambda^{-4}\), but in considering an explicit simplified UV complete theory, the reality is somewhat more nuanced. As with the anapole moment, the effective two-photon coupling to a Majorana fermion arises from a direct coupling of this fermion to charged particles. Let's suppose these charged particles have mass \(\Lambda\) and \(m\) with \(\Lambda>m\). From a simplified UV-completion [50], one finds that the polarizabilities can scale as \(\sim\frac{1}{\Lambda^{4}}\) and \(\sim\frac{1}{\Lambda^{2}m^{2}}\).
These two mass scales in the spin-dependent polarizability coefficients also make an appearance in the s-wave annihilation cross section. Dimensional analysis suggests \(\langle\sigma_{s}|v|\rangle\sim\gamma^{2}m_{\chi}^{6}\sim\tilde{g}^{2}\frac{m_{\chi}^{6}}{\Lambda^{8}}\) where \(\gamma\) is some linear combination of the polarizabilities and \(\tilde{g}\) is a dimensionless coefficient. From a UV complete theory, we note that there are scenarios in which the s-wave annihilation cross section is not as suppressed as one might naively assume: \(\langle\sigma_{s}|v|\rangle\sim\tilde{g}^{2}\frac{m_{\chi}^{6}}{\Lambda^{4}m^{4}}\)[50]. We will adopt the EFT s-wave annihilation rate to be
\[\langle\sigma_{s}|v|\rangle=\tilde{g}^{2}\frac{m_{\chi}^{2}}{\Lambda^{4}\mu^{ 4}}, \tag{2}\]
where \(\mu:=\frac{m}{m_{\chi}}\) with \(1<\mu<\frac{\Lambda}{m_{\chi}}\).
We would like to compare the relative s- and p-wave annihilation rates in order to determine their impact upon the relic density for a thermal WIMP. In the anapole interaction, the dimensionless coupling, \(g\), must be small enough in order for perturbative calculations to be viable; we will set \(g=1\). For the polarizabilities, the corresponding dimensionless coupling, \(\tilde{g}\), is relatable to \(g\) in a UV-complete theory. The polarizabilities arise from, at least, a four-vertex Feynman diagram while the anapole moment comes from a three-vertex diagram. Given this, \(\tilde{g}\) should involve an extra factor of \(e\) relative to \(g\) which would suppress \(\tilde{g}\) by a factor \(\mathcal{O}(10^{-1})\) relative to \(g\). But, at the same time, \(\tilde{g}\) may incorporate corrections that are logarithmic in the relevant mass scale that could be \(\mathcal{O}(10)\)[49]. Given this, we will also take \(\tilde{g}=1\), admitting that a particular UV-complete theory might deviate from this value by a factor of 10.
With these couplings fixed, we find that s-wave annihilation into photons is larger than or comparable to p-wave annihilation into charged particles whenever
\[\mu^{4}\lesssim\frac{1}{16}\frac{1}{\alpha N_{\rm eff}}\frac{m_{\chi}}{T}. \tag{3}\]
To see what size \(\mu\) makes the two annihilation channels comparable, we consider DM masses, \(m_{\chi}\), between 5 GeV and 80 GeV because, in this mass range, \(N_{\rm eff}\) is fixed at 6.67. As a figure of merit, thermal WIMPs typically fall out of thermal equilibrium in the early Universe for a temperature around \(T_{f}\sim\frac{m_{\chi}}{20}\). Given this, we see that s-wave annihilation can exceed the p-wave process for \(m\leq 2.3m_{\chi}\). The upshot is that the s-wave annihilation mode is subdominant unless \(m\) is \(\mathcal{O}(m_{\chi})\).
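To make this estimate explicit (using only the values quoted above, \(\alpha\approx 1/137\), \(N_{\rm eff}=6.67\), and \(T_{f}\sim m_{\chi}/20\)), the right-hand side of Eq. (3) evaluates to

\[\mu^{4}\lesssim\frac{1}{16}\,\frac{1}{\alpha N_{\rm eff}}\,\frac{m_{\chi}}{T_{f}}\approx\frac{1}{16}\times\frac{137}{6.67}\times 20\approx 26,\qquad\mu\lesssim 26^{1/4}\approx 2.3,\]

which reproduces the bound \(m\leq 2.3\,m_{\chi}\).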
## III Rate of Reversible spin-flip processes
Before freeze out, DM is in thermal equilibrium with the rest of the universe, and if the DM medium were within a background external current, we would expect a slight polarization of the medium by virtue of the Boltzmann distribution. But, if a DM medium is initially unpolarized and then subjected to a current, there must be a sufficient rate of spin-flip interactions to assist in establishing polarization. In particular, the spin-flip rate must be much larger than the Universe's expansion rate, \(\Gamma_{\text{spin flip}}\gg H\). In the radiation-dominated era, we have
\[H=1.66g_{*}^{\frac{1}{2}}\frac{T^{2}}{M_{\text{Pl}}}, \tag{4}\]
where \(g_{*}\) represents the relativistic degrees of freedom at temperature \(T\) and \(M_{\text{Pl}}\) is the Planck mass [56].
The anapole moment is spin dependent, so interaction, via a single virtual photon, with a charged SM particle in the relativistic plasma can change a Majorana fermion's spin orientation. We first focus upon the interaction between the Majorana fermion \(\chi\) and one species of relativistic fermion \(\psi\) with charge \(qe\). We assume the background current is \(\mathbf{J}=J\hat{\mathbf{z}}\), and we average over the initial spin of the charged fermion and sum over its final spin states. Below, we compute the amplitude for the process: \(\chi(p,\downarrow)+\psi(k)\rightarrow\chi(p^{\prime},\uparrow)+\psi(k^{ \prime})\). To do so, we make several simplifying assumptions. In particular, we neglect the mass of the charged fermion because it is relativistic. Also, the Majorana fermion is non-relativistic which implies \(|\mathbf{p}|,|\mathbf{p}^{\prime}|\ll m_{\chi}\). Implementing these approximations, the leading order contribution to the squared amplitude for this process is
\[|\mathcal{M}_{\psi}|^{2}\approx 4q^{2}e^{2}f_{a}^{2}m_{\chi}^{2}(E_{\psi}E_{ \psi}^{\prime}-k_{z}k_{z}^{\prime}). \tag{5}\]
Integrating over the phase space for the final states, the total DM spin-flip cross section is
\[\sigma_{\psi\text{ flip}}=q^{2}\alpha f_{a}^{2}E_{\psi}^{2}, \tag{6}\]
where \(\alpha\) is the fine structure constant.
The rate at which the charged SM fermion can flip the spin of Majorana fermion depends on the total interaction cross section as well as the flux of incident charged particles, \(\Gamma_{\psi\text{ flip}}=n_{\psi}|v|\sigma_{\psi\text{ flip}}\). At temperature \(T\), the fermion number density in the radiation-dominated early Universe is given by
\[n_{\text{fermion}}= \frac{3}{4}\frac{\zeta(3)}{\pi^{2}}g_{\text{dof}}T^{3}, \tag{7}\]
where \(\zeta\) is the Riemann zeta function and \(g_{\text{dof}}\) represents the degrees of freedom for the particular species [56]. In the plasma, these fermions are incident upon the Majorana fermion from all directions with a thermal distribution of momenta, so we average over all possible \(\mathbf{k}\) in the distribution. The thermally averaged cross section becomes
\[\langle\sigma_{\psi\text{ flip}}|v|\rangle=\frac{15\zeta(5)}{\zeta(3)}q^{2}\alpha f_{a}^{2}T^{2}. \tag{8}\]
With this averaged cross section, we now have the thermally averaged spin-flip rate, \(\langle\Gamma_{\psi\text{ flip}}\rangle\), due to interaction with a single species of fermion in the early Universe. At a given temperature, there could be a host of different relativistic fermion species. We incorporate this through an incoherent sum of spin-flip rates induced by each individual relativistic species appropriately weighted by its charge, \(q\).
For temperatures above 80 GeV, the \(W\) boson is relativistic, so it can appreciably contribute to the spin-flip rate of the Majorana fermion. As with the charged fermion, the interaction between the \(W\) boson and Majorana fermion involves the exchange of a virtual photon, \(\chi(p,\downarrow)+W(k)\rightarrow\chi(p^{\prime},\uparrow)+W(k^{\prime})\). When computing the cross section for this process, we make several simplifying assumptions. As above, we keep only leading order terms for the fermion spin line, taking \(|\mathbf{p}|,|\mathbf{p}^{\prime}|\ll m_{\chi}\), and we assume the \(W\) boson to be highly relativistic with \(|\mathbf{k}|,|\mathbf{k}^{\prime}|\gg m_{W}\). For these relativistic bosons, the longitudinal polarization state will dominate with \(\varepsilon_{L}^{\mu}\rightarrow\frac{k^{\mu}}{m_{W}}\). Implementing these assumptions, the leading contribution to the squared amplitude is
\[|\mathcal{M}_{W}|^{2}\approx 4e^{2}f_{a}^{2}\frac{m_{\chi}^{2}}{m_{W}^{4}}(k \cdot k^{\prime})^{2}|\mathbf{S}\cdot(\mathbf{k}+\mathbf{k}^{\prime})|^{2}, \tag{9}\]
where we define the three-vector \(\mathbf{S}:=(1,-i,0)\). Integrating over the phase space of the final states, we find the total cross section to be
\[\sigma_{W\text{ flip}}|v|=\frac{1}{6}\alpha f_{a}^{2}\frac{E_{W}^{4}}{m_{W}^{2}}(5-\cos 2\theta), \tag{10}\]
where \(\theta\) is the polar angle of the initial boson's momentum, \(\mathbf{k}\). When relativistic, the boson number density in the radiation-dominated early Universe is given by [56]
\[n_{\text{boson}}= \frac{\zeta(3)}{\pi^{2}}g_{\text{dof}}T^{3}. \tag{11}\]
Averaging over the thermal distribution of relativistic \(W\) bosons we find
\[\langle\sigma_{W\text{ flip}}|v|\rangle=300\alpha f_{a}^{2}\frac{\zeta(7)}{\zeta(3)}\frac{T^{4}}{m_{W}^{2}}. \tag{12}\]
For temperatures beyond \(m_{W}\), we add to the thermally averaged fermion spin-flip rate the contributions from the interactions with the \(W\) boson, \(\langle\Gamma_{W\text{ flip}}\rangle=n_{\text{boson}}\langle\sigma_{W\text{ flip}}|v|\rangle\).
We now consider the rate at which all relativistic particles can flip the spin of a non-relativistic Majorana fermion in the early Universe. In comparing the spin-flip rate to the Universe's expansion rate, we find
\(\frac{\langle\Gamma_{\text{flip}}\rangle}{H}\approx\alpha f_{a}^{2}M_{\rm Pl}T^{3}\) whenever only charged fermions are relativistic, \(T<m_{W}\). Beyond that temperature, we have \(\frac{\langle\Gamma_{\text{flip}}\rangle}{H}\approx\alpha f_{a}^{2}\frac{M_{\rm Pl}}{m_{W}^{2}}T^{5}\), accurate to within a factor of a few. Using the full expression for the spin-flip rate, we plot in Fig. 1(a) the anapole moment at which the spin-flip rate equals the Hubble parameter as a function of temperature. For anapole moments above this curve, spin-flip interactions are sufficiently frequent to allow a collection of Majorana fermions to partially polarize in a background current via thermalization.
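To make this comparison concrete, the short Python sketch below evaluates the anapole moment at which the spin-flip rate of Eqs. (7)-(8), for a single relativistic charged fermion species, equals the expansion rate of Eq. (4). The choices \(g_{*}=100\), \(q=1\), \(g_{\rm dof}=4\), and the sample temperatures are illustrative assumptions rather than values taken from the figures.

```python
import numpy as np
from scipy.special import zeta

M_PL = 1.22e19            # Planck mass [GeV]
ALPHA = 1.0 / 137.0       # fine-structure constant

def hubble(T, g_star=100.0):
    """Radiation-era expansion rate, Eq. (4)."""
    return 1.66 * np.sqrt(g_star) * T**2 / M_PL

def gamma_flip_fermion(T, f_a, q=1.0, g_dof=4.0):
    """Spin-flip rate from a single relativistic charged fermion species, Eqs. (7)-(8)."""
    n_psi = 0.75 * zeta(3) / np.pi**2 * g_dof * T**3
    sigma_v = 15.0 * zeta(5) / zeta(3) * q**2 * ALPHA * f_a**2 * T**2
    return n_psi * sigma_v

def f_a_equal_rates(T, q=1.0, g_dof=4.0, g_star=100.0):
    """Anapole moment at which the single-species spin-flip rate equals H."""
    rate_per_fa2 = gamma_flip_fermion(T, 1.0, q, g_dof)   # the rate scales as f_a^2
    return np.sqrt(hubble(T, g_star) / rate_per_fa2)

for T in [1.0, 5.0, 25.0]:           # temperatures in GeV
    print(T, f_a_equal_rates(T))     # threshold anapole moment in GeV^-2
```

The printed thresholds fall off roughly as \(T^{-3/2}\), mirroring the \(T^{3}\) growth of \(\langle\Gamma_{\text{flip}}\rangle/H\) noted above.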
Treating the anapole moment as an effective interaction, \(f_{a}=\frac{g}{\Lambda^{2}}\), we can constrain the energy scale for UV completion. Setting \(g=1\), we plot in Fig. 1(b) the energy \(\Lambda\) for which the Majorana fermion's spin-flip rate in the early Universe is equal to the Hubble expansion rate. From the figure, we may determine, for a given temperature, the upper limit on \(\Lambda\) for which spin-flip interactions are sufficient to achieve Majorana fermion polarization. This energy scale can also be used to inform our knowledge of additional, higher-order, effective electromagnetic interactions with the Majorana fermion. In particular, it sets the scale for the DM's polarizabilities.
For a fixed \(\Lambda\), Fig. 1(b) shows the _lowest_ temperature at which the spin-flip and expansion rates are equal because \(\frac{\langle\Gamma_{\text{flip}}\rangle}{H}\sim T^{3}\) (or \(\frac{\langle\Gamma_{\text{flip}}\rangle}{H}\sim T^{5}\) at higher temperatures, \(T>m_{W}\)). In what follows, we would like this spin-flip rate to be sufficiently large for temperatures through freeze out, so that polarization can be achieved up until the point at which dark matter decouples from the background thermal bath. If we are to interpret the temperature in Fig. 1(b) as the freeze-out temperature for a given dark matter candidate, then the constraints on \(\Lambda\) will be more stringent. We estimate these more stringent constraints below.
The freeze-out temperature, \(T_{f}\), for a DM candidate is determined primarily by the thermally averaged annihilation cross section \(\langle\sigma_{\rm ann}|v|\rangle\). The cross section can be expanded in a power series for velocity because DM is non-relativistic when it decouples from the thermal background. Here we assume one velocity mode dominates the annihilation cross section and parametrize the cross section in terms of the background temperature by virtue of \(v\sim T^{\frac{1}{2}}\), viz., \(\langle\sigma_{\rm ann}|v|\rangle=\sigma_{0}x^{-n}\) where \(x=\frac{m_{\chi}}{T}\). If s-wave annihilation dominates, then \(n=0\); for p-wave, \(n=1\); and so on.
To precisely determine freeze out and the relic DM density, one must solve the Boltzmann equation, as discussed below in Sec. IV. However, estimates, accurate to a few percent, do exist. In particular, the freeze-out temperature and relic number density \(Y=n/s\) (relative to the entropy density \(s\)) are given by
\[x_{f} \approx\log[(n+1)a\lambda]-\left(n+\tfrac{1}{2}\right)\log[\log[( n+1)a\lambda]] \tag{13}\] \[Y_{\infty} \approx\frac{(n+1)}{\lambda}x_{f}^{n+1} \tag{14}\]
where \(a=0.289\,g_{*}^{-1}\) and \(\lambda=0.264\,g_{*}^{1/2}M_{\rm Pl}m_{\chi}\sigma_{0}\)[56; 57].
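For orientation, the following sketch evaluates Eqs. (13) and (14) for a p-wave dominated cross section. The values \(g_{*}=90\) and \(N_{\rm eff}=6.67\), and the exemplar pair \((m_{\chi},\Lambda)=(100\ \text{GeV},579\ \text{GeV})\), are assumptions borrowed from the exemplar used later in this section.

```python
import numpy as np

M_PL, ALPHA = 1.22e19, 1.0 / 137.0   # Planck mass [GeV], fine-structure constant

def freeze_out_estimate(m_chi, sigma0, n=1, g_star=90.0):
    """x_f and Y_inf from Eqs. (13)-(14); n=1 corresponds to p-wave dominated annihilation."""
    a = 0.289 / g_star
    lam = 0.264 * np.sqrt(g_star) * M_PL * m_chi * sigma0
    arg = (n + 1) * a * lam
    x_f = np.log(arg) - (n + 0.5) * np.log(np.log(arg))
    Y_inf = (n + 1) / lam * x_f**(n + 1)
    return x_f, Y_inf

# Exemplar p-wave cross section sigma_0 = 16*alpha*m_chi^2*N_eff/Lambda^4 from Eq. (1)
m_chi, Lam, N_eff = 100.0, 579.0, 6.67              # GeV, GeV, effective charge factor
sigma0 = 16.0 * ALPHA * m_chi**2 * N_eff / Lam**4   # GeV^-2
print(freeze_out_estimate(m_chi, sigma0))           # x_f comes out near the ~23 quoted below
```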
For the models under consideration herein, we assume the p-wave contribution to the cross section to dominate, so we set \(n=1\) and use the cross section in Eq. (1). Given a particular DM mass \(m_{\chi}\), we can determine what value of the energy scale \(\Lambda\) yields a given freeze-out temperature \(T_{f}<m_{\chi}\). Figure 2 contains these results for a range of DM masses from 1 GeV to 1 TeV. We superpose on this plot the curve from Fig. 1(b), which shows the upper limit on \(\Lambda\) at which the spin-flip rate is sufficient to achieve thermalization. Not surprisingly, for the models under consideration, the spin-flip rate is sufficiently large through the freeze-out temperature. Additionally, we consider the values of \(\Lambda\) and \(m_{\chi}\) that reproduce the relic DM mass density present today, \(\rho_{\rm DM}=\Omega_{\rm DM}\rho_{\rm crit}\), where \(\Omega_{\rm DM}\) is the DM fraction of the energy budget and \(\rho_{\rm crit}\) is the critical energy density [1]. If our DM candidate is to reproduce the relic DM density, then the energy scale \(\Lambda\) is sufficiently small that the DM candidate interacts frequently enough through freeze out to thermalize in a background current.
## IV DM density in a current
We must use the Boltzmann equation to precisely determine the evolution of the DM density from the time it becomes non-relativistic through freeze out. As above, we express as \(Y=n/s\) the DM number density relative to the entropy density, \(s\). Given this, the first moment of
Figure 1: (a) In the top panel, the curve plots the anapole moment at which the spin-flip rate of a non-relativistic Majorana fermion equals the Hubble expansion rate for a given temperature in the radiation-dominated era of the early Universe. (b) Setting \(f_{a}=\frac{g}{\Lambda^{2}}\) with \(g=1\), the curve in the lower panel shows the energy scale \(\Lambda\) at which the spin-flip rate of a non-relativistic Majorana fermion equals the Hubble expansion rate for a given temperature in the radiation-dominated era of the early Universe.
the Boltzmann equation can be written as
\[\frac{\mathrm{d}}{\mathrm{d}x}Y=-\left(0.602g_{*}^{-\frac{1}{2}}\frac{M_{\rm Pl}} {m_{\chi}^{2}}\right)\langle\sigma_{\rm ann}|v|\rangle sx(Y^{2}-Y_{\rm eq}^{2}). \tag{15}\]
The term \(Y_{\rm eq}(x)\) tracks the equilibrium number density, which we take as \(Y_{\rm eq}=a\,x^{\frac{3}{2}}e^{-x}\) in the non-relativistic regime (\(x\gg 3\)), where again \(a=0.289g_{*}^{-1}\). In what follows, we would like to determine the impact of a locally polarized region of DM on the local relic DM density, but the Boltzmann equation is derived under the assumptions of homogeneity and isotropy. The presence of a local current clearly violates these assumptions, but because the inhomogeneities we introduce are so small, we will treat the current as a local perturbative term in the Boltzmann equation.
Though the p-wave annihilation cross section typically dominates, Eq. (1), we must also consider the s-wave mode, Eq. (2). For what follows, it will be useful to factor the cross section as
\[\langle\sigma_{\rm ann}|v|\rangle= \sigma_{0}x^{-1}(1+bx), \tag{16}\]
where \(\sigma_{0}=\langle\sigma_{\rm p}|v|\rangle x\) and \(b=\langle\sigma_{\rm s}|v|\rangle/\sigma_{0}\). For the DM interactions considered herein, we find from Eq. (1) that \(\sigma_{0}=16\alpha m_{\chi}^{2}N_{\rm eff}/\Lambda^{4}\). Then, from Eq. (2), we compute \(b=\tilde{g}^{2}/(16\alpha N_{\rm eff}\mu^{4})\). If the s-wave process is to be a subdominant correction through freeze out, then we must require \(bx_{f}\ll 1\). This constrains the parameter \(\mu\): \(\mu\gg[x_{f}/(16\alpha N_{\rm eff})]^{\frac{1}{4}}\sim 2.6\). Substituting the factored annihilation cross section in Eq. (15), we have
\[\frac{\mathrm{d}}{\mathrm{d}x}Y=-\lambda(1+bx)x^{-3}(Y^{2}-Y_{\rm eq}^{2}), \tag{17}\]
where again \(\lambda=\left(0.264\,g_{*}^{\frac{1}{2}}M_{\rm Pl}m_{\chi}\right)\sigma_{0}\).
Aside from our desire to keep the s-wave annihilation mode sub-dominant, there are observational constraints on the annihilation of DM into two photons that come from a variety of sources. In search of mono-energetic gamma rays from annihilating DM in the galactic halo, Fermi-LAT places the most stringent constraints on the annihilation into two photons; we use the most stringent constraints from the R3 region of interest in Ref. [58]. Additionally, precise measurements of the cosmic microwave background (CMB) anisotropies from the Planck satellite [61] severely constrain the energy injection from DM annihilation at the time of recombination. An analysis of the CMB has resulted in stringent constraints on the s-wave DM annihilation cross section considered herein [60]. Finally, an absorption feature has been observed in the 21-cm spectrum at high redshift; this constrains DM s-wave annihilation because energy injection from annihilation would wash out the feature [59]. We use these constraints on the s-wave cross section to place lower bounds on the parameter \(\mu\) in Eq. (2). To do so, we assume that, for a given DM mass \(m_{\chi}\), the energy scale \(\Lambda\) is set by reproducing the DM relic density through p-wave annihilation only. Then the limits from Fermi-LAT, the CMB, and 21-cm data bound \(\mu\) as shown in Fig. 3.
In the presence of a local current density, we need to modify the s-wave contribution to the cross section because it requires a spin-zero initial state and a current can partially polarize the DM medium before freeze out. In particular, suppose a current \(J\) exists in a region. In thermal equilibrium, the number density of DM particles with spins aligned with the current, \(n_{\uparrow}\), will exceed those antialigned, \(n_{\downarrow}\), by an amount \(n_{\uparrow}/n_{\downarrow}=\exp[\mathcal{E}/T]\approx 1+\mathcal{E}/T\) where \(\mathcal{E}=2f_{a}J\) is the energy difference between the
Figure 2: For a given DM mass \(m_{\chi}\), the black curves show the value of \(\Lambda\) that yields a given freeze-out temperature, \(T_{f}\). For the dotted curve, the mass is 1 GeV; for the dashed curve, the mass is 10 GeV; for the dot-dashed curve, the mass is 100 GeV; for the solid (black) curve, the mass is 1 TeV. The value of \(\Lambda\) that reproduces the relic DM density for a given mass is denoted by \(\bigstar\). The solid gray curve represents the spin-flip constraints on \(\Lambda\) for a given temperature, reproduced from Fig. 1(b).
Figure 3: Bounds on \(\mu\) derived from observational limits on the s-wave DM annihilation mode. The solid (black) curve uses data from Ref. [58]; the dashed (red) curve uses data from Ref. [59]; and the dotted (blue) curve uses data from Ref. [60].
aligned and anti-aligned states. The fractional relative difference in the two spin states is \(\epsilon:=(n_{\uparrow}-n_{\downarrow})/n\approx{\cal E}/(2T)\). Then, in the Boltzmann equation, Eq. (17), the s-wave annihilation in the presence of a current is suppressed by a factor of \((1-\epsilon)\)
\[\frac{\mathrm{d}}{\mathrm{d}x}Y=-\lambda[1+b(1-\epsilon)x]x^{-3}(Y^{2}-Y_{ \mathrm{eq}}^{2}), \tag{18}\]
where \(\epsilon=f_{a}J/T\).
Before exploring the impact of a current in detail, it is worth considering the most extreme possibility in which s-wave annihilation is prohibited by virtue of complete polarization of the DM medium; that is, we compare the \(\epsilon=1\) scenario (full polarization) with the \(\epsilon=0\) scenario (no polarization). We will do this, first, using the estimates of \(x_{f}\) and \(Y_{\infty}\) from Eqs. (13) and (14) extended to include both s- and p-wave annihilation [56]
\[x_{f}\approx\log[2a\lambda]-\tfrac{3}{2}\log[\log[2a\lambda]]+\log[1+b\log[2 a\lambda]] \tag{19}\]
\[Y_{\infty}\approx\frac{2}{\lambda}x_{f}^{2}\frac{1}{(1+2bx_{f})} \tag{20}\]
If the p-wave process dominates freeze out, then \(x_{f}\sim\log[2a\lambda]\), and if we continue with our previous approximation \(bx_{f}\ll 1\), then the s-wave channel modifies the freeze-out temperature by \(x_{f}\stackrel{{\sim}}{{\to}}x_{f}+b\log[2a\lambda]\). Upon including the s-wave annihilation mode, the relic DM density should decrease by \(Y_{\infty}\stackrel{{\sim}}{{\to}}Y_{\infty}(1-2bx_{f})\). It is most useful to cast these changes in terms of the fractional change in the relic density; full polarization of the DM medium (which turns off the s-wave mode) relative to the no-current scenario results in a fractional change of
\[\frac{Y_{\infty}^{\epsilon=1}-Y_{\infty}^{\epsilon=0}}{Y_{\infty}}\approx 2bx _{f}. \tag{21}\]
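This estimate is straightforward to evaluate. In the snippet below, the coupling \(\tilde{g}=1\) is an assumption, while \(N_{\rm eff}=6.67\) and the values \(\mu=11.2\), \(x_{f}=23.1\) are taken from the exemplar discussed next.

```python
ALPHA, N_EFF = 1.0 / 137.0, 6.67
g_tilde, mu, x_f = 1.0, 11.2, 23.1                  # assumed coupling; exemplar parameters

b = g_tilde**2 / (16.0 * ALPHA * N_EFF * mu**4)     # ratio of s- to p-wave strength
print(2.0 * b * x_f)                                # fractional change from Eq. (21), roughly 0.4%
```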
Using some exemplar parameters, we would like to numerically integrate the Boltzmann equation, Eq. (18), to confirm the accuracy of the estimates in Eqs. (19 - 21). Because Eq. (18) is an extremely stiff differential equation, it is easier to set \(W=\log Y\) and instead integrate the equation [62]
\[\frac{\mathrm{d}}{\mathrm{d}x}W=\lambda[1+b(1-\epsilon)x]x^{-3}\left(e^{(2W_{ \mathrm{eq}}-W)}-e^{W}\right). \tag{22}\]
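A minimal sketch of this integration is given below, comparing the \(\epsilon=1\) (fully polarized) and \(\epsilon=0\) (unpolarized) limits. The use of `scipy.integrate.solve_ivp` with the implicit Radau method, the starting point \(x=3\), and the values \(g_{*}=90\) and \(\tilde{g}=1\) are assumptions of the illustration rather than choices made in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL, ALPHA = 1.22e19, 1.0 / 137.0
G_STAR = 90.0                        # assumed relativistic degrees of freedom
A_EQ = 0.289 / G_STAR

def relic_Y(m_chi, Lam, mu, N_eff=6.67, g_tilde=1.0, eps=0.0, x_end=1000.0):
    """Integrate Eq. (22) for W = log Y; eps=0 is the no-current case, eps=1 full polarization."""
    sigma0 = 16.0 * ALPHA * m_chi**2 * N_eff / Lam**4
    b = g_tilde**2 / (16.0 * ALPHA * N_eff * mu**4)
    lam = 0.264 * np.sqrt(G_STAR) * M_PL * m_chi * sigma0

    def W_eq(x):                     # log of Y_eq = a x^{3/2} e^{-x}
        return np.log(A_EQ) + 1.5 * np.log(x) - x

    def rhs(x, W):
        return [lam * (1.0 + b * (1.0 - eps) * x) * x**-3
                * (np.exp(2.0 * W_eq(x) - W[0]) - np.exp(W[0]))]

    sol = solve_ivp(rhs, (3.0, x_end), [W_eq(3.0)], method="Radau",
                    rtol=1e-8, atol=1e-12)
    return np.exp(sol.y[0, -1])      # Y at x_end

Y_pwave_only = relic_Y(100.0, 579.0, 11.2, eps=1.0)   # s-wave switched off (full polarization)
Y_both_modes = relic_Y(100.0, 579.0, 11.2, eps=0.0)   # both channels active (no current)
print((Y_pwave_only - Y_both_modes) / Y_both_modes)   # fractional change in the relic density
```

Comparing the two runs gives a fractional change of the same order as that quoted below; the precise value depends on the assumed \(g_{*}\).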
For parameters, we set \(m_{\chi}=100\) GeV. Annihilation dominated by the p-wave process reproduces the observed DM relic density for \(\Lambda=579\) GeV with freeze out around \(x_{f}\sim 23.1\). If we use the most stringent constraints on \(\mu\) derived from the Fermi-LAT data [58], then \(\mu=11.2\). From the estimate in Eq. (21), we expect a fractional increase in the relic DM density of \(0.4\%\) for a fully polarized DM medium. Numerical calculations produce the same order of magnitude change. Computing \(Y(x=1000)\) for both scenarios, we find that the p-wave only (full polarization) relic density is \(0.1\%\) larger than when both p- and s-wave annihilations (no current scenario) are considered. (We note that the estimate of \(Y_{\infty}\) from Eq. (20) relative to the numerical computation of \(Y(x=1000)\) differs by \(3.5\%\); however, the _relative_ fractional change in the estimates using Eqs. (14, 20) for the p-wave only and p- and s-wave computations for \(Y_{\infty}\) yield the correct order of magnitude result.)
We now discuss the impact of a local current upon the local relic DM density. In order to do so, we must supply some details about the form of the current. We treat the current classically, assuming that it can be represented as the net drift of the relativistic charged species in the plasma, \(J=en_{q}v_{\mathrm{drift}}\). The factor \(n_{q}\) is the sum over the number density of all charged relativistic species, Eqs. (7) and (11), in the plasma at temperature \(T\) weighted by their absolute charge. We suppose that the current exists from \(x=1\) (\(T=m_{\chi}\)) through freeze out, \(x\sim 20-25\), and we assume that the drift velocity suffers a redshift due to expansion so that \(v_{\mathrm{drift}}(x)=\frac{1}{x}v_{\mathrm{drift}}(1)\). With these assumptions, the current scales like \(J\sim x^{-4}\), and overall, the perturbing term in Eq. (18) scales with \(x\) as \(\epsilon=f_{a}J/T\sim x^{-3}\).
Before we integrate Eq. (18) with the classical current, we will develop some approximations that allow us to estimate the relative change in the local relic density. Following the arguments in Ref. [56] that yield Eqs. (19) and (20), we can achieve the order of magnitude estimates
\[x_{f}^{\epsilon}\approx x_{f}^{\epsilon=0}-b\epsilon\log[2a\lambda] \tag{23}\] \[Y^{\epsilon}(x_{f})\approx Y^{\epsilon=0}(x_{f})\left[1+\frac{1}{2}b\epsilon x_{f}\right] \tag{24}\]
where \(\epsilon\) is evaluated at \(x_{f}^{\epsilon=0}\). The fractional change in the local relic density in the presence of the assumed classical current is
\[\frac{Y^{\epsilon}(x_{f})-Y^{\epsilon=0}(x_{f})}{Y(x_{f})}\approx\frac{1}{2}b\epsilon x_{f}. \tag{25}\]
We would like to compare this crude estimate with a more robust solution of Eq. (18). The current introduces a small perturbation in the DM density \(\delta Y=Y^{\epsilon}-Y^{\epsilon=0}\), and it is sufficient to linearize Eq. (18) with respect to \(\delta Y\), neglecting terms that are higher order in small quantities
\[\frac{\mathrm{d}}{\mathrm{d}x}\delta Y=-2\lambda[1+bx]x^{-3}Y\delta Y+ \epsilon\lambda bx^{-2}(Y^{2}-Y_{\mathrm{eq}}^{2}), \tag{26}\]
where \(Y=Y^{\epsilon=0}\) is the no-current DM density. This first order non-homogenous ordinary differential equation can be solved with an integrating factor. Defining the functions
\[p(x)= 2\lambda[1+bx]x^{-3}Y \tag{27}\] \[q(x)= \epsilon\lambda bx^{-2}(Y^{2}-Y_{\mathrm{eq}}^{2}), \tag{28}\]
then the solution to Eq. (26) is
\[\delta Y(x)=\int_{1}^{x}P(x)^{-1}P(s)q(s)\mathrm{d}s \tag{29}\]
where
\[P(x)=\exp\left[\int_{1}^{x}p(s)\mathrm{d}s\right]. \tag{30}\]
Examining Eq. (29), we see that the approximation for \(\delta Y\) is explicitly linear in \(v_{\mathrm{drift}}(1)\) and manifestly positive because \(q(x)\) is positive.
To perform the integral in Eq. (29), we first consider the factor
\[P(x)^{-1}P(s)=\exp\left[-\int_{s}^{x}p(t)\mathrm{d}t\right]. \tag{31}\]
For the parameters under consideration \(p(x)\) is extremely large, ranging from \(\sim 10^{18}\) around \(x=1\) to \(\sim 10^{8}\) around \(x=x_{f}\) for the DM mass of 100 GeV (and associated parameters considered above). Because \(p(x)\) is so large, the factor \(P(x)^{-1}P(s)\) vanishes, for all practical purposes, except at \(s=x\), where \(P(x)^{-1}P(s=x)=1\). Because of this feature, only the value \(q(x)\) is of any consequence in the integrand
\[\delta Y(x)\approx q(x)\int_{1}^{x}P(x)^{-1}P(s)\mathrm{d}s. \tag{32}\]
The remaining integral in Eq. (32) can be accurately estimated by a Taylor expansion of the argument of the exponential in Eq. (31) about \(s=x\)
\[P(x)^{-1}P(s)\approx\exp\left[(s-x)p(x)\right]. \tag{33}\]
We can then estimate the integral
\[\int_{1}^{x}P(x)^{-1}P(s)\mathrm{d}s\approx\frac{1}{p(x)} \tag{34}\]
for \(x\gg 1\). This yields an estimate for \(\delta Y\) of
\[\delta Y(x)=\frac{q(x)}{p(x)} \tag{35}\]
for \(x\gg 1\). Using the definitions of \(p\) and \(q\) in Eqs. (27, 28), we find at freeze out
\[\delta Y(x_{f})=\frac{\epsilon bx_{f}[Y^{2}(x_{f})-Y_{\mathrm{eq}}^{2}(x_{f}) ]}{2[1+bx_{f}]Y(x_{f})}, \tag{36}\]
where \(\epsilon\) is evaluated at \(x_{f}\). For small \(b\), we can approximate this as
\[\frac{\delta Y(x_{f})}{Y(x_{f})}=\frac{1}{2}\epsilon bx_{f}\left[1-\frac{Y_{\mathrm{eq}}^{2}(x_{f})}{Y^{2}(x_{f})}\right]. \tag{37}\]
In Eq. (37), if we further neglect \(Y_{\mathrm{eq}}^{2}\) relative to \(Y^{2}\) at freeze out, then we recover our crude estimate from Eq. (25): \(\frac{\delta Y(x_{f})}{Y(x_{f})}\approx\frac{1}{2}\epsilon bx_{f}\). Executing the calculation for \(\delta Y/Y\) in Eq. (37) for DM mass \(m_{\chi}=100\) GeV (and associated parameters considered above), we find that the result is about 0.58 times the crude estimate, \(\frac{1}{2}\epsilon bx_{f}\).
With Eq. (37), we are now able to determine how a local electric current can impact the local DM density through freeze out. As noted previously, we model the current classically as a steady current from \(x=1\) through freeze out, modulo a decreasing plasma density and redshifted drift velocity, whose initial value is \(v_{\mathrm{drift}}(1)\). For our calculations, we take \(v_{\mathrm{drift}}(1)=1\). Because \(\delta Y/Y\) is manifestly linear in \(v_{\mathrm{drift}}(1)\), one can easily scale our results to accommodate more realistic values for the drift velocity. With this assumption, we present our results in Fig. 4. For a given DM mass, we determine the mass scale \(\Lambda\) by fixing the relic DM density to the observed value, assuming p-wave annihilation determines this. Then, we set the mass ratio \(\mu\) to satisfy the constraints on s-wave annihilation derived from Fermi-LAT data [58], CMB data from the Planck satellite [60; 61], and observations of the 21-cm line [59].
## V Discussion and Conclusion
Herein, we considered the impact that a local current might have on the evolution of the DM density around DM freeze out. The anapole moment of a Majorana fermion DM candidate tends to align with external currents provided there are sufficient spin-flip interactions to thermalize the DM. We find that, for the model parameters under consideration, DM can thermalize, so that the DM states, described by a Boltzmann distribution, can lead to a partially polarized DM medium. This partial polarization is of consequence for the available DM anni
Figure 4: The local relative change in DM density at freeze out due to the presence of a classical current with \(v_{\mathrm{drift}}(1)=1\). For a given \(m_{\chi}\), the parameters are chosen to reproduce the observed relic DM density and satisfy constraints on s-wave annihilation derived from Fermi-LAT data (the solid (black) curve) [58], 21-cm spectral data (the dashed (red) curve) [59], and CMB data (the dotted (blue) curve) [60].
hilation channels.
Rather generically, a Majorana fermion DM candidate can interact with real photons through higher order processes. In particular, DM can annihilate into two real photons in an s-wave process by virtue of its spin-dependent polarizabilities. In a partially polarized DM medium, this s-wave annihilation channel is somewhat suppressed because it requires the initial DM states to have opposite spins. As a consequence, the overall DM annihilation rate is smaller than it would be if no current were present, and as the Universe cools and expands, the lower annihilation rate will result in a slightly higher relic DM density than in a no-current region.
Referring to Fig. 4, we find that, for an initial charge carrier drift speed approaching the speed of light, the relative overdensity of DM in the current-bearing region ranges from \(\sim 10^{-13}\) to \(\sim 10^{-6}\) depending on the DM mass and s-wave constraint used. Generally, the constraints on \(\mu\) decrease with DM mass, and because \(\delta Y\sim\mu^{-4}\), the size of \(\delta Y\) increases substantially with \(m_{\chi}\). A density variation of order \(10^{-6}\) is extremely large at this time in the early Universe, but this is based upon an unrealistic drift speed. Our results scale linearly with the initial drift speed, so it is trivial to rescale them to more realistic current magnitudes.
In terms of the assumptions in our calculations, we assume a current that persists from temperature \(T=m_{\chi}\) to freeze out \(T\sim\frac{1}{20}m_{\chi}\). The density change \(\delta Y\), however, is most impacted by the existence of the current around freeze out, so the assumed longevity of the current in our calculations could be relaxed without significantly impacting the results. Additionally, we modeled our current as a simple classical current density; more complex models could also fit into our existing work. Finally, throughout we assumed that the p-wave annihilation mode determines the relic density in our calculations. This is an excellent approximation when using the constraints on \(\mu\) derived from the observations in Ref. [58], but for DM masses greater than a few hundred GeV, the approximation is strained if using the constraints on \(\mu\) derived from Refs. [59; 60].
The premise of our entire work requires the presence of substantial electric currents in the early Universe around DM freeze out, but the actual existence of such currents is beyond the scope of this work. We do find, in the literature, arguments supporting the existence of electric currents in the early Universe specifically during the inflationary period [63], around the electroweak phase transition [64; 65], and around the QCD phase transition [66; 67; 68; 64]. These currents should be short-lived, compared to the Hubble time [69]. For our purposes, long-lived currents are not crucial to the results, rather the strength of the current and the time at which it occurs are the more important factors.
## VI Acknowledgments
DCL thanks the Kavli Institute for Theoretical Physics for its hospitality during the completion of this work. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
|
2302.04955 | Constraints for eliminating the Gibbs phenomenon in finite element
approximation spaces | One of the major challenges in finite element methods is the mitigation of
spurious oscillations near sharp layers and discontinuities known as the Gibbs
phenomenon. In this article, we propose a set of functionals to identify
spurious oscillations in best approximation problems in finite element spaces.
Subsequently, we adopt these functionals in the formulation of constraints in
an effort to eliminate the Gibbs phenomenon. By enforcing these constraints in
best approximation problems, we can entirely eliminate over- and undershoot in
one dimensional continuous approximations, and significantly suppress them in
one- and higher-dimensional discontinuous approximations. | M. ten Eikelder, S. Stoter, Y. Bazilevs, D. Schillinger | 2023-02-09T22:08:28Z | http://arxiv.org/abs/2302.04955v2 | # Constraints for eliminating the Gibbs phenomenon in finite element approximation spaces
###### Abstract
One of the major challenges in finite element methods is the mitigation of spurious oscillations near sharp layers and discontinuities known as the Gibbs phenomenon. In this article, we propose a set of functionals to identify spurious oscillations in best approximation problems in finite element spaces. Subsequently, we adopt these functionals in the formulation of constraints in an effort to eliminate the Gibbs phenomenon. By enforcing these constraints in best approximation problems, we can entirely eliminate over- and undershoot in one dimensional continuous approximations, and significantly suppress them in one- and higher-dimensional discontinuous approximations.
Gibbs phenomenon; Finite element methods; Constrained optimization; Isogeometric analysis; Discontinuous Galerkin.
AMS Subject Classification: 65N30, 65K10, 35L67
## 1 Introduction
### Historical overview
The discovery of the Gibbs phenomenon may be traced back to Henry Wilbraham (1848), and the phenomenon was rediscovered by J. Willard Gibbs (1898-1899), in their studies on Fourier series [1, 2]. It is traditionally described as the inability to recover point values of a discontinuous function by a truncated Fourier expansion. Near the discontinuity, the error does not vanish as the number of terms in the expansion is increased, and the magnitude of the over- and undershoots tends to a fixed limit. The limiting value is known as the Gibbs constant. It is less well known that the Gibbs phenomenon also occurs in truncated expansions of other sets of orthogonal functions [3, 4]. In fact, the associated Gibbs constants are often identical, as is the case for expansions with Legendre, Hermite, or Laguerre polynomials [5, 6].
Fundamentally, the Gibbs phenomenon has, however, little to do with Fourier series or expansions in orthogonal polynomials. The effect arises from the best approximation in a square integral metric, of which these expansions are examples [7]. As such, it also occurs in best approximation problems in the \(L^{2}\)-metric by piecewise linear polynomials [7] or splines [8]. The role of the metric herein is crucial: spurious oscillations that appear in the \(L^{2}\)-metric are significantly more severe than those that occur in the \(L^{q}\)-metric when \(q\) tends to 1, for which they are in some cases even completely absent [9]. A detailed study on the possible elimination of the Gibbs phenomenon in \(L^{q}\)-best approximation by piecewise linear finite element shape functions is presented in [10]. In the last few decades, the \(L^{1}(\Omega)\) functional setting has hence been explored as the point of departure for approximating solutions to partial differential equations (PDEs) [11, 12, 13, 14, 9]. The main challenge behind these approaches is that they require the minimization of a nondifferentiable functional, which leads to a poorly behaved nonlinear problem. As a consequence, there is a lack of practical algorithms for solving even standard problems in computational mathematics. Additionally, even though approximations in subspaces of \(L^{1}(\Omega)\) reduce spurious oscillations, on some meshes these do not vanish in general [15, 10].
The more conventional (Bubnov-)Galerkin method produces solutions that are optimal in an inner product induced norm (associated with subspaces of \(L^{2}(\Omega)\)). As such, approximations of interior and boundary layers indeed tend to suffer from spurious oscillations. This issue is well known in the finite element community, and many remedies have been proposed. Arguably the most successful is the class of residual-based stabilized methods [16, 17, 18], which are primarily adopted for applications in fluid mechanics. Residual-based stabilization significantly improves the solution quality in regions free of abrupt changes, but the Gibbs phenomenon still occurs in regions with sharp layers. As a remedy, the finite element formulation is often augmented with a nonlinear stabilization mechanism that locally introduces artificial diffusion [19, 20, 21]. These methods are referred
to as shock- or discontinuity capturing methods.
In the case of nonlinear (hyperbolic) evolution equations, the above stabilization methods still do not suffice. In order to enhance the quality of numerical approximations for these types of problems, algorithms have been designed to inherit certain stability properties of the underlying PDE. The prevalent example is the entropy stability property possessed by entropy solutions. Weak solutions of nonlinear evolution equations are not unique and the entropy stability property singles out the entropy solution as the physically relevant solution [22]. The entropy stability concept, which reduces for many physical systems to an energy-dissipation property, has frequently been used in the construction of stable finite element methods [23, 24, 25, 26, 27, 28]. Even though the solution quality enhances significantly, numerical solutions that inherit entropy stability do not preclude spurious oscillations. For particular variable sets, the Galerkin method may even exactly satisfy the entropy stability condition but still exhibit spurious oscillations [29]. Evidently, the entropy stability concept is not inextricably linked to the Gibbs phenomenon. It does, however, seem to be a good indicator for the identification of shock waves, and thus as an indicator of where the Gibbs phenomenon might manifest.
A stability concept that is more directly targeted at removing the Gibbs phenomenon is the total variation diminishing (TVD) property introduced by Harten [30, 31]. Solutions with the TVD property preclude the growth of the total variation of the solution. The design of numerical schemes with the TVD property is still an active area of research. The incentive for the design of TVD schemes is the desire to produce numerical approximations that satisfy the maximum principle, as well as certain monotonicity properties. Despite its success, particularly in the finite difference and finite volume communities, the applicability of TVD schemes is limited. Namely, the concept is solely suitable for time-dependent, scalar conservation laws and does not provide any information on local solution quality. Moreover, its introduction in the discrete setting relies on a Cartesian grid and lacks frame-invariance.
The above two observations, namely (i) the occurrence of the Gibbs phenomenon in entropy stable discrete solutions, and (ii) the limitations of TVD schemes, have incentivized the design of a novel stability concept called _variation entropy theory_ [32]. This theory provides a local continuous generalization of the TVD stability condition for general conservation laws in an entropy framework. Similar to classical entropy solutions, variation entropy solutions satisfy an underlying stability condition. This stability condition serves in the discrete setting as an indicator of the Gibbs phenomenon. It has successfully been employed in the variational multiscale (VMS) paradigm [33, 34, 35] to design a framework for discontinuity capturing methods [21].
### Objective
Despite the significant attention it has gained, a precise mathematical definition of the Gibbs phenomenon does not exist. Any attempt at eliminating the Gibbs
phenomenon thus first requires an identification strategy. The identifier that we develop is rooted in variation entropy theory. We then propose to eliminate the Gibbs phenomenon via the enforcement of constraints.
This brings us to the main objective of this article: _to identify a set of practical constraints that aim to eliminate the Gibbs phenomenon in the approximation of sharp layers and discontinuities in finite element spaces_. To facilitate the analysis, we discuss our results in the isogeometric analysis framework, which we think of as a generalization of \(\mathcal{C}^{0}\)- and \(\mathcal{C}^{-1}\)-finite element spaces to higher order continuity.
Some remarks are in order. First, it may seem feasible to construct constraints that remove oscillations by explicitly choosing the coefficients of the basis functions such that the numerical approximation does not exceed the bounds of the analytical profile. This is, however, not a strategy that is realizable in practical computations. The challenge is thus to establish a set of constraints that can be adopted in practice. Second, the idea of _a priori_ enforcing constraints in numerical methods is not new. A notable contribution in this regard is the work of Evans et al. [36], in which a framework for the enforcement of constraints in the VMS framework is presented.
### Main results
The main result of this paper is a set of integral constraints that aim to identify and eliminate the Gibbs phenomenon. The occurrence of the Gibbs phenomenon in a certain approximation can not solely be inferred from the approximation itself. Rather, it stands in relation to the function being approximated, and depends on the (sub)domain of interest. We propose an indicator of the form:
\[\mathscr{G}_{\phi,\omega}(\phi^{*})\leq 0, \tag{1}\]
where the function \(\phi^{*}\in H^{1}(\tilde{\Omega})\) is an approximation of the function \(\phi\in H^{1}(\Omega)\) on \(\omega\subset\Omega\). Here, \(H^{1}(\tilde{\Omega})\) is a broken Sobolev space and \(\tilde{\Omega}\) is a collection of disjoint subdomains (precise definitions are provided in Section 4). We call this constraint the _Gibbs constraint_. It follows from the _Gibbs functional_, which we define as:
\[\mathscr{G}_{\phi,\omega}(\phi^{*}):=\int_{\omega}g_{\phi}(\phi^{*})\ \mathrm{d}x, \tag{2}\]
with \(g_{\phi}\) as:
\[g_{\phi}(\phi^{*}):=\begin{cases}\|\nabla\phi^{*}\|_{2}^{-1}\nabla\phi^{*}\cdot \nabla(\phi^{*}-\phi)&\text{for}\ \nabla\phi^{*}\neq 0,\\ 0&\text{for}\ \nabla\phi^{*}=0.\end{cases} \tag{3}\]
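For concreteness, the sketch below evaluates the one-dimensional version of (2)-(3) on an interval \(\omega=(a,b)\) by Gauss-Legendre quadrature; in one dimension the integrand reduces to \(\mathrm{sign}(\partial_{x}\phi^{*})\,\partial_{x}(\phi^{*}-\phi)\) wherever \(\partial_{x}\phi^{*}\neq 0\). The example functions, the quadrature order, and the chosen subinterval are assumptions made purely for illustration.

```python
import numpy as np

def gibbs_functional_1d(dphi, dphi_star, a, b, n_quad=32):
    """Gibbs functional (2) on omega=(a,b) in 1D, with integrand (3) written as
    sign(phi_star') * (phi_star' - phi') wherever phi_star' != 0."""
    xq, wq = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * (b - a) * xq + 0.5 * (a + b)
    w = 0.5 * (b - a) * wq
    ds = dphi_star(x)
    g = np.where(ds != 0.0, np.sign(ds) * (ds - dphi(x)), 0.0)
    return np.sum(w * g)

# Example: a sharp tanh layer phi and an approximation whose derivative 'wiggles'
eps = 1.0e-2
dphi = lambda x: (1.0 - np.tanh((x - 0.5) / eps)**2) / eps
dphi_star = lambda x: dphi(x) + 20.0 * np.cos(40.0 * np.pi * x)   # assumed oscillatory derivative
print(gibbs_functional_1d(dphi, dphi_star, 0.55, 0.60))           # positive: the constraint is violated
```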
We seek functions \(\phi^{*}\) approximating \(\phi\) for which the Gibbs constraint is satisfied on predetermined sets of subdomains \(\omega\). We study the application of the Gibbs constraints in the context of finite element best approximation problems. In particular, we consider the constrained best approximation problem:
\[\phi^{h}=\underset{\theta^{h}\in\mathcal{K}_{p,\alpha}}{\text{arginf}}\ \|\phi-\theta^{h}\|_{\mathcal{H}}, \tag{4}\]
where \(\|\cdot\|_{\mathcal{H}}\) is a norm induced by a certain inner product and the feasible set is given by:
\[\mathcal{K}_{p,\alpha}:=\left\{\phi^{h}\in\mathcal{V}^{h}_{D,p,\alpha}:\ \ \mathscr{G}_{\phi,\omega_{j}}(\phi^{h})\leq 0,\ j=1,...,J,\ \omega_{j}\in \mathcal{T}_{\omega}\right\}. \tag{1.5}\]
Precise definitions of \(\mathcal{V}^{h}_{D,p,\alpha}\) and \(\mathcal{T}_{\omega}\) are provided later in the paper. We demonstrate for sharp layers that finite element approximations of arbitrary degree and continuity that satisfy the Gibbs constraint are either free of over- and undershoots (for one-dimensional continuous approximations) or exhibit significantly suppressed oscillations (for discontinuous approximations).
The choice of subdomains \(\omega\) depends on the regularity \(\alpha\) of the finite element approximation space and the dimension of the domain. In one dimension, the constraints may be applied element-wise (\(\omega_{j}=K_{j}\)) when the finite element space is either discontinuous or \(\mathcal{C}^{0}\)-continuous. For higher regularity finite element spaces (\(\alpha\geq 1\)), the subdomains \(\omega\) need to be collections of neighboring elements, and the number of collected elements increases with the regularity. In higher dimensions, the Gibbs constraints are too restrictive for continuous finite element spaces, limiting its applicability to discontinuous finite element spaces.
### Outline
The remainder of the paper is structured as follows. First, in Section 2 we provide preliminaries concerning function spaces and projectors. Then, in Section 3 we present an overview of the Gibbs phenomenon for best approximations in finite element spaces of arbitrary degree and continuity. In Section 4 we present the identification of the constraints in one spatial dimension. Next, we extend our construction to higher dimensions in Section 5. Finally, we provide a summary and outlook in Section 6.
## 2 Preliminaries
### Function spaces
We adopt the standard functional analysis setting. We denote by \(\Omega\subset\mathbb{R}^{d}\) the bounded, open and connected domain with spatial dimension \(d\), and with boundary \(\partial\Omega\). \(L^{2}(\Omega)\) is the Lebesgue space of \(2\)-integrable functions on \(\Omega\). Furthermore, \(H^{1}(\Omega)\subset L^{2}(\Omega)\) is the Sobolev space of \(L^{2}(\Omega)\)-functions with their gradient also in \([L^{2}(\Omega)]^{d}\). The subspace \(H^{1}_{0}(\Omega)\subset H^{1}(\Omega)\) consists of functions with zero trace on \(\partial\Omega\). The associated norms are denoted as \(\|\cdot\|_{L^{2}(\Omega)}=\|\cdot\|_{\Omega}\) and \(\|\cdot\|_{H^{1}_{0}(\Omega)}\). Furthermore, we use standard notation for the \(L^{2}\)-inner product on \(\Omega\), \((\cdot,\cdot)_{L^{2}(\Omega)}=(\cdot,\cdot)_{\Omega}\) and write \(\langle\cdot,\cdot\rangle_{D}\) for the duality pairing \(H^{-1/2}(D)\times H^{1/2}(D)\to\mathbb{R}\) on some boundary domain \(D\).
In this article, we consider finite element spaces of arbitrary degree and continuity. As such, we make use of the _isogeometric analysis_ framework [37, 38, 39]. We introduce knotvectors, univariate and multivariate B-splines, geometrical map
pings and the physical mesh. The ordered knotvector \(\Xi\) is defined for degree \(p\) and dimensionality \(n\) as:
\[\Xi:=\left\{-1=\xi_{1},\xi_{2},...,\xi_{n+p+1}=1\right\}, \tag{2.1}\]
where \(\xi_{i}\in\mathbb{R}\) represents the \(i\)-th knot with \(i=1,...,n+p+1\). We adopt the convention that \(p=0,1,2,\ldots\) refers to piecewise constants, linears, quadratics, etc. In this work we restrict ourselves to _open_ knotvectors, meaning that the first and last knot appear \(p+1\) times. The univariate B-spline basis functions are defined recursively for \(p=0,1,2,\ldots\). Starting with piecewise constant functions, we have:
\[N_{i,0}(\xi)=\begin{cases}1\text{ if }\xi_{i}\leq\xi<\xi_{i+1}\\ 0\text{ \ \ \ \ otherwise,}\end{cases} \tag{2.2}\]
whereas for \(p=1,2,\ldots\) the B-spline basis functions are given by:
\[N_{i,p}(\xi)=\frac{\xi-\xi_{i}}{\xi_{i+p}-\xi_{i}}N_{i,p-1}(\xi)+\frac{\xi_{i +p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}N_{i+1,p-1}(\xi). \tag{2.3}\]
This definition is augmented with the convention that if a denominator (i.e. \(\xi_{i+p}-\xi_{i}\) or \(\xi_{i+p+1}-\xi_{i+1}\)) is zero, that fraction is taken as zero. B-spline basis functions coincide with standard finite element Lagrange basis functions for \(p=0\) and \(1\), and differ for \(p\geq 2\). The set of B-spline basis functions of degree \(p\) consists of non-negative piecewise \(p\)th-order polynomial functions with local support, that form a partition of unity. Linear combinations of B-spline basis functions are referred to as B-splines. We introduce the vector \(\boldsymbol{\zeta}=\left\{\zeta_{1},...,\zeta_{m}\right\}\) consisting of all knots without repetitions. The open knot vector implies that the basis functions are interpolatory at the ends of the interval. Inside a knot interval B-spline basis functions are smooth, whereas the repetition of a knot reduces the continuity of the B-spline basis function at that knot. More precisely, a B-spline basis function of degree \(p\) at a knot \(\xi_{i}\) with multiplicity \(k_{i}\) has \(\alpha_{i}:=p-k_{i}\) continuous derivatives at \(\xi_{i}\) (note that \(\alpha_{1}=\alpha_{m}=-1\)). We denote the space of B-splines of polynomial degree \(p\) and regularity \(\boldsymbol{\alpha}=\left\{\alpha_{1},\ldots,\alpha_{m}\right\}\) as:
\[S^{p}_{\boldsymbol{\alpha}}:=\operatorname{span}\left\{N_{i,p}\right\}_{i=1}^ {n}. \tag{2.4}\]
B-spline basis functions of degree \(p\) with uniform internal multiplicity \(p\) are interpolatory and span the same space as standard \(\mathcal{C}^{0}\)-Lagrange basis functions. Similarly, B-spline basis functions of degree \(p\) with uniform internal multiplicity \(p+1\) are discontinuous and span the same space as discontinuous Lagrange basis functions.
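The recursion (2.2)-(2.3) translates directly into code; the sketch below is a plain transcription in Python, where the specific open knot vector and the half-open evaluation convention at the final knot are assumptions of the example.

```python
import numpy as np

def bspline_basis(i, p, xi, Xi):
    """Cox-de Boor recursion, Eqs. (2.2)-(2.3): value of N_{i,p}(xi) for knot vector Xi (0-based i)."""
    if p == 0:
        return 1.0 if Xi[i] <= xi < Xi[i + 1] else 0.0
    val = 0.0
    if Xi[i + p] != Xi[i]:            # zero-denominator convention: drop the term
        val += (xi - Xi[i]) / (Xi[i + p] - Xi[i]) * bspline_basis(i, p - 1, xi, Xi)
    if Xi[i + p + 1] != Xi[i + 1]:
        val += (Xi[i + p + 1] - xi) / (Xi[i + p + 1] - Xi[i + 1]) * bspline_basis(i + 1, p - 1, xi, Xi)
    return val

# Example: open knot vector on (-1,1), p=2, maximal (C^1) continuity in the interior
Xi = [-1.0, -1.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.0, 1.0]
p = 2
n = len(Xi) - p - 1                   # number of basis functions
vals = [bspline_basis(i, p, 0.25, Xi) for i in range(n)]
print(vals, sum(vals))                # non-negative values summing to one
```

Evaluating all basis functions at a point inside the patch confirms the non-negativity and partition-of-unity properties mentioned above.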
The construction of multivariate B-splines follows from taking a tensor-product of the univariate B-splines. We introduce the open knot vectors:
\[\Xi_{l}:=\left\{\xi_{1,l},\xi_{2,l},...,\xi_{n_{l}+p_{l}+1,l}\right\}, \tag{2.5}\]
for polynomial degrees \(p_{l}\) and dimensionality integers \(n_{l}\) for \(l=1,\ldots,d\). We define for each knot vector \(\Xi_{l}\) univariate B-spline basis functions \(N_{i_{l},p_{l},l}\) of polynomial degree \(p_{l}\) for \(i_{l}=1,...,n_{l}\). Again we introduce the vector of knots with repetition
\(\mathbf{\zeta}_{l}=\{\zeta_{1,l},...,\zeta_{m_{l},l}\}\) and regularity vector \(\mathbf{\alpha}_{l}=\{\alpha_{1,l},\ldots,\alpha_{m_{l},l}\}\). The Cartesian mesh on the parametric domain \(\hat{\Omega}=(-1,1)^{d}\subset\mathbb{R}^{d}\) is now given by:
\[\hat{\mathcal{T}}=\{Q=\otimes_{l=1,...,d}(\zeta_{i_{l},l},\zeta_{i_{l}+1,l}),1 \leq i_{l}\leq m_{l}-1\}\,. \tag{6}\]
The boundary of an open element \(Q\in\hat{\mathcal{T}}\) is denoted \(\partial Q\). The multivariate tensor-product B-spline basis functions are defined on the parametric mesh \(\hat{\mathcal{T}}\) as
\[N_{i_{1},...,i_{d},p_{1},...,p_{d}}=N_{i_{1},p_{1},1}\otimes\cdots\otimes N_{ i_{d},p_{d},d}. \tag{7}\]
The associated tensor-product B-spline function space on \(\hat{\mathcal{T}}\) is given by:
\[S^{p_{1},...,p_{d}}_{\mathbf{\alpha}_{1},...,\mathbf{\alpha}_{d}}:=\text{span}\left\{ N_{i_{1},...,i_{d},p_{1},...,p_{d}}\right\}^{n_{1},...,n_{d}}_{i_{1}=1,...,i_{d}=1}\,. \tag{8}\]
Throughout this paper we restrict ourselves to a uniform regularity vector \(\mathbf{\alpha}=\mathbf{\alpha}_{l}\) in the interior, i.e. \(\alpha_{2,l}=\cdots=\alpha_{m-1,l}=\alpha\), and use equal polynomial degrees \(p_{1}=\cdots=p_{d}=p\). We assume that the physical domain can be exactly described by the continuously differentiable geometrical map (with continuously differentiable inverse) \(\mathbf{F}:\mathbf{\xi}\in\hat{\Omega}\to\mathbf{x}\in\Omega\). The physical mesh on \(\Omega\) follows by applying the geometrical map \(\mathbf{F}\) on elements of the parametric mesh:
\[\mathcal{T}=\left\{K:K=\mathbf{F}(Q),Q\in\hat{\mathcal{T}}\right\}. \tag{9}\]
As usual, we demand the element \(K\in\mathcal{T}\) to be shape-regular. The boundary of an element \(K\in\mathcal{T}\) is denoted as \(\partial K\). We define the Jacobian of the mapping \(\mathbf{F}\) as \(\mathbf{J}=\partial\mathbf{x}/\partial\mathbf{\xi}\). Lastly, we introduce the finite element approximation space \(\mathcal{V}^{h}_{p,\alpha}=\left\{\mathbf{F}\left(\mathcal{S}^{p}_{\alpha} \right)\right\}:=\left\{\mathbf{F}\left(\mathcal{S}^{p,...,p}_{\alpha}\right) \right\}\), and its subspaces \(\mathcal{V}^{h}_{0,p,\alpha}\subset\mathcal{V}^{h}_{p,\alpha}\) and \(\mathcal{V}^{h}_{D,p,\alpha}\subset\mathcal{V}^{h}_{p,\alpha}\) consisting of those functions that satisfy homogeneous and inhomogeneous boundary conditions on \(\partial\Omega\), respectively.
### Projection operators
In this subsection we introduce some orthogonal projection operators \(\mathscr{P}:\mathcal{V}\to\mathcal{V}^{h}_{D,p,\alpha}\). Consider first the (constrained) \(\mathcal{H}\)-best approximation problem:
\[\phi^{h}=\underset{\theta^{h}\in\mathcal{V}^{h}_{p,\alpha}}{\operatorname{ arginf}}\,\|\phi-\theta^{h}\|_{\mathcal{H}}, \tag{10}\]
subject to the trace equality:
\[\phi^{h}|_{\partial\Omega}=\phi|_{\partial\Omega}, \tag{11}\]
where \(\|\cdot\|_{\mathcal{H}}\) is a norm induced by the inner product \((\cdot,\cdot)_{\mathcal{H}}\). The constraint (11) may be homogenized via the adoption of a lift argument, as is standard in finite element methods, whereby the approximation space becomes \(\mathcal{V}^{h}_{\partial p,\alpha}\subset\mathcal{V}^{h}_{p,\alpha}\). The \(\mathcal{H}\)-best approximation may be determined by solving the first-order optimality conditions obtained from taking the Gateaux derivative in (10):
_find \(\phi^{h}\in\mathcal{V}^{h}_{D,p,\alpha}\) such that for all \(w^{h}\in\mathcal{V}^{h}_{0;p,\alpha}\):_
\[(\mathscr{P}_{\mathcal{H}}\phi-\phi,w^{h})_{\mathcal{H}}=(\phi^{h}-\phi,w^{h})_ {\mathcal{H}}=0. \tag{2.12}\]
For approximation spaces \(\mathcal{V}^{h}_{p,\alpha}\) consisting of continuous functions (i.e. \(\alpha\geq 0\)), we introduce the \(L^{2}\)- and \(H^{1}_{0}\)-orthogonal projectors respectively as:
\[(\mathscr{P}_{L^{2}}\phi-\phi,w^{h})_{\Omega}=(\phi^{h}-\phi,w^{h })_{\Omega}=0, \tag{2.13a}\] \[(\mathscr{P}_{H^{1}_{0}}\phi-\phi,w^{h})_{H^{1}_{0}(\Omega)}=( \phi^{h}-\phi,w^{h})_{H^{1}_{0}(\Omega)}=0. \tag{2.13b}\]
Next, we consider approximation spaces \(\mathcal{V}^{h}_{p,\alpha}\) consisting of discontinuous functions (i.e. \(\alpha=-1\)). In this context, the standard \(H^{1}_{0}\)-norm does not yield a suitable best approximation problem. In order to introduce a suitable alternative to the \(H^{1}_{0}\)-best approximation problem, we first introduce some additional notation. We define the union of (\(n_{\text{el}}\)) open element domains and the associated interface skeleton as:
\[\tilde{\Omega}= \bigcup_{i=1}^{n_{\text{el}}}K_{i}, \tag{2.14a}\] \[\Gamma= \bigcup_{i=1}^{n_{\text{el}}}\partial K_{i}, \tag{2.14b}\]
and introduce \(\Gamma^{0}=\Gamma\backslash\partial\Omega\) as the interior part of the interface skeleton. Next, we introduce some trace operators that are convenient in the context of discontinuous basis functions. For an interior edge \(e\), shared by elements \(K^{+}\) and \(K^{-}\), we define the outward pointing unit normal vectors on \(e\) as \(\mathbf{n}^{+}\) and \(\mathbf{n}^{-}\), respectively. Denoting \(\phi^{+}=\phi|_{\partial K^{+}}\) and \(\phi^{-}=\phi|_{\partial K^{-}}\) of a scalar quantity \(\phi\), we define the average \(\{\!\!\{\phi\}\!\}\) and jump \([\![\phi]\!]\) on \(\Gamma^{0}\) as:
\[\{\!\!\{\phi\}\!\} =\frac{1}{2}\left(\phi^{+}+\phi^{-}\right), \tag{2.15a}\] \[[\![\phi]\!] =\phi^{+}\mathbf{n}^{+}+\phi^{-}\mathbf{n}^{-}. \tag{2.15b}\]
For a vector-valued quantity \(\boldsymbol{\psi}\) on \(e\) we define \(\boldsymbol{\psi}^{+}\) and \(\boldsymbol{\psi}^{-}\) analogously and introduce the average \(\{\!\!\{\psi\}\!\}\) on \(\Gamma^{0}\) as:
\[\{\!\!\{\psi\}\!\}=\frac{1}{2}\left(\boldsymbol{\psi}^{+}+\boldsymbol{\psi}^{ -}\right). \tag{2.16}\]
We do not require the jump of a vector quantity and leave it undefined.
We now provide an alternative to the \(H^{1}_{0}\)-best approximation problem that is well-posed. This alternative is closely related to the well-known interior penalty method. This method requires the evaluation of the boundary flux \(\partial_{n}\phi\) on \(\Gamma^{0}\), which is an unbounded operator and yields a double-valued function for \(\phi\in\mathcal{V}=H^{1}(\Omega)\). To circumvent this issue we first introduce the broken space:
\[H^{1}(\tilde{\Omega})=\left\{\phi\in L^{2}(\Omega):\phi|_{K}\in H^{1}(K)\ \text{ for all }K\in\mathcal{T}\right\}, \tag{2.17}\]
and consider solution functions \(\phi\in\tilde{\mathcal{V}}=H^{1}(\tilde{\Omega})\). To mitigate the issue of the
unbounded boundary flux operator, we introduce a suitable additional function space. Recalling that the boundary flux is double-valued, we introduce the function space of the boundary fluxes as the product space \(\mathcal{Q}\times\mathcal{Q}\), where \(\mathcal{Q}=H^{-1/2}(\Gamma^{0})\). Consider then the following operator:
\[\mathscr{P}_{\rm IP}:\tilde{\mathcal{V}} \times\mathcal{Q}\times\mathcal{Q} \longrightarrow\text{ran}\mathscr{P}_{\rm IP},\] \[(\phi,\mu^{+},\mu^{-}) \longrightarrow\left(\phi^{h},\partial_{n}\phi^{+,h},\partial_{n} \phi^{-,h}\right), \tag{18}\]
with
\[\phi^{h} =\operatorname*{arginf}_{\theta^{h}\in\mathcal{V}_{D,p,\alpha}^{h}}\frac{1}{2}\|\phi-\theta^{h}\|_{H_{0}^{1}(\tilde{\Omega})}^{2}-\left\langle\{\!\!\{\mu\mathbf{n}-\nabla\theta^{h}\}\!\},[\![\phi-\theta^{h}]\!]\right\rangle_{\Gamma^{0}}\] \[\qquad+\frac{1}{2}\left\langle\eta[\![\phi-\theta^{h}]\!],[\![\phi-\theta^{h}]\!]\right\rangle_{\Gamma^{0}}. \tag{19}\]
The range of \(\mathscr{P}_{\rm IP}\) is given by:
\[\text{ran}\mathscr{P}_{\rm IP}=\left\{(w^{h},\partial_{n}w^{+,h},\partial_{n} w^{-,h}):w^{h}\in\mathcal{V}_{D,p,\alpha}^{h}\right\}, \tag{20}\]
with dimension \(\dim\left(\text{ran}\mathscr{P}_{\rm IP}\right)=\dim\mathcal{V}_{D,p,\alpha}^ {h}\). Additionally, the mapping \(\mathscr{P}_{\rm IP}\) is idempotent, and is a linear and bounded operator on the space \(\tilde{\mathcal{V}}\times\mathcal{Q}\times\mathcal{Q}\). As a consequence, \(\mathscr{P}_{\rm IP}\) is a projector, and we refer to it as the _interior penalty projector_[40, 41]. The penalty parameter \(\eta\) penalizes mismatches of interface jumps. Well-posedness is ensured when the penalty parameter satisfies a certain lower bound. In this work we base the value of the penalty parameter \(\eta\) on the work of Shahbazi [42].
**Remark** (Interpretation of the boundary flux).: The double-valued quantity \(\mu\) acts as a surrogate for \(\partial_{n}\phi\) in (19), and is introduced to make the projector a bounded operator. In practice, we simply use \(\nabla\phi\) in place of \(\mu\mathbf{n}\) in (19). To clarify the consistency of this replacement, we expand a part of the second integrand in (19):
\[\{\!\!\{\mu\mathbf{n}-\nabla\theta^{h}\}\!\}\cdot[\![\phi-\theta^{h}]\!]=\{\!\!\{\nabla\phi-\nabla\theta^{h}\}\!\}\cdot[\![\phi-\theta^{h}]\!]\qquad\text{for }\mu^{\pm}=\partial_{n}\phi^{\pm}.\]

With \(\nabla\phi\) in place of \(\mu\mathbf{n}\), taking the Gateaux derivative in (19) yields the following first-order optimality condition:
_find_ \(\phi^{h}\in\mathcal{V}^{h}_{D;p,-1}\) _such that for all_ \(w^{h}\in\mathcal{V}^{h}_{0;p,-1}\)_:_ \[(\phi^{h}-\phi,w^{h})_{H^{1}_{0}(\Omega)}-\left\langle[\![\phi^{h}-\phi]\!],\{\!\!\{\nabla w^{h}\}\!\}\right\rangle_{\Gamma^{0}}-\left\langle\{\!\!\{\nabla\phi^{h}-\nabla\phi\}\!\},[\![w^{h}]\!]\right\rangle_{\Gamma^{0}}+\left\langle\eta[\![\phi^{h}-\phi]\!],[\![w^{h}]\!]\right\rangle_{\Gamma^{0}}=0. \tag{23}\]
With this substitution it is easy to see that the interior penalty projector is indeed associated with a best approximation problem:
\[\phi^{h}=\operatorname*{arginf}_{\theta^{h}\in\mathcal{V}^{h}_{D;p,-1}}\|\phi- \theta^{h}\|_{\mathrm{IP}(\Omega)}\text{,} \tag{24}\]
where the norm is defined as:
\[\|v\|^{2}_{\mathrm{IP}(\Omega)}:=\|v\|^{2}_{H^{1}_{0}(\Omega)}-2\left\langle\{\!\!\{\nabla v\}\!\},[\![v]\!]\right\rangle_{\Gamma^{0}}+\left\langle\eta[\![v]\!],[\![v]\!]\right\rangle_{\Gamma^{0}}. \tag{25}\]
## 3 An exposition of the Gibbs phenomenon for best approximations in finite element spaces
In this section we demonstrate the occurrence of the Gibbs phenomenon in best approximation problems that involve finite element approximation spaces of arbitrary continuity and degree. For simplicity, we work with B-splines with equal knot spacing. We consider the one-dimensional case in Section 3.1 and the two-dimensional case in Section 3.2.
### The Gibbs phenomenon in one dimension
We consider best approximations of a step function \(\phi\in L^{2}(\Omega)\) defined as:
\[\phi=\phi_{a}(x)=\left\{\begin{array}{cc}1&x>a\\ -1&x<a\end{array}\right., \tag{26}\]
where \(a\) denotes the location of the jump discontinuity. As some best approximation statements involve weak derivatives, we wish to work with solution functions in \(H^{1}(\Omega)\). Therefore, we introduce the following smooth (differentiable) approximation \(\phi\in H^{1}(\Omega)=:\mathcal{V}\) of the step function:
\[\phi=\phi_{a}^{\epsilon}(x)=\tanh\left(\frac{x-a}{\epsilon}\right), \tag{27}\]
where \(\epsilon\ll 1\) is a smoothing parameter.
We start off with the case in which the approximation space \(\mathcal{V}^{h}_{p,\alpha}\) consists of continuous functions, i.e. \(\alpha\geq 0\). The \(L^{2}\)-best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D;1,0}\) (the space spanned by continuous piecewise linear basis functions) of the smooth step function is illustrated in Figure 1. We observe that the numerical approximation \(\phi^{h}\) contains over- and undershoots near the sharp layer. These oscillations do not vanish when the number of elements is increased. In fact, the over- and undershoots on each side of the discontinuity converge to the value \(1-\sqrt{3}/2\approx 0.13\) as the number of elements
is increased (assuming that the layer is 'sufficiently sharp') [7]. We note that the Gibbs phenomenon is often mistakenly interpreted as related to approximation with higher-order basis functions. This example illustrates that this is _not_ the case.
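This behaviour is simple to reproduce numerically; the sketch below computes the \(L^{2}\)-best approximation of the smoothed step in the space of \(\mathcal{C}^{0}\) piecewise linears on a uniform mesh and reports the overshoot of the nodal maximum as a fraction of the total jump. The mesh size, the smoothing parameter, and the omission of the boundary trace constraint are simplifying assumptions of the illustration.

```python
import numpy as np

def l2_overshoot(n_el=40, a=0.5, eps=1.0e-4):
    """L2-best approximation of tanh((x-a)/eps) by C^0 piecewise linears on [0,1];
    returns the overshoot of the nodal maximum as a fraction of the total jump (=2)."""
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    h = nodes[1] - nodes[0]
    phi = lambda x: np.tanh((x - a) / eps)

    # Tridiagonal mass matrix of the hat functions
    M = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):
        M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

    # Load vector (phi, N_i) by element-wise Gauss-Legendre quadrature
    xq, wq = np.polynomial.legendre.leggauss(6)
    b = np.zeros(n_el + 1)
    for e in range(n_el):
        x0, x1 = nodes[e], nodes[e + 1]
        x = 0.5 * (x1 - x0) * xq + 0.5 * (x0 + x1)
        w = 0.5 * (x1 - x0) * wq
        b[e] += np.sum(w * phi(x) * (x1 - x) / h)
        b[e + 1] += np.sum(w * phi(x) * (x - x0) / h)

    coeffs = np.linalg.solve(M, b)          # nodal values of the L2 projection
    return (coeffs.max() - 1.0) / 2.0

print(l2_overshoot())   # close to 1 - sqrt(3)/2 ~ 0.13 for a sharp layer
```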
Figure 2 shows the approximations in \(\mathcal{V}_{D;2,0}^{h}\) and \(\mathcal{V}_{D;2,1}^{h}\), the spaces of continuous quadratic finite elements.
The figure shows over- and undershoots of roughly the same magnitude as those for the linear approximation. For the quadratic B-splines the over- and undershoots on each side of the discontinuity converge with the number of elements to a value of approximately \(0.10\)[8]. The Gibbs phenomenon persists when the polynomial order \(p\) of the maximum regularity B-spline basis functions is increased. Moreover, the magnitude of the over- and undershoots converges to the same value as that of a truncated Fourier series. This value is approximately \(0.09\) and is known as the _Gibbs constant_.
**Remark 3.1** (Different degrees of freedom).: It is important to realize that
the number of degrees-of-freedom (dofs) is significantly different for results with the same number of elements and polynomial degree \(p\) but with different regularity \(\alpha\). In this situation we have \(21\) dofs for \(\phi^{h}\in\mathcal{V}^{h}_{D,2,0}\) and only \(10\) dofs for \(\phi^{h}\in\mathcal{V}^{h}_{D,2,1}\) (the count includes boundary dofs).
In Figure 3, we visualize the \(H^{1}_{0}\)-best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D;1,0}\) (the space spanned by continuous piecewise linear basis functions) for \(a=0.5\) and \(a=0.58\). We observe nodally exact numerical approximations for both cases. The combination of the linear basis functions with the nodal exactness implies that the numerical approximations are free of over- and undershoots, i.e. the Gibbs phenomenon is not present.
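This nodal exactness is easy to verify; the sketch below assembles the one-dimensional \(H^{1}_{0}\)-best approximation in the space of \(\mathcal{C}^{0}\) piecewise linears, imposes the boundary values strongly, and confirms that the nodal errors vanish to machine precision. The mesh, the jump location, and the smoothing parameter are assumptions of the illustration.

```python
import numpy as np

def h10_nodal_error(n_el=20, a=0.58, eps=1.0e-3):
    """H^1_0-best approximation of tanh((x-a)/eps) by C^0 piecewise linears on [0,1]
    with phi^h = phi at x=0,1; returns the largest error at the nodes."""
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    h = nodes[1] - nodes[0]
    phi = lambda x: np.tanh((x - a) / eps)

    # Stiffness matrix of the hat functions
    K = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])

    # Right-hand side (phi', N_i'): exact, since N_i' is constant on each element
    b = np.zeros(n_el + 1)
    for e in range(n_el):
        dphi_int = phi(nodes[e + 1]) - phi(nodes[e])   # integral of phi' over element e
        b[e] += -dphi_int / h
        b[e + 1] += dphi_int / h

    # Impose the boundary values strongly and solve for the interior coefficients
    c = phi(nodes).copy()
    inner = slice(1, n_el)
    rhs = b[inner] - K[inner, 0] * c[0] - K[inner, -1] * c[-1]
    c[inner] = np.linalg.solve(K[inner, inner], rhs)
    return np.max(np.abs(c - phi(nodes)))

print(h10_nodal_error())   # ~1e-15: the best approximation interpolates phi at the nodes
```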
In Figure 4, we illustrate the \(H^{1}_{0}\)-best approximations for the approximation
spaces of quadratic finite elements \(\mathcal{V}_{D,2,0}^{h}\) and \(\mathcal{V}_{D,2,1}^{h}\). We observe over- and undershoots for both the quadratic Lagrange polynomials and the B-spline functions. Again, these oscillations persist with mesh refinement. For the case of the Lagrange basis functions, we have nodal exactness at element boundary nodes, but the monotonicity property is lost in the element interiors [43].
**Lemma 3.1** (Nodal interpolant).: _The \(H_{0}^{1}\)-best approximation in the space \(\mathcal{V}_{D,p,0}^{h}\) in one dimension is nodally interpolatory at the element boundary nodes. For linear elements (\(p=1\)) this best approximation is monotonicity preserving, while for higher-order basis functions (\(p>1\)) monotonicity inside the elements is in general lost._
Next, we turn our attention to discontinuous approximations, i.e. \(\alpha=-1\). We select as penalty parameter \(\eta=6(p+1)^{2}/h\). The interior penalty-best approximation of the smooth step function is illustrated in Figure 5 for the approximation spaces \(\mathcal{V}_{D,1,-1}^{h}\) (discontinuous piecewise linears) and \(\mathcal{V}_{D,2,-1}^{h}\) (discontinuous piecewise quadratics). For this best approximation problem, we see that the numerical approximations \(\phi^{h}\) contain over- and undershoots near the sharp layer; the nodal exactness of the \(H_{0}^{1}\)-best approximation problem is not inherited. Furthermore, we observe that the average of the approximation \(\phi^{h}\) at the element boundaries coincides with the value of \(\phi\). This is a property of the interior penalty projector [40].
**Proposition 3.1** (Vanishing average error on element boundaries).: _The interior penalty best approximation \(\phi^{h}\in\mathcal{V}_{D,p,-1}^{h}\) of \(\phi\) in one-dimension satisfies the property:_
\[\{\!\{\phi^{h}\}\!\}=\phi\quad\text{at the element boundary nodes}, \tag{3.3}\]

_where \(\{\!\{\cdot\}\!\}\) denotes the average of the two one-sided limits at an element boundary._
### The Gibbs phenomenon in two dimensions
We consider best approximations of a two-dimensional step function \(\phi\in L^{2}(\Omega)\) on the square domain \(\Omega=(-1,1)^{2}\):
\[\phi=\phi(x,y)=\begin{cases}1&x-y>0\\ -1&x-y<0\end{cases}.\]
Again, we work with a smooth approximation \(\phi\in H^{1}(\Omega)=:\mathcal{V}\) of the step function:
\[\phi=\phi^{\epsilon}(x,y)=\tanh\left(\frac{x-y}{\epsilon}\right),\]
where \(\epsilon\ll 1\) is a smoothing parameter.
Analogous to the one-dimensional case, we begin with approximation spaces \(\mathcal{V}^{h}_{D,p,\alpha}\) consisting of continuous functions (\(\alpha\geq 0\)). Recall from the one-dimensional case that the \(L^{2}\)-best approximation contains over- and undershoots. This is also the case in higher dimensions, and we omit the visualization. We display the \(H^{1}_{0}\)-best approximation for the continuous finite element approximation spaces \(\mathcal{V}^{h}_{D,1,0},\mathcal{V}^{h}_{D,2,0}\) and \(\mathcal{V}^{h}_{D,2,1}\) in Figure 6. We observe the occurrence of over- and undershoots for each of the approximations.
Next, we consider discontinuous approximations (\(\alpha=-1\)). Analogous to the one-dimensional case, we solely consider the interior penalty-best approximation. We select as a penalty parameter \(\eta=2(2p+1)(2p+2)/h\). In Figure 7, we visualize the interior penalty best approximations for approximation spaces \(\mathcal{V}^{h}_{1,-1}\) and \(\mathcal{V}^{h}_{2,-1}\), i.e. for linear and quadratic discontinuous basis functions. Again, we observe over- and undershoots of the finite element approximation in both cases.
## 4 Eliminating the Gibbs phenomenon in one dimension
In this section we present the constraints for the elimination of the Gibbs phenomenon in finite element spaces in one dimension. We call these constraints the _Gibbs constraints_. To this purpose, we first describe the construction of our proposed Gibbs constraints in general approximation spaces in Section 4.1, and present the properties of the constraints in Section 4.2. Next, we advance the discussion to best approximation problems in Section 4.3. Finally, in Section 4.4 we apply the Gibbs constraints to finite element spaces of arbitrary continuity, and perform numerical experiments.
### Gibbs constraints
One of the challenges of dealing with the Gibbs phenomenon is the uncertainty in the level of locality that is required to identify the phenomenon. Pointwise evaluations of functions, or their derivatives, carry insufficient information to be able to infer the occurrence of the Gibbs phenomenon. On the other hand, the information that may be deduced from global evaluations, such as global integrals, is too coarse-grained to establish the existence of spurious oscillations on a local scale. In this
subsection, we construct a constraint for the elimination of the Gibbs phenomenon on a _given subdomain_. The selection of the subdomains then remains an important matter, which we discuss extensively in Section 4.4 in the context of finite element approximations.
Consider the one-dimensional simply connected domain \(\Omega\subset\mathbb{R}\), and let \(\phi^{*}:\Omega\to\mathbb{R}\) denote an approximation of \(\phi:\Omega\to\mathbb{R}\). The main ingredient in the elimination of the Gibbs phenomenon relies on the _fundamental theorem of Lebesgue integral calculus_, which we recall here.
**Theorem 4.1** (Fundamental theorem of Lebesgue integral calculus).: _Let \(\theta:\Omega\to\mathbb{R}\) be an absolutely continuous function, then \(\theta\) is differentiable almost everywhere and for each \(\omega=[x_{L},x_{R}]\subset\Omega\) we have_
\[\int_{x_{L}}^{x_{R}}\mathrm{D}\theta\ \mathrm{d}x=\theta_{R}-\theta_{L}, \tag{4.1}\]
with trace equalities \(\theta(x_{L})=\theta_{L}\) and \(\theta(x_{R})=\theta_{R}\), and \(\mathrm{D}\theta\in L^{1}(\Omega)\)._
The fundamental theorem communicates that the trace values (the right-hand side in (4.1)) are controlled by the integral. Still, the trace values do not provide any information on the oscillatory behavior of the function \(\theta\) inside \(\omega\). In contrast, the _total variation_ is a concept that does incorporate this.
**Definition 4.1** (Total variation).: Let \(\theta:\Omega\rightarrow\mathbb{R}\) be a given function, and let
\(P=\{x_{L}=x_{0},x_{1},\ldots,x_{N-1},x_{N}=x_{R}\}\) denote a partition of \(\omega=[x_{L},x_{R}]\subset\Omega\). The variation of \(\theta\) with respect to partition \(P\) is defined as:
\[V_{\omega,P}(\theta):=\sum_{i=0}^{N-1}|\theta(x_{i+1})-\theta(x_{i})|. \tag{4.2}\]
Denote by \(\mathfrak{P}\) the set of all possible partitions of \(\omega\). The total variation is given by:
\[V_{\omega}(\theta):=\sup_{P\in\mathfrak{P}}V_{\omega,P}(\theta). \tag{4.3}\]
The total variation \(V_{\omega}(\theta)\) represents a measure of the fluctuations of \(\theta\) on \(\omega\). A function \(\theta\) with the property \(V_{\omega}(\theta)<\infty\) is said to have _bounded variation_ and we write \(\theta\in\mathrm{BV}(\omega)\). An absolutely continuous function has bounded variation. In case \(\theta\) is a continuously differentiable function, the total variation \(V_{\omega}(\theta)\) may be evaluated as follows.
**Lemma 4.1** (Total variation of a continuously differentiable function).: _Let \(\theta\in\mathcal{C}^{1}(\Omega)\). The total variation of \(\theta\) on \(\omega\subset\Omega\) is given by:_
\[V_{\omega}(\theta)=\int_{\omega}|\mathrm{D}\theta|\;\;\mathrm{d}x. \tag{4.4}\]
Figure 7: The interior penalty-best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D;p,-1}\), with \(n_{\mathrm{el}}=8\times 8\), of the smooth two-dimensional step function, for different \(p\).
If the function \(\theta\) is only piecewise continuously differentiable, the expression in Lemma 4.1 needs to be augmented with jump terms, as is expressed in Lemma 4.2.
**Lemma 4.2** (Total variation piecewise-continuous function).: _Let \(\theta\in\mathrm{BV}(\Omega)\) be a function that has a continuous derivative on each \((a_{i},a_{i+1})\subset\Omega,i=0,\dots,M\) and jump discontinuities at \(a_{i},i=1,\dots,M\). Denote with \(a_{i}^{-},a_{i}^{+}\) the left and right limits of the discontinuity at \(a_{i}\). The total variation of \(\theta\) on \(\omega\subset\Omega\) is given by:_
\[V_{\omega}(\theta)=\sum_{i=0}^{M}\int_{a_{i}}^{a_{i+1}}\left|\mathrm{D}\theta \right|\ \mathrm{d}x+\sum_{i=1}^{M}\left|\theta(a_{i})-\theta(a_{i}^{-})\right|+ \left|\theta(a_{i}^{+})-\theta(a_{i})\right|. \tag{4.5}\]
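As a small illustration (our own sketch, not taken from the paper), the expression (4.5) can be evaluated directly for a discontinuous piecewise-linear function given by its element endpoint values; here we assume the pointwise value at an interface lies between the one-sided limits, so the two jump terms collapse into a single \(|\theta(a_{i}^{+})-\theta(a_{i}^{-})|\).

```python
import numpy as np

def total_variation_dg1(vals_left, vals_right):
    """Total variation (4.5) of a discontinuous piecewise-linear function;
    vals_left[e], vals_right[e] are its values at the left/right end of element e."""
    smooth = np.sum(np.abs(vals_right - vals_left))          # elementwise integrals of |D theta|
    jumps = np.sum(np.abs(vals_left[1:] - vals_right[:-1]))  # interior jump contributions
    return smooth + jumps

# Example: a monotone staircase from -1 to 1 over 4 elements has total variation 2.
vl = np.array([-1.0, -0.5, 0.0, 0.5])
vr = np.array([-0.5, 0.0, 0.5, 1.0])
print(total_variation_dg1(vl, vr))   # 2.0
```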
Next, we note that \(V_{\omega}\) is convex and satisfies a homogeneity property.
**Proposition 4.1** (Convexity and homogeneity of the total variation).: _Given the same assumptions on \(\theta\) as in Lemma4.2, the functional \(V_{\omega}\) is convex and satisfies the homogeneity property:_
\[\mathrm{d}V_{\omega}(\theta)(\theta)=V_{\omega}(\theta). \tag{4.6}\]
**Remark 4.1**.: A convex functional that satisfies the homogeneity property is termed a _variation entropy_[32]. Variation entropy functionals form the basis of an entropy stability theory for (hyperbolic) conservation laws called variation entropy theory.
The total variation of a monotonic function is related to its trace values in the following way.
**Lemma 4.3** (Total variation monotonic function).: _Let \(\omega=[x_{L},x_{R}]\subset\Omega\) and let \(\mathcal{C}_{D}^{0}(\Omega)\) denote the space of continuous functions \(\theta\) that satisfy trace equalities \(\theta(x_{L})=\theta_{L}\) and \(\theta(x_{R})=\theta_{R}\). For \(\theta\in\mathcal{C}_{D}^{0}(\Omega)\) we have_
\[V_{\omega}(\theta)\geq\left|\theta_{R}-\theta_{L}\right|, \tag{4.7}\]
_where equality only holds when \(\theta\) is monotonic._
Lemma 4.3 is an important ingredient in the design of the Gibbs constraints: it communicates that the jump of the trace values is controlled by the total variation.
Let us introduce the quantity
\[\mathscr{V}_{\phi,\omega}(\phi^{*}):=V_{\omega}(\phi^{*})-V_{\omega}(\phi). \tag{4.8}\]
With the aim of eliminating the Gibbs phenomenon, one could introduce the constraint \(\mathscr{V}_{\phi,\omega}(\phi^{*})\leq 0\). The disadvantage is that total variation does not take into account the sign of its argument:
\[V_{\omega}(-\theta)=V_{\omega}(\theta). \tag{4.9}\]
As a consequence, we have:

\[\mathscr{V}_{\phi,\omega}(-\phi)=0, \tag{4.10}\]

so the mirrored profile \(-\phi\), although a poor approximation of \(\phi\), would be deemed admissible by such a constraint.
We now generalize the element-based definition of the subdivision of the domain \(\Omega\), (2.14a), to the union of \(J\) disjoint general subdomains, i.e.
\[\bar{\Omega}:=\bigcup_{j=1}^{J}\omega_{j}, \tag{4.11}\]
with \(\omega_{j}\cap\omega_{k}=\emptyset\) for \(j\neq k\). Furthermore, we redefine the broken space (2.17) for this subdivision as:
\[H^{1}(\tilde{\Omega}):=\left\{v\in L^{2}(\Omega):v|_{\omega}\in H^{1}(\omega) \text{ for all }\omega\in\mathcal{T}_{\omega}\right\}, \tag{4.12}\]
with \(\mathcal{T}_{\omega}\) the collection of \(\omega_{j},j=1,\ldots J\).
Motivated by the above observation, we wish to find approximations that, besides bounding the size of the jump of the approximation, also carry information about the direction of the analytical solution \(\phi\). To this purpose we now propose the _Gibbs functional_ and the associated _Gibbs constraint_.
**Definition 4.2** (Gibbs functional).: The Gibbs functional of the function \(\phi^{*}\in H^{1}(\tilde{\Omega})\), on \(\omega=\omega_{j}\) (for some \(j\)) and with respect to the given function \(\phi\in H^{1}(\Omega)\), is defined as:
\[\mathscr{G}_{\phi,\omega}(\phi^{*}):=\int_{\omega}g_{\phi}(\phi^{*})\ \mathrm{d}x, \tag{4.13}\]
where the functional \(g_{\phi}\) is defined as:
\[g_{\phi}(\phi^{*}):=|\mathrm{D}\phi^{*}|-\mathrm{sgn}\left(\mathrm{D}\phi^{*} \right)\mathrm{D}\phi=-\mathrm{sgn}\left(\mathrm{D}\phi^{*}\right)\mathrm{D} \phi^{\prime}. \tag{4.14}\]
Here \(\mathrm{sgn}\) is the sign function (i.e. \(\mathrm{sgn}(t)=t/|t|\) for \(t\neq 0\) and \(\mathrm{sgn}(0)=0\)), and \(\phi^{\prime}:\Omega\to\mathbb{R}\) defined as \(\phi^{\prime}:=\phi-\phi^{*}\) denotes the error function.
**Definition 4.3** (Gibbs constraint).: The Gibbs constraint of the function \(\phi^{*}\in H^{1}(\tilde{\Omega})\), on \(\omega\) and with respect to the given function \(\phi\in H^{1}(\Omega)\), is defined as:
\[\mathscr{G}_{\phi,\omega}(\phi^{*})\leq 0. \tag{4.15}\]
Note that the incorporation of the correct sign may be recognized in the Gibbs constraint via the equivalence:
\[\left.\begin{aligned}|\mathrm{D}\phi^{*}|-|\mathrm{D}\phi|&\leq 0\\ \mathrm{sgn}\left(\mathrm{D}\phi^{*}\right)-\mathrm{sgn}\left(\mathrm{D}\phi\right)&=0\end{aligned}\right\}\quad\Longleftrightarrow\quad g_{\phi}(\phi^{*})\leq 0, \tag{4.16}\]
the integration of which provides the Gibbs constraint.
The objective is now to search for functions \(\phi^{*}\) as approximations of \(\phi\) that satisfy the Gibbs constraint on certain \(\omega\).
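As a concrete illustration (our own sketch; the function names and the quadrature order are assumptions), the Gibbs functional can be evaluated by Gauss quadrature once the derivatives \(\mathrm{D}\phi\) and \(\mathrm{D}\phi^{*}\) are available:

```python
import numpy as np

def gibbs_functional(dphi_star, dphi, x_left, x_right, n_quad=64):
    """Quadrature approximation of G_{phi,omega}(phi*) = int_omega |D phi*| - sgn(D phi*) D phi dx
    on omega = [x_left, x_right]; dphi_star and dphi return the derivatives at given points."""
    xi, w = np.polynomial.legendre.leggauss(n_quad)
    xq = 0.5 * (x_left + x_right) + 0.5 * (x_right - x_left) * xi
    wq = 0.5 * (x_right - x_left) * w
    ds = dphi_star(xq)
    return np.sum(wq * (np.abs(ds) - np.sign(ds) * dphi(xq)))

# phi = tanh((x - 0.5)/0.05) on omega = (0, 1); its total increase there is ~2.
dphi = lambda x: (1.0 - np.tanh((x - 0.5) / 0.05) ** 2) / 0.05
print(gibbs_functional(dphi, dphi, 0.0, 1.0))                              # ~0: perfect approximation
print(gibbs_functional(lambda x: np.full_like(x, 4.0), dphi, 0.0, 1.0))    # > 0: ramp steeper than phi's increase
print(gibbs_functional(lambda x: np.full_like(x, 0.5), dphi, 0.0, 1.0))    # < 0: gentler monotone ramp
```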
### Properties of the Gibbs constraint
It is the purpose of this subsection to discuss the properties of the Gibbs constraint and to establish its connection with the well-known concepts of monotonic solutions and the maximum principle.
We have the simple but important property that \(\phi\) as an approximation of itself satisfies the Gibbs constraint.
**Proposition 4.2** (Perfect approximation).: _The Gibbs functional vanishes for a perfect approximation (\(\phi^{*}=\phi\)):_
\[\mathscr{G}_{\phi,\omega}(\phi)=0. \tag{4.17}\]
Furthermore, we have the following lower bound of the Gibbs functional.
**Lemma 4.4** (Gibbs functional bound).: _The Gibbs functional satisfies the lower bound:_
\[\mathscr{G}_{\phi,\omega}(\phi^{*})\geq\mathscr{V}_{\phi,\omega}(\phi^{*}). \tag{4.18}\]
We now proceed with establishing connections between the Gibbs constraint and certain properties of the function approximation. We first provide a characterization of the Gibbs functional.
**Lemma 4.5** (Characterization Gibbs functional).: _Let \(\phi^{*}\in H^{1}(\tilde{\Omega})\) and \(\phi\in H^{1}(\Omega)\), and let \(\omega=[x_{L},x_{R}]\subset\Omega\) be given. Denote the locations of sign changes of \(\mathrm{D}\phi^{*}\) by \(x_{i},i=1,\ldots,N\) with \(x_{i}<x_{i+1}\). The form of Gibbs functional \(\mathscr{G}_{\phi,\omega}\) depends on the sign of \(\mathrm{D}\phi^{*}\) on \([x_{L},x_{1}]\) and the number of sign changes \(N\):_
1. _for_ \(N\) _odd and_ \(\mathrm{D}\phi^{*}\geq 0\) _on_ \([x_{L},x_{1}]\)_:_ \[\mathscr{G}_{\phi,\omega}(\phi^{*})=\phi^{\prime}(x_{L})+2\sum_{i=1}^{N}(-1) ^{i}\phi^{\prime}(x_{i})+\phi^{\prime}(x_{R}),\] (4.19)
2. _for_ \(N\) _odd and_ \(\mathrm{D}\phi^{*}\leq 0\) _on_ \([x_{L},x_{1}]\)_:_ \[\mathscr{G}_{\phi,\omega}(\phi^{*})=-\phi^{\prime}(x_{L})-2\sum_{i=1}^{N}(-1) ^{i}\phi^{\prime}(x_{i})-\phi^{\prime}(x_{R}),\] (4.20)
3. _for_ \(N\) _even and_ \(\mathrm{D}\phi^{*}\geq 0\) _on_ \([x_{L},x_{1}]\)_:_ \[\mathscr{G}_{\phi,\omega}(\phi^{*})=\phi^{\prime}(x_{L})+2\sum_{i=1}^{N}(-1) ^{i}\phi^{\prime}(x_{i})-\phi^{\prime}(x_{R}),\] (4.21)
4. _for_ \(N\) _even and_ \(\mathrm{D}\phi^{*}\leq 0\) _on_ \([x_{L},x_{1}]\)_:_ \[\mathscr{G}_{\phi,\omega}(\phi^{*})=-\phi^{\prime}(x_{L})-2\sum_{i=1}^{N}(-1) ^{i}\phi^{\prime}(x_{i})+\phi^{\prime}(x_{R}),\] (4.22) _where we recall_ \(\phi^{\prime}=\phi-\phi^{*}\)_._
A direct consequence of this characterization is the following lemma.
**Lemma 4.6** (Interpolatory monotonic approximation).: _An interpolatory monotonic approximation \(\phi^{*}\in H^{1}(\tilde{\Omega})\) of \(\phi\in H^{1}(\Omega)\) on \(\omega\subset\Omega\) satisfies the Gibbs
constraint:_
\[\mathscr{G}_{\phi,\omega}(\phi^{*})\leq 0. \tag{4.23}\]
Additionally, an approximation of a monotonic analytical profile that is free of the Gibbs phenomenon satisfies a bound on the trace values.
**Lemma 4.7** (Approximation of monotonic function).: _Suppose that \(\phi\in H^{1}(\Omega)\) is monotonically increasing (decreasing) on \(\omega=[x_{L},x_{R}]\subset\Omega\) and the approximation \(\phi^{*}\in H^{1}(\tilde{\Omega})\) satisfies the Gibbs constraint:_
\[\mathscr{G}_{\phi,\omega}(\phi^{*})\leq 0, \tag{4.24}\]
_then the increase (decrease) of \(\phi^{*}\) is bounded by the increase (decrease) of \(\phi\), i.e._
\[\phi^{*}(x_{R})-\phi^{*}(x_{L})\leq\phi(x_{R})-\phi(x_{L}) \qquad\text{(if $\phi$ is monotonically increasing)}, \tag{4.25a}\] \[\phi^{*}(x_{L})-\phi^{*}(x_{R})\leq\phi(x_{L})-\phi(x_{R}) \qquad\text{(if $\phi$ is monotonically decreasing)}. \tag{4.25b}\]
_The equality in (4.25) holds when (4.24) holds with equality._
Proof.: We omit the proof of the general case and consider instead two simple cases. Without loss of generality, assume that \(\phi\) is monotonically increasing. Suppose first that \(\mathrm{D}\phi^{*}\) has no sign changes, i.e. \(\phi^{*}\) is monotonically increasing on \(\omega=[x_{L},x_{R}]\). Then, (4.21) reduces to:
\[\mathscr{G}_{\phi,\omega}(\phi^{*})=(\phi^{*}(x_{R})-\phi^{*}(x_{L}))-(\phi( x_{R})-\phi(x_{L})), \tag{4.26}\]
which is negative if and only if (4.25a) holds. Now suppose that \(\mathrm{D}\phi^{*}\) has a single change of sign, say at \(x_{1}\), and that \(\mathrm{D}\phi^{*}>0\) for \(x<x_{1}\). Additionally, since we may shift \(\phi^{*}\) by \(\phi^{\prime}(x_{L})\), we take \(\phi^{\prime}(x_{L})=0\). It is easy to verify that \(\phi^{*}(x_{1})\leq\phi(x_{R})\), as otherwise \(\mathscr{G}_{\phi,\omega}(\phi^{*})>0\). Since \(\phi^{*}\) is decreasing on \([x_{1},x_{R}]\) and \(\phi\) is increasing, we immediately have \(\phi^{\prime}(x_{R})\geq 0\). The case \(\mathrm{D}\phi^{*}<0\) for \(x<x_{1}\) follows from a similar argument.
Next, we introduce the classical definition of the maximum principle.
**Definition 4.4** (Maximum principle).: An approximation \(\phi^{*}:\Omega\to\mathbb{R}\) of \(\phi:\Omega\to\mathbb{R}\) satisfies the maximum principle on \(\omega\subset\Omega\) if and only if it does not exceed the bounds of \(\phi\) on \(\omega\):
\[\inf_{\omega}\phi^{*} \geq\inf_{\omega}\phi, \tag{4.27a}\] \[\sup_{\omega}\phi^{*} \leq\sup_{\omega}\phi. \tag{4.27b}\]
**Theorem 4.2** (Interpolatory approximation of monotonic function).: _Suppose that \(\phi\in H^{1}(\Omega)\) is monotonic on \(\omega=[x_{L},x_{R}]\subset\Omega\) and \(\phi^{*}\in H^{1}(\tilde{\Omega})\) is an interpolatory approximation, i.e. \(\phi^{*}(x_{L})=\phi(x_{L})\) and \(\phi^{*}(x_{R})=\phi(x_{R})\). We have the following results:_
1. \(\mathscr{G}_{\phi,\omega}(\phi^{*})\geq 0\)_,_
2. \(\phi^{*}\) _is a monotonic function if and only if_ \(\mathscr{G}_{\phi,\omega}(\phi^{*})=0\)_,_
3. _if_ \(\mathscr{G}_{\phi,\omega}(\phi^{*})=0\) _then_ \(\phi^{*}\) _satisfies the maximum principle on_ \(\omega\)_._
Proof.: 1. Without loss of generality, assume \(\phi\) is increasing. In case \(\mathrm{D}\phi^{*}\) does not change sign, we invoke (4.26) of Lemma 4.7 and obtain \(\mathscr{G}_{\phi,\omega}(\phi^{*})=0\). In the other case, denote the locations of sign changes of \(\mathrm{D}\phi^{*}\) as \(x_{i}\in\omega,i=1,\ldots,N\) with \(x_{i}\leq x_{i+1}\). Lemma 4.5 provides:
* for \(\mathrm{D}\phi^{*}\geq 0\) on \([x_{L},x_{1}]\): \[\mathscr{G}_{\phi,\omega}(\phi^{*})=2\sum_{i=1}^{N}(-1)^{i}\phi^{\prime}(x_{ i}),\] (4.28)
* for \(\mathrm{D}\phi^{*}\leq 0\) on \([x_{L},x_{1}]\): \[\mathscr{G}_{\phi,\omega}(\phi^{*})=-2\sum_{i=1}^{N}(-1)^{i}\phi^{\prime}(x_ {i}),\] (4.29)
where we recall \(\phi^{\prime}=\phi-\phi^{*}\). To show non-negativity of (4.28) and (4.29), one has to consider various cases. For example, for \(N=1\) we have \(\mathscr{G}=2\left|\phi^{\prime}(x_{1})\right|\geq 0\). We omit a detailed proof of the general case.
2. '\(\Rightarrow\)': Suppose \(\phi^{*}\) is a monotonic function. Without loss of generality, assume \(\phi\) is increasing. Since \(\phi^{*}\) is a monotonic function, the sums in (4.29) and (4.28) are empty and the expressions vanish.
'\(\Leftarrow\)': Suppose that \(\mathscr{G}_{\phi,\omega}(\phi^{*})=0\). Lemma 4.4 implies \(\mathscr{V}_{\phi,\omega}(\phi^{*})\leq 0\). Without loss of generality, suppose that \(\phi\) is monotonically increasing. We arrive at:
\[V_{\omega}(\phi^{*})\leq\phi(x_{R})-\phi(x_{L}). \tag{4.30}\]
Since \(\phi^{*}\) is interpolatory, Lemma 4.3 implies that \(\phi^{*}\) is monotonically increasing.
3. If \(\mathscr{G}_{\phi,\omega}(\phi^{*})=0\) then \(\phi^{*}\) is monotonic via claim 2, and (4.25a) provides the maximum principle via:
\[\sup_{\omega}\phi^{*} =\phi^{*}(x_{R})=\phi(x_{R})=\sup_{\omega}\phi, \tag{4.31a}\] \[\inf_{\omega}\phi^{*} =\phi^{*}(x_{L})=\phi(x_{L})=\inf_{\omega}\phi. \tag{4.31b}\]
We have the following connection between the Gibbs constraints and the maximum principle.
**Lemma 4.8** (Gibbs constraints and maximum principle).: _Let the analytical profile \(\phi\in H^{1}(\Omega)\) be monotonic on each \(\omega_{j},j=1,\ldots,J\), and let \(\phi^{*}\in H^{1}(\tilde{\Omega})\) be an approximation that satisfies \(\mathscr{G}_{\phi,\omega_{j}}(\phi^{*})\leq 0,j=1,\ldots,J\), such that \(\phi^{*}\) is interpolatory on \(\partial\Omega\). Then \(\phi^{*}\) satisfies the maximum principle on \(\Omega\)._
In order to preclude the Gibbs phenomenon on the entire domain, the strategy is to require \(\mathscr{G}_{\phi,\omega_{j}}(\phi^{*})\leq 0\), for \(j=1,\ldots,J\). We note that the practical applicability of
this strategy relies on an appropriately chosen subdivision of \(\Omega\). In the subsequent subsection, we discuss the Gibbs constraints in the context of best approximation problems. We return to the domain subdivision problem in the context of finite elements in Section 4.4.
### Best approximation problems under Gibbs constraints
Let now \(\phi\in H^{1}(\Omega)\) be given and consider the best approximation problem:
\[\phi^{*}=\operatorname*{arginf}_{\theta^{*}\in\mathcal{K}}\,\|\phi-\theta^{*} \|_{\mathcal{H}},\] (4.32a) where the feasible set is defined as: \[\mathcal{K}:=\left\{\phi^{*}\in H^{1}(\tilde{\Omega}):\ \ \mathscr{G}_{\phi,\omega_{j}}(\phi^{*})\leq 0,\ j=1,...,J\right\}. \tag{4.32b}\]
We note that, in general, \(\mathcal{K}\) has no strictly feasible point. For example, in the case \(\phi\equiv 0\) in \(\Omega\), we have \(\mathscr{G}_{\phi,\omega_{j}}(\phi^{*})=V_{\omega_{j}}(\phi^{*})\geq 0\) for \(j=1,\ldots,J\). In the following we exclude this trivial case.
The standard techniques to study best approximation problems are the gradient-based methods. We remark, however, that the Gateaux derivative of \(\phi^{*}\to\mathscr{G}_{\phi,\omega_{j}}(\phi^{*})\) does not exist due to the occurrence of the sign function. To permit the adoption of standard gradient methods, we regularize the nondifferentiable constraint function as follows. We introduce the parameter \(\varepsilon\in\mathbb{R}\) and define the differentiable regularizations \(\left|\cdot\right|_{\varepsilon}:\mathbb{R}\to\mathbb{R}_{+}\) and \(\operatorname*{sgn}_{\varepsilon}:\mathbb{R}\to\mathbb{R}\) as:
\[\left|r\right|_{\varepsilon} :=\left(r^{2}+\varepsilon^{2}\right)^{1/2}, \tag{4.33a}\] \[\operatorname*{sgn}_{\varepsilon}(r) :=r\left|r\right|_{\varepsilon}^{-1}, \tag{4.33b}\]
which satisfy the relation:
\[\operatorname{D}\left|r\right|_{\varepsilon}=\operatorname*{sgn}_{ \varepsilon}(r). \tag{4.34}\]
Next, we introduce the regularized functional
\[\mathscr{G}_{\phi,\omega}^{\varepsilon}(\phi^{*}):=\int_{\omega}g_{\phi}^{ \varepsilon}(\phi^{*})\ \mathrm{d}x, \tag{4.35}\]
where the functional \(g_{\phi}^{\varepsilon}\) is defined as:
\[g_{\phi}^{\varepsilon}(\phi^{*}):=\left|\operatorname{D}\!\phi^{*}\right|_{ \varepsilon}-\operatorname*{sgn}_{\varepsilon}\left(\operatorname{D}\!\phi^ {*}\right)\operatorname{D}\!\phi=-\operatorname*{sgn}_{\varepsilon}\left( \operatorname{D}\!\phi^{*}\right)\operatorname{D}\!\phi^{\prime}+\varepsilon^{ 2}\left|\operatorname{D}\!\phi^{*}\right|_{\varepsilon}^{-1}. \tag{4.36}\]
It is now clear that the regularized functional \(\mathscr{G}_{\phi,\omega}^{\varepsilon}(\phi^{*})\) is differentiable in the Gateaux sense. The Gateaux derivative of \(\mathscr{G}_{\phi,\omega}^{\varepsilon}(\phi^{*})\), denoted \(\mathrm{d}\mathscr{G}_{\phi,\omega}^{\varepsilon}(\phi^{*})(w)\), is given by:
\[\mathrm{d}\mathscr{G}_{\phi,\omega}^{\varepsilon}(\phi^{*})(w)=\int_{\omega_{ j}}\left|\operatorname{D}\!\phi^{*}\right|_{\varepsilon}^{-1}\operatorname{D}\! \phi^{*}\mathrm{D}w-\varepsilon^{2}\left|\operatorname{D}\!\phi^{*}\right|_{ \varepsilon}^{-3}\operatorname{D}\!\phi\mathrm{D}w\ \mathrm{d}x. \tag{4.37}\]
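The regularized quantities are straightforward to implement. The sketch below is our own (the hat-function parametrization, mesh and parameter values are assumptions): it evaluates \(\mathscr{G}^{\varepsilon}_{\phi,\omega}\) for a continuous piecewise-linear \(\phi^{*}\) on \(\omega=(0,1)\) and checks the Gateaux derivative (4.37) against a central finite difference.

```python
import numpy as np

eps_reg = 1e-2                                  # regularization parameter epsilon
phi = lambda x: np.tanh((x - 0.5) / 0.05)
nodes = np.linspace(0.0, 1.0, 11)               # uniform mesh, 10 elements
h = nodes[1] - nodes[0]

def reg_abs(r): return np.sqrt(r**2 + eps_reg**2)    # |r|_eps
def reg_sgn(r): return r / reg_abs(r)                # sgn_eps(r) = r |r|_eps^{-1}

def G_eps(c):
    """Regularized Gibbs functional for the C^0 piecewise-linear function with nodal
    values c; D phi* is constant per element, so the integrals are closed-form."""
    s = np.diff(c) / h
    dphi_int = np.diff(phi(nodes))                   # exact int_K D phi dx per element
    return np.sum(h * reg_abs(s) - reg_sgn(s) * dphi_int)

def dG_eps(c, B):
    """Gateaux derivative (4.37) in the direction of the hat function N_B."""
    s = np.diff(c) / h
    dphi_int = np.diff(phi(nodes))
    val = 0.0
    for e, dw in ((B - 1, 1.0 / h), (B, -1.0 / h)):  # D N_B on the two adjacent elements
        if 0 <= e < len(s):
            val += dw * (h * reg_sgn(s[e]) - eps_reg**2 * dphi_int[e] / reg_abs(s[e])**3)
    return val

c = np.sin(3 * nodes)                                # an arbitrary candidate phi*
B, delta = 4, 1e-6
cp, cm = c.copy(), c.copy()
cp[B] += delta; cm[B] -= delta
fd = (G_eps(cp) - G_eps(cm)) / (2 * delta)
print(fd, dG_eps(c, B))                              # the two values should agree closely
```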
We have the following property regarding the convexity of the regularized Gibbs
functional.
**Proposition 4.3** (Convexity of Gibbs functional).: _The functional \(g^{\varepsilon}_{\phi}\) is quasi-convex:_
\[g^{\varepsilon}_{\phi}(\zeta\phi^{*}_{1}+(1-\zeta)\phi^{*}_{2})\leq\max\left\{g^{\varepsilon}_{\phi}(\phi^{*}_{1}),g^{\varepsilon}_{\phi}(\phi^{*}_{2})\right\}, \tag{4.38}\]
_for all \(\phi^{*}_{1},\phi^{*}_{2}\in H^{1}(\tilde{\Omega})\), \(\zeta\in[0,1]\). The functional \(\mathscr{V}_{\phi,\omega}\) is convex, but \(\mathscr{G}^{\varepsilon}_{\phi,\omega}\) is in general not (quasi-)convex._
We now consider the best approximation problem:
\[\phi^{*}=\underset{\theta^{*}\in\mathcal{K}^{\varepsilon}}{\mathrm{arginf}} \left\|\phi-\theta^{*}\right\|_{\mathcal{H}},\] (4.39a) where the regularized feasible set is defined as: \[\mathcal{K}^{\varepsilon}:=\left\{\phi^{*}\in H^{1}(\tilde{\Omega}):\ \ \mathscr{G}^{ \varepsilon}_{\phi,\omega_{j}}(\phi^{*})\leq 0,\ j=1,...,J\right\}. \tag{4.39b}\]
A consequence of Proposition 4.3 is that the feasible set \(\mathcal{K}^{\varepsilon}\) is not convex. This is a result of the occurrence of the (regularized) sign function in \(g^{\varepsilon}_{\phi}\).
**Remark 4.2**.: The feasible set determined by (a regularized) functional
\[\left\{\phi^{*}\in H^{1}(\tilde{\Omega}):\ \ \mathscr{V}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{*})\leq 0,j=1,...,J\right\}, \tag{4.40}\]
with
\[\mathscr{V}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{*})=\int_{\omega_{j}}|\mathrm{D}\phi^{*}|_{\varepsilon}-|\mathrm{D}\phi|_{\varepsilon}\ \mathrm{d}x, \tag{4.41}\]
is convex. Furthermore, note that quasi-convexity of the functional is sufficient for the associated feasible set to be convex.
We now introduce the first-order optimality Karush-Kuhn-Tucker (KKT) conditions of the constrained best approximation problem (4.39). We note that the solution of the KKT conditions is only guaranteed to be a local optimum due to the lack of convexity of the feasible set \(\mathcal{K}^{\varepsilon}\). Replacing \(\mathcal{K}^{\varepsilon}\) by the feasible set from (4.40) would imply global optimality.
**Theorem 4.3** (Karush-Kuhn-Tucker conditions).: _The function \(\phi^{*}\in\mathcal{K}^{\varepsilon}\) is a local optimum of the problem (4.39) if and only if there exist Lagrange multipliers \(\lambda_{j}\in\mathbb{R}\) (\(j=1,...,J\)) such that the following Karush-Kuhn-Tucker (KKT) conditions hold:_
_Stationarity:_
_Find \(\phi^{*}\in H^{1}(\tilde{\Omega}),\lambda_{j}\in\mathbb{R}\) for \(j=1,...,J\) such that_
\[(\phi^{*}-\phi,w)_{\mathcal{H}}+\sum_{j=1}^{J}\lambda_{j}\mathrm{d}\mathscr{G }^{\varepsilon}_{\phi,\omega_{j}}(\phi^{*})(w)=0\ \ \ \text{for all}\ w\in H^{1}(\tilde{\Omega}), \tag{4.42a}\]
_Primal feasibility:_
\[\mathscr{G}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{*})\leq 0\quad\text{ for }j=1,...,J, \tag{4.42b}\]
_Dual feasibility:_
\[\lambda_{j}\geq 0\quad\text{ for }j=1,...,J, \tag{4.42c}\]
_Complementary slackness:_
\[\lambda_{j}\mathscr{G}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{*})=0\quad\text{ for }j=1,...,J. \tag{4.42d}\]
**Proposition 4.4** (Homogeneity Gibbs functional).: _The functional \(\mathscr{G}^{\varepsilon}_{\phi,\omega}\) satisfies the property:_
\[\lim_{\varepsilon\to 0}\mathrm{d}\mathscr{G}^{\varepsilon}_{\phi,\omega}( \phi^{*})(\phi^{*})=V_{\omega}(\phi^{*})\geq 0. \tag{4.43}\]
Proof.: Substitution of \(w=\phi^{*}\in H^{1}(\tilde{\Omega})\) into (4.37) provides
\[\mathrm{d}\mathscr{G}^{\varepsilon}_{\phi,\omega}(\phi^{*})(\phi^{*})=\int_{\omega}\mathrm{d}g^{\varepsilon}_{\phi}(\phi^{*})(\phi^{*})\ \mathrm{d}x, \tag{4.44a}\]
with
\[\mathrm{d}g^{\varepsilon}_{\phi}(\phi^{*})(\phi^{*})=\left|\mathrm{D}\phi^{*}\right|_{\varepsilon}-\varepsilon^{2}\left|\mathrm{D}\phi^{*}\right|_{\varepsilon}^{-3}\left(\left|\mathrm{D}\phi^{*}\right|_{\varepsilon}^{2}+\mathrm{D}\phi\,\mathrm{D}\phi^{*}\right). \tag{4.44b}\]
The integrand \(\mathrm{d}g^{\varepsilon}_{\phi}(\phi^{*})(\phi^{*})\) vanishes for \(\mathrm{D}\phi^{*}=0\). On the other hand, for \(\mathrm{D}\phi^{*}\neq 0\) we have
\[\lim_{\varepsilon\to 0}\mathrm{d}g^{\varepsilon}_{\phi}(\phi^{*})(\phi^{*})=\left|\mathrm{D}\phi^{*}\right|\geq 0. \tag{4.45}\]
Integration over \(\omega\) then yields (4.43).
**Lemma 4.9** (Positivity).: _The solution of the optimization problem (4.32) satisfies the following positivity property:_
\[(\phi^{\prime},\phi^{*})_{\mathcal{H}}\geq 0. \tag{4.46}\]
Proof.: Substitution of \(w=\phi^{*}\) into the stationarity condition (4.42a) provides:
\[-\left(\phi^{\prime},\phi^{*}\right)_{\mathcal{H}}+\sum_{j=1}^{J}\lambda_{j}\mathrm{d}\mathscr{G}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{*})(\phi^{*})=0, \tag{4.47}\]
where we have employed the substitution \(\phi^{\prime}=\phi-\phi^{*}\). The result now follows from taking the limit \(\varepsilon\to 0\), utilizing Proposition 4.4 and invoking the dual feasibility property (4.42c).
### Best approximations in finite element spaces
In this subsection, we seek for finite element approximations that satisfy the Gibbs constraints. Consider the best approximation problem:
\[\phi^{h}=\underset{\theta^{h}\in\mathcal{K}_{p,\alpha}}{\operatorname{arginf}}\|\phi-\theta^{h}\|_{\mathcal{H}}, \tag{4.48a}\]
where the feasible set \(\mathcal{K}_{p,\alpha}\) for a finite element approximation space of polynomial degree \(p\) and regularity \(\alpha\) is defined as:
\[\mathcal{K}_{p,\alpha}:=\left\{\phi^{h}\in\mathcal{V}_{D;p,\alpha}^{h}:\ \ \mathscr{G}_{\phi,\omega_{j}}(\phi^{h})\leq 0,j=1,...,J\right\}. \tag{4.48b}\]
To proceed, we regularize the non-differentiable constraint according to (4.35)-(4.36) and introduce the KKT conditions of the regularized problem.
_Stationarity:_
\[\text{Find }\phi^{h}\in\mathcal{V}_{D,p,\alpha}^{h},\lambda_{j}\in \mathbb{R}\text{ for }\ j=1,...,J\text{ such that }\] \[\left(\phi^{h}-\phi,w^{h}\right)_{\mathcal{H}}+\sum_{j=1}^{J} \lambda_{j}\mathrm{d}\mathcal{G}_{\phi,\omega_{j}}^{\varepsilon}(\phi^{h})(w^ {h})=0\ \ \ \text{for all }w^{h}\in\mathcal{V}_{0,p,\alpha}^{h},\] (4.49a) _Primal feasibility:_ \[\mathcal{G}_{\phi,\omega_{j}}^{\varepsilon}(\phi^{h})\leq 0\ \ \ \ \text{for }j=1,...,J,\] (4.49b) _Dual feasibility:_ \[\lambda_{j}\geq 0\ \ \ \ \text{for }j=1,...,J,\] (4.49c) _Complementary slackness:_ \[\lambda_{j}\mathcal{G}_{\phi,\omega_{j}}^{\varepsilon}(\phi^{h})=0\ \ \ \ \text{for }j=1,...,J. \tag{4.49d}\]
The problem (4.49) takes the following algebraic form:
_Find_ \(\mathbf{\phi}^{h},\mathbf{\lambda}\) such that:
\[\mathbf{M}\mathbf{\phi}^{h} =\mathbf{M}\mathbf{\phi}-\mathbf{\lambda}^{T}\mathbf{G}(\mathbf{\phi}^{h}), \tag{4.50a}\] \[\mathbf{g}(\mathbf{\phi}^{h}) \leq 0,\] (4.50b) \[\mathbf{\lambda} \geq 0,\] (4.50c) \[\mathbf{\lambda}^{T}\mathbf{g}(\mathbf{\phi}^{h}) =0, \tag{4.50d}\]
where the matrices \(\mathbf{M}=[M_{AB}]\in\mathbb{R}^{n_{\mathrm{dof}}\times n_{\mathrm{dof}}}\) and \(\mathbf{G}(\mathbf{\phi}^{h})=[G_{jB}]\in\mathbb{R}^{J\times n_{\mathrm{dof}}}\) are given by:

\[M_{AB} =(N_{A},N_{B})_{\mathcal{H}}, \tag{4.51a}\] \[G_{jB} =\mathrm{d}\mathscr{G}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{h})(N_{B}), \tag{4.51b}\]

and the vectors are \(\mathbf{g}(\mathbf{\phi}^{h})=[\mathscr{G}^{\varepsilon}_{\phi,\omega_{j}}(\phi^{h})]\in\mathbb{R}^{J}\) and \(\mathbf{\lambda}=[\lambda_{j}]\in\mathbb{R}^{J}\). Here, \(n_{\mathrm{dof}}\) denotes the number of degrees of freedom.
**Remark 4.3** (Computation constrained solutions).: In this article, we use carefully selected examples that allow the construction of solutions of the constrained best approximation problem (4.48). The computation of constrained solutions with standard gradient-based methods is in general very difficult. There are
a number of challenges. First, in some situations the feasible solution set may be very small. It can even occur that the feasible set consists of a single function (see Remark 4.4). Such a solution is extremely difficult to find with gradient-based methods. Second, the problem is in general non-convex. As a consequence, gradient-based methods may get stuck in local optima. This excludes a large class of powerful methodologies from convex optimization (that often rely on the KKT conditions). Third, the problem is highly nonlinear. This means that standard Newton-Raphson linearization methods often do not converge, and one has to work with less efficient approaches such as quasi-Newton type methods. Finally, we note that similar issues occur in the computation of \(L^{q}\)-best approximations when taking \(q\to 1\), and especially when \(q=1\).
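To illustrate what such a gradient-based attempt can look like in practice, the following sketch (entirely our own, not the authors' implementation) feeds the regularized element-wise Gibbs constraints to scipy's SLSQP solver for the smooth step of Section 3.1, using continuous piecewise linears and, for simplicity, the plain \(L^{2}\) objective instead of the \(H^{1}_{0}\) or interior penalty forms used in the figures. The small slack of \(h\varepsilon\) is an ad hoc choice of ours, needed because the regularized integrand is strictly positive on elements where \(\mathrm{D}\phi\approx 0\); SLSQP expects inequality constraints in the form \(\texttt{fun}(c)\geq 0\).

```python
import numpy as np
from scipy.optimize import minimize

eps_layer, eps_reg = 1e-3, 1e-3
phi = lambda x: np.tanh((x - 0.5) / eps_layer)
n_el = 8
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = nodes[1] - nodes[0]
xi, w = np.polynomial.legendre.leggauss(12)

def objective(c):
    """||phi - phi^h||_{L^2(0,1)}^2 for the C^0 piecewise linear with nodal values c."""
    total = 0.0
    for e in range(n_el):
        xq = 0.5 * (nodes[e] + nodes[e + 1]) + 0.5 * h * xi
        wq = 0.5 * h * w
        phih = c[e] * (nodes[e + 1] - xq) / h + c[e + 1] * (xq - nodes[e]) / h
        total += np.sum(wq * (phi(xq) - phih) ** 2)
    return total

def gibbs_constraint(c, e):
    """Slack-relaxed regularized constraint on K_e:  h*eps - G^eps_{phi,K_e}(phi^h) >= 0."""
    s = (c[e + 1] - c[e]) / h
    s_abs = np.sqrt(s ** 2 + eps_reg ** 2)
    dphi_int = phi(nodes[e + 1]) - phi(nodes[e])
    return h * eps_reg - (h * s_abs - (s / s_abs) * dphi_int)

cons = [{"type": "ineq", "fun": gibbs_constraint, "args": (e,)} for e in range(n_el)]
c0 = phi(nodes)                                   # nodal interpolant: a feasible start
res = minimize(objective, c0, method="SLSQP", constraints=cons)
print(res.success, np.round(res.x, 3))            # ideally stays (nearly) free of over/undershoots
```

Even on this small example, the solver's behaviour can depend noticeably on the starting point and on the regularization and slack parameters, consistent with the difficulties listed in the remark above.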
We start with discontinuous approximation spaces (\(\alpha=-1\)). In order to apply the Gibbs constraints in practice, the subdomains \(\omega_{j}\) need to be selected. We select the subsets \(\omega_{j}\subset\Omega,j=1,\ldots,J\) as the finite elements: \(\omega_{j}=K_{j}\). Note that the set \(\mathcal{K}_{p,-1}\) is not empty (it contains at least all piecewise constants). The feasible set is convex for discontinuous piecewise linear basis functions (recall that it is in general not convex).
**Theorem 4.4** (Convexity of \(\mathcal{K}_{1,-1}\)).: _The feasible set \(\mathcal{K}_{1,-1}\) is convex._

Proof.: Let \(\phi_{1}^{h},\phi_{2}^{h}\in\mathcal{K}_{1,-1}\) be given. Fix an element \(K_{i}=(x_{i},x_{i+1})\); on this element the functions \(\phi_{1}^{h}\) and \(\phi_{2}^{h}\) have a representation \(\phi_{1}^{h}=a_{1}x+b_{1}\) and \(\phi_{2}^{h}=a_{2}x+b_{2}\) for some scalars \(a_{1},a_{2},b_{1},b_{2}\in\mathbb{R}\). Since \(\phi_{1}^{h},\phi_{2}^{h}\in\mathcal{K}_{1,-1}\), we now have:
\[\mathscr{G}_{\phi,K_{i}}(\phi_{1}^{h})=\int_{x_{i}}^{x_{i+1}}\left|\mathrm{D}\phi_{1}^{h}\right|-\mathrm{sgn}\left(\mathrm{D}\phi_{1}^{h}\right)\mathrm{D}\phi\ \mathrm{d}x=(x_{i+1}-x_{i})|a_{1}|-\mathrm{sgn}\left(a_{1}\right)(\phi(x_{i+1})-\phi(x_{i}))\leq 0, \tag{4.52a}\]
\[\mathscr{G}_{\phi,K_{i}}(\phi_{2}^{h})=\int_{x_{i}}^{x_{i+1}}\left|\mathrm{D}\phi_{2}^{h}\right|-\mathrm{sgn}\left(\mathrm{D}\phi_{2}^{h}\right)\mathrm{D}\phi\ \mathrm{d}x=(x_{i+1}-x_{i})|a_{2}|-\mathrm{sgn}\left(a_{2}\right)(\phi(x_{i+1})-\phi(x_{i}))\leq 0. \tag{4.52b}\]
Without loss of generality, we assume \(\phi(x_{i+1})-\phi(x_{i})>0\). From (4.52), we find \(a_{1}>0\) and \(a_{2}>0\), and we can write:
\[(x_{i+1}-x_{i})a_{1}-(\phi(x_{i+1})-\phi(x_{i}))\leq 0, \tag{4.53a}\] \[(x_{i+1}-x_{i})a_{2}-(\phi(x_{i+1})-\phi(x_{i}))\leq 0. \tag{4.53b}\]
Now, let \(\zeta\in[0,1]\) and define \(\phi_{\zeta}^{h}=\zeta\phi_{1}^{h}+(1-\zeta)\phi_{2}^{h}\). Convexity is then a direct consequence of (4.53):
\[\mathscr{G}_{\phi,K_{i}}(\phi_{\zeta}^{h}) =\int_{x_{i}}^{x_{i+1}}\left|\mathrm{D}\phi_{\zeta}^{h}\right|- \mathrm{sgn}\left(\mathrm{D}\phi_{\zeta}^{h}\right)\mathrm{D}\phi\mathrm{d}x \tag{4.54}\] \[=(x_{i+1}-x_{i})(\zeta a_{1}+(1-\zeta)a_{2})-(\phi(x_{i+1})-\phi (x_{i}))\] \[\leq 0.\]
Consider now the best approximation problem (4.48a)-(4.48b) with the interior penalty optimality (2.18)-(2.19), subject to the element-wise Gibbs constraints. Note that the Gateaux derivatives \(\mathrm{d}\mathscr{G}_{\phi,\omega_{j}}^{\varepsilon}\) are linearly independent. Therefore, the Lagrange multiplier \(\lambda_{j}\) solely depends on quantities defined on \(K_{j}\). We visualize the interior penalty-best approximation \(\phi^{h}\in\mathcal{V}_{D;p,-1}^{h}\) of the smooth step function, subject to the element-wise Gibbs constraints, in Figure 8 for \(p=1,2\).
We observe that the constrained interior penalty-best approximations do not show over- or undershoots. Note that this property is not valid in general. Not all feasible solutions are monotonic, since the addition of a piecewise constant does not alter the Gibbs functional. Furthermore, both approximations deviate by a small piecewise constant away from the sharp layer. This behavior diminishes as the penalty parameter \(\eta\) is increased.
Next, we focus on the case \(\alpha=0\), i.e. continuous approximation spaces. Again, we take \(\omega_{j}=K_{j}\). It is easy to see that \(\mathcal{K}_{p,0}\) is not empty. Namely, \(\mathcal{K}_{1,0}\) contains the piecewise linear interpolant, and \(\mathcal{K}_{1,0}\subset\mathcal{K}_{p,0}\) for \(p\geq 1\). Considering the best approximation problem (4.48a)-(4.48b) with \(\mathcal{H}=H_{0}^{1}(\Omega)\), we have the following lemma.
**Lemma 4.10** (\(H_{0}^{1}\)-orthogonality continuous linears).: _Let \(\phi^{h}\in\mathcal{V}_{D,1,0}^{h}\) be the \(H_{0}^{1}\)-best approximation of \(\phi\in\mathcal{V}\). We have the property:_
\[\int_{K_{i}}\mathrm{D}\phi^{h}\mathrm{D}\phi^{\prime}\ \mathrm{d}x=0, \tag{4.55}\]
_for all elements \(K_{i}=(x_{L,i},x_{R,i}),i=1,\ldots,n_{\mathrm{el}}\)._
Figure 8: The interior penalty-best approximation \(\phi^{h}\in\mathcal{V}_{D;p,-1}^{h}\), with \(n_{\mathrm{el}}=8\), of the smooth step function \(\phi=\phi_{0.5}\), subject to element-wise Gibbs constraints, for different \(p\).
**Proof.** Applying the first Green's identity we find:
\[\int_{K_{i}}\mathrm{D}\phi^{h}\mathrm{D}\phi^{\prime}\ \mathrm{d}x=\int_{ \partial K_{i}}n\mathrm{D}\phi^{h}\phi^{\prime}\ \mathrm{d}a-\int_{K_{i}}\mathrm{D}^{2}\phi^{h}\phi^{\prime}\ \mathrm{d}x, \tag{4.56}\]
with \(n\) the outward unit normal. Noting that for piecewise linear polynomials the second term vanishes, we are left with:
\[\int_{K_{i}}\mathrm{D}\phi^{h}\mathrm{D}\phi^{\prime}\ \mathrm{d}x=\phi^{\prime}(x _{R,i})\mathrm{D}\phi^{h}(x_{R,i})-\phi^{\prime}(x_{L,i})\mathrm{D}\phi^{h}(x_ {L,i}). \tag{4.57}\]
Recalling now from Lemma 3.1 that the \(H^{1}_{0}\)-projector provides nodally exact solutions (i.e. \(\phi^{\prime}(x_{R,i})=\phi^{\prime}(x_{L,i})=0\)) completes the proof.
**Lemma 4.11** (Vanishing Lagrange multipliers).: _Suppose that the approximation satisfies homogeneous boundary conditions, \(\phi^{h}\in\mathcal{V}^{h}_{0,1,0}\). The Lagrange multiplier \(\lambda_{j}\) in the KKT conditions (4.49) vanishes if \(\phi^{h}\) is not constant on \(\omega_{j}\), for \(\varepsilon\to 0\)._
**Proof.** Noting that the \(H^{1}_{0}\)-best approximation is monotonic and interpolatory, Lemma 4.6 ensures satisfaction of the Gibbs constraints \(\mathscr{G}_{\phi,K_{i}}(\phi^{h})\leq 0\) (for \(\varepsilon\to 0\)). As a consequence, the first term in (4.49a) vanishes (consistent with Lemma 4.10), and we are left with:
\[\sum_{j=1}^{J}\lambda_{j}\mathrm{d}\mathscr{G}_{\phi,K_{j}}^{ \varepsilon}(\phi^{h})(w^{h})=0, \tag{4.58}\]
for all \(w^{h}\in\mathcal{V}^{h}_{0,1,0}\). Substituting \(w^{h}=\phi^{h}\), taking the limit \(\varepsilon\to 0\) and invoking the homogeneity property of Proposition 4.4 yields:
\[\sum_{j=1}^{J}\lambda_{j}V_{\omega_{j}}(\phi^{h})=0. \tag{4.59}\]
Noting that \(\lambda_{j}\geq 0\) and \(V_{\omega}(\phi^{h})\geq 0\), we get \(\lambda_{j}V_{\omega_{j}}(\phi^{h})=0\), for \(j=1,\ldots J\). If \(\phi^{h}\) is not constant on \(\omega_{j}\), we have \(V_{\omega_{j}}(\phi^{h})>0\) and thus \(\lambda_{j}=0\).
Note that, as a consequence of the overlapping support of the basis functions, the Gateaux derivatives \(\mathrm{d}\mathscr{G}_{\phi,K_{j}}^{\varepsilon}\) are linearly dependent. Therefore, the Lagrange multipliers \(\lambda_{j},j=1,...,n_{el}\) are _non-local_, in the sense that \(\lambda_{j}\) does not solely depend on quantities defined on \(\omega_{j}\). The \(H^{1}_{0}\)-best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D;p,0}\), \(p=1,2\), of the smooth step function (with \(a=0.58\)) subject to the element-wise Gibbs constraints is illustrated in Figure 9. We observe in both figures an interpolatory monotonic approximation. Moreover, we have \(\mathscr{G}_{\phi,K_{j}}(\phi^{h})=0\) for all element numbers \(j=1,\ldots,J\). In general, the following theorem holds.
**Theorem 4.5** (Monotonic interpolant for regularity \(\alpha=0\)).: _The constrained best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D;p,0}\) defined by the problem (4.48a)-(4.48b) of a monotonic profile \(\phi\) is a monotonic interpolant._
**Remark 4.4** (Uniqueness).: By Theorem 4.5, the best approximation for polynomial degree \(p=1\) is the sole feasible solution in \(\mathcal{K}_{1,0}\). This function is thus independent of the optimality condition \(\mathcal{H}\).
We now turn our attention to higher-order smooth (\(\alpha\geq 1\)) approximation spaces \(\mathcal{V}_{D,p,\alpha}^{h}\). The following proposition precludes existence of a solution for \(\omega_{j}=K_{j}\) in general.
**Proposition 4.5** (Infeasible elementwise constraints).: _The constrained best approximation problem (4.48a)-(4.48b) with \(\omega_{j}=K_{j}\) and \(a=0.58\) has in general no solution for \(\alpha\geq 1\)._
This is a consequence of the observation that interpolatory solutions exist for \(\alpha=-1\) and \(\alpha=0\), but not for \(\alpha\geq 1\). We illustrate this for the smooth step function (3.2) with parameter \(a=0.58\) in Figure 10. Here, we display several sharp finite element approximations with quadratic basis functions (\(p=2\)) and regularity \(\alpha=1\). The finite element approximation requires at least 2 elements to capture the sharp layer. There thus necessarily exists an element \(K_{i}\) for which \(\mathscr{G}_{\phi,K_{i}}(\phi^{h})>0\), and the constrained best approximation problem (4.48a)-(4.48b) therefore has no solution.
In general, B-spline basis functions of degree \(p\) have the following local support:
\[\operatorname{supp}(N_{i,p})=(\xi_{i},\xi_{i+p+1}), \tag{4.60}\]
which depends on the regularity \(\alpha\). The B-spline basis function \(N_{i,p}\) has support in at most \(p-k+2=\alpha+2\) elements. Neighboring B-spline basis functions share the support:
\[\operatorname{supp}(N_{i,p})\cap\operatorname{supp}(N_{i+1,p})=(\xi_{i+1}, \xi_{i+p+1}), \tag{4.61}\]
which consists of at most \(\alpha+1\) elements. With the aim of finding a subdivision of \(\Omega\) that permits the sharpest approximations, we select \(\omega_{j}\) as the union
of \(\alpha+1\) neighboring elements. We subdivide domain \(\Omega\) into disjoint subdomains \(\omega_{j}=\cup_{i\in\mathcal{I}_{j}}K_{i}\), where \(\mathcal{I}_{j}\) denotes an index set. Note that shifting groups of elements yields a different subdivision (for \(\alpha>0\)). The construction is therefore not unique and we consider \(\alpha+1\) possibilities. For the first index set \(\mathcal{I}_{1}\) we have the options \(\{1\},\ldots,\{1,\ldots,\alpha+1\}\). The consecutive index sets contain the next \(\alpha+1\) consecutive numbers, where the last set terminates with the last element number. We visualize two possible subdivisions in Figure 11.
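For concreteness, a possible way to generate these groupings (our own helper; the function name and calling convention are assumptions) is:

```python
# Build one of the alpha+1 possible subdivisions of the elements 1..n_el into groups of
# alpha+1 neighbors, with the first group of size `first` in {1, ..., alpha+1} and the
# last group truncated at the final element, as described above.
def element_groups(n_el, alpha, first):
    groups, start, size = [], 1, first
    while start <= n_el:
        groups.append(list(range(start, min(start + size, n_el + 1))))
        start += size
        size = alpha + 1                   # all subsequent groups have alpha+1 elements
    return groups

# The two possible subdivisions for n_el = 8 and alpha = 1 (compare Figure 11):
print(element_groups(8, 1, 1))   # [[1], [2, 3], [4, 5], [6, 7], [8]]
print(element_groups(8, 1, 2))   # [[1, 2], [3, 4], [5, 6], [7, 8]]
```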
In Figure 12, we visualize the \(H^{1}_{0}\)-best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D,2,1}\) of the smooth step function subject to Gibbs constraints on the subdomains of both subdivisions. We see that both approximations are completely free of over- and undershoots. In general, the existence of a feasible solution of the constrained best approximation problem with \(\alpha\geq 1\) is an open question.
Figure 11: The two possible subdivisions for \(\mathcal{V}^{h}_{2,1}\) when \(n_{\mathrm{el}}=8\). The vertical dashed lines represent the element boundaries.
Figure 10: Sharp finite element approximations \(\phi^{h}_{1},\phi^{h}_{2},\phi^{h}_{3}\in\mathcal{V}^{h}_{D,2,1}\), with \(n_{\mathrm{el}}=8\), of the smooth one-dimensional step function \(\phi=\phi_{0.58}\). The vertical dashed lines indicate the element boundaries.
## 5 Eliminating the Gibbs phenomenon in higher dimensions
In this section, we extend the strategy to eliminate the Gibbs phenomenon presented in Section 4 to higher dimensions. We first introduce the Gibbs constraints for the elimination of the Gibbs phenomenon in general function spaces in Section 5.1. After that, in Section 5.2, we discuss the Gibbs constraints in the context of best approximations in finite element spaces.
### Gibbs constraints
We present the construction of Gibbs constraints on a multidimensional given subdomain \(\omega\), by building upon the one-dimensional framework presented in Section 4. The main distinguishing feature in the design of Gibbs constraints in higher dimensions is the notion of directionality. As a consequence, the occurrence of over- and undershoots now depends on the point of view, meaning that there exists no precise mathematical definition of monotonicity in higher dimensions. To construct a set of constraints, we instead aim to preclude the Gibbs phenomenon in a particular direction. We start by assuming an arbitrary direction, \(\mathbf{e}\), and afterwards suggest a particular closed form expression for \(\mathbf{e}\).
**Definition 5.1** (Directional Gibbs functional).: The directional Gibbs functional of the function \(\phi^{*}\in H^{1}(\tilde{\Omega})\), with respect to the given function \(\phi\in H^{1}(\Omega)\) and on \(\omega\subset\Omega\), is defined as:
\[\mathscr{G}_{\phi,\omega}(\phi^{*};\mathbf{e}):=\int_{\omega}g_{\phi}(\phi^{*},\mathbf{e})\;\mathrm{d}\Omega, \tag{5.1}\]
where the functional \(g_{\phi}\) is defined as:
\[g_{\phi}(\phi^{*},\mathbf{e}):=|\mathbf{e}\cdot\nabla\phi^{*}|-\mathrm{sgn} \left(\mathbf{e}\cdot\nabla\phi^{*}\right)\mathbf{e}\cdot\nabla\phi=-\mathrm{ sgn}\left(\mathbf{e}\cdot\nabla\phi^{*}\right)\mathbf{e}\cdot\nabla\phi^{ \prime}, \tag{5.2}\]
Figure 12: The \(H^{1}_{0}\)-best approximation \(\phi^{h}\in\mathcal{V}^{h}_{D;2,1}\), with \(n_{\mathrm{el}}=8\), of the smooth step function \(\phi=\phi_{0.58}\), subject to Gibbs constraints on each of the subdomains illustrated in Figure 11.
and \(\mathbf{e}\in\mathbb{R}^{d}\) is a unit vector (i.e. \(\|\mathbf{e}\|_{2}=1\)).
In the following remark we provide a motivation for this generalization of the one-dimensional Gibbs functional.
**Remark 5.1** (Pointwise motivation Gibbs constraint).: Define the projection operator \(\mathcal{P}_{\mathbf{e}}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) in the direction \(\mathbf{e}\) by \(\mathcal{P}_{\mathbf{e}}\mathbf{v}:=\mathbf{P}_{\mathbf{e}}\mathbf{v}\) with projection matrix \(\mathbf{P}_{\mathbf{e}}=\mathbf{e}\otimes\mathbf{e}\). The projections of \(\nabla\phi^{*}\) and \(\nabla\phi\) in direction \(\mathbf{e}\) are thus given by \(\mathcal{P}_{\mathbf{e}}\nabla\phi^{*}\) and \(\mathcal{P}_{\mathbf{e}}\nabla\phi\), respectively. The form of the local functional \(g_{\phi}(\phi^{*},\mathbf{e})\) is now motivated by the following equivalence:
\[\|\mathcal{P}_{\mathbf{e}}\nabla\phi^{*}\|_{2}-\|\mathcal{P}_{ \mathbf{e}}\nabla\phi\|_{2}\leq 0 \Leftrightarrow |\mathbf{e}\cdot\nabla\phi^{*}|\leq|\mathbf{e}\cdot\nabla\phi|,\] \[\operatorname{sgn}\left(\mathcal{P}_{\mathbf{e}}\nabla\phi^{*} \cdot\mathbf{e}\right)=\operatorname{sgn}\left(\mathcal{P}_{\mathbf{e}} \nabla\phi\cdot\mathbf{e}\right) \Leftrightarrow \operatorname{sgn}\left(\mathbf{e}\cdot\nabla\phi^{*}\right)= \operatorname{sgn}\left(\mathbf{e}\cdot\nabla\phi\right)\] \[\Leftrightarrow g_{\phi}(\phi^{*},\mathbf{e})\leq 0. \tag{5.3}\]
We have the following simple, but important, property.
**Proposition 5.1** (Invariance of directional Gibbs functional opposite direction).: _The directional Gibbs functional is invariant with respect to flipping the sign of the direction \(\mathbf{e}\):_
\[\mathscr{G}_{\phi,\omega}(\phi^{*};-\mathbf{e})=\mathscr{G}_{\phi,\omega}(\phi ^{*};\mathbf{e}). \tag{5.4}\]
To proceed, we provide a short study on the functional \(g_{\phi}=g_{\phi}(\phi^{*},\mathbf{e})\). In the following proposition we comment on the sign thereof, depending on the unit vector \(\mathbf{e}\).
**Proposition 5.2** (Sign of directional \(g_{\phi}\)).: _The sign of the functional \(g_{\phi}(\phi^{*},\mathbf{e})\) depends in the following way on the unit vector \(\mathbf{e}\):_
\[g_{\phi}(\phi^{*},\mathbf{e})<0 \Leftrightarrow \begin{cases}\mathbf{e}\cdot\nabla\phi^{\prime}>0\ \text{ and }\ \mathbf{e}\cdot\nabla\phi^{*}>0\ \text{ if }\ \mathbf{e}\cdot\nabla\phi>0\\ \mathbf{e}\cdot\nabla\phi^{\prime}<0\ \text{ and }\ \mathbf{e}\cdot\nabla\phi^{*}<0\ \text{ if }\ \mathbf{e}\cdot\nabla\phi<0,\end{cases} \tag{5.5a}\] \[g_{\phi}(\phi^{*},\mathbf{e})=0 \Leftrightarrow \mathbf{e}\cdot\nabla\phi^{*}=0\ \text{ or }\ \mathbf{e}\cdot\nabla\phi^{\prime}=0,\] (5.5b) \[g_{\phi}(\phi^{*},\mathbf{e})>0 \Leftrightarrow \begin{cases}\mathbf{e}\cdot\nabla\phi^{\prime}<0\ \text{ or }\ \mathbf{e}\cdot\nabla\phi^{*}<0\ \text{ if }\ \mathbf{e}\cdot\nabla\phi>0\\ \mathbf{e}\cdot\nabla\phi^{\prime}>0\ \text{ or }\ \mathbf{e}\cdot\nabla\phi^{*}>0\ \text{ if }\ \mathbf{e}\cdot\nabla\phi<0.\end{cases} \tag{5.5c}\]
We provide a visualization of the sign of \(g_{\phi}(\phi^{*},\mathbf{e})\) for varying direction \(\mathbf{e}\) for a two-dimensional scenario in Figure 13.
In general, there always exists a direction \(\mathbf{e}\) for which \(g_{\phi}(\phi^{*},\mathbf{e})>0\), as expressed in the following proposition.
**Proposition 5.3** (Supremum of \(g_{\phi}(\phi^{*},\mathbf{e})\)).: _The supremum of \(g_{\phi}(\phi^{*},\mathbf{e})\) is given by_
\[\sup_{\|\mathbf{e}\|_{2}=1}g_{\phi}(\phi^{*},\mathbf{e})=\begin{cases}\mathbf{ b}\cdot\nabla\phi^{\prime}\ \ \text{ if }\ \nabla\phi^{*}\cdot\nabla\phi^{\prime}\geq 0\\ \|\nabla\phi^{\prime}\|_{2}\ \text{ if }\ \nabla\phi^{*}\cdot\nabla\phi^{\prime}<0,\end{cases} \tag{5.6}\]
_where \(\mathbf{b}\in\big{\{}\mathbf{a}\in\mathbb{R}^{d}:\|\mathbf{a}\|_{2}=1,\, \mathbf{a}\cdot\nabla\phi^{*}=0\big{\}}\)._
**Proof.**
Distinguish the two cases: (i) \(\nabla\phi^{*}\cdot\nabla\phi^{\prime}<0\) and (ii) \(\nabla\phi^{*}\cdot\nabla\phi^{\prime}\geq 0\).
* If \(\mathbf{e}\cdot\nabla\phi^{*}\geq 0\), then the supremum of \(g_{\phi}\) is \(\|\nabla\phi^{\prime}\|_{2}\), which is attained at \(\mathbf{e}=-\nabla\phi^{\prime}/\|\nabla\phi^{\prime}\|_{2}\). Similarly, if \(\mathbf{e}\cdot\nabla\phi^{*}<0\), then the supremum of \(g_{\phi}\) is again \(\|\nabla\phi^{\prime}\|_{2}\), which is then attained at \(\mathbf{e}=\nabla\phi^{\prime}/\|\nabla\phi^{\prime}\|_{2}\).
* If \(\mathbf{e}\cdot\nabla\phi^{*}\geq 0\), then \(g_{\phi}\) is a decreasing function in \(\mathbf{e}\cdot\nabla\phi^{*}\). Therefore, \(g_{\phi}\) attains its supremum when \(\mathbf{e}\cdot\nabla\phi^{*}=0\). Similarly, if \(\mathbf{e}\cdot\nabla\phi^{*}<0\), then \(g_{\phi}\) is an increasing function in \(\mathbf{e}\cdot\nabla\phi^{*}\). Again, \(g_{\phi}\) attains its supremum when \(\mathbf{e}\cdot\nabla\phi^{*}=0\).
Propositions 5.2 and 5.3 show that demanding the elimination of the Gibbs functional in each possible direction, which might intuitively yield the most desirable result, is not prudent: selecting the direction \(\mathbf{e}\) to maximize \(g_{\phi}=g_{\phi}(\phi^{*},\mathbf{e})\) and subsequently using this in the Gibbs constraint would only yield (i) the zero approximation \(\phi^{*}=0\) and (ii) the perfect approximation \(\phi^{*}=\phi\) as feasible solutions. To proceed, we must thus select a direction \(\mathbf{e}\) that leads to both suitable and practical constraints. The relevant directions to consider are functions of the (approximate) solution gradients: (i) \(\mathbf{e}=\nabla\phi/\|\nabla\phi\|_{2}\), (ii) \(\mathbf{e}=\nabla\phi^{*}/\|\nabla\phi^{*}\|_{2}\) and
(iii) \(\mathbf{e}=\nabla\phi^{\prime}/\|\nabla\phi^{\prime}\|_{2}\) leading to the expressions:
\[g_{\phi}\left(\phi^{*},\frac{\nabla\phi}{\|\nabla\phi\|_{2}}\right) = -|\nabla\phi\cdot\nabla\phi^{*}|^{-1}\|\nabla\phi\|_{2}^{-1}(\nabla\phi\cdot\nabla\phi^{*})(\nabla\phi\cdot\nabla\phi^{\prime}), \tag{5.7a}\] \[g_{\phi}\left(\phi^{*},\frac{\nabla\phi^{*}}{\|\nabla\phi^{*}\|_{2}}\right) = -\|\nabla\phi^{*}\|_{2}^{-1}\nabla\phi^{*}\cdot\nabla\phi^{\prime}, \tag{5.7b}\] \[g_{\phi}\left(\phi^{*},\frac{\nabla\phi^{\prime}}{\|\nabla\phi^{\prime}\|_{2}}\right) = -\|\nabla\phi^{\prime}\|_{2}\,|\nabla\phi^{*}\cdot\nabla\phi^{\prime}|^{-1}\,\nabla\phi^{*}\cdot\nabla\phi^{\prime}. \tag{5.7c}\]
Insisting on compatibility with the one-dimensional case requires the choice of direction \(\mathbf{e}=\nabla\phi^{*}/\|\nabla\phi^{*}\|_{2}\), on which we focus exclusively in the following.
**Definition 5.2** (Gibbs functional and constraint).: The Gibbs functional of the function \(\phi^{*}\in H^{1}(\tilde{\Omega})\), with respect to the given function \(\phi\in H^{1}(\Omega)\) and on \(\omega\subset\Omega\), is defined as:
\[\mathscr{G}_{\phi,\omega}(\phi^{*}):=\int_{\omega}g_{\phi}(\phi^{*})\ \mathrm{d}\Omega, \tag{5.8}\]
where the functional \(g_{\phi}\) is defined as:
\[g_{\phi}(\phi^{*}):=\begin{cases}-\|\nabla\phi^{*}\|_{2}^{-1}\nabla\phi^{*}\cdot\nabla\phi^{\prime}&\text{if }\ \nabla\phi^{*}\neq 0,\\ 0&\text{if }\ \nabla\phi^{*}=0.\end{cases} \tag{5.9}\]
The associated Gibbs constraint reads:
\[\mathscr{G}_{\phi,\omega}(\phi^{*})\leq 0. \tag{5.10}\]
Note that the Gibbs functional is frame-invariant, and that the properties of Proposition 4.2 and Lemma 4.4 are inherited from the one-dimensional case.
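As a pointwise illustration (our own sketch; the quadrature loop over a subdomain and the function names are assumptions), the integrand (5.9) only requires the two gradients at a point:

```python
import numpy as np

def g_phi_2d(grad_star, grad):
    """Pointwise integrand (5.9): grad_star and grad are the gradients of phi* and phi."""
    norm = np.linalg.norm(grad_star)
    if norm == 0.0:
        return 0.0
    grad_err = grad - grad_star                      # gradient of phi' = phi - phi*
    return -float(grad_star @ grad_err) / norm

# Example: phi has gradient (1, 1); an approximation that is steeper in that direction
# gives a positive value, a gentler one gives a negative value.
print(g_phi_2d(np.array([2.0, 2.0]), np.array([1.0, 1.0])))   # > 0
print(g_phi_2d(np.array([0.5, 0.5]), np.array([1.0, 1.0])))   # < 0
```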
### Gibbs constraints for finite element best approximations
In this subsection, we apply the Gibbs constraints to finite element best approximations. Again, we consider the following general form of the best approximation problem:
_find \(\phi^{h}\in\mathcal{V}^{h}_{\mathcal{D};p,\alpha}\) such that:_
\[\phi^{h}=\operatorname*{arginf}_{\theta^{h}\in\mathcal{K}_{p,\alpha}}\|\phi- \theta^{h}\|_{\mathcal{H}},\] (5.11a) _where the feasible set \[\mathcal{K}_{p,\alpha}\] is defined as:_ \[\mathcal{K}_{p,\alpha}:=\left\{\phi^{h}\in\mathcal{V}^{h}_{\mathcal{D};p, \alpha}:\ \mathscr{G}_{\phi,\omega_{j}}(\phi^{h})\leq 0,j=1,...,J\right\}. \tag{5.11b}\]
We first consider discontinuous finite element approximation spaces (\(\alpha=-1\)). Analogous to the one-dimensional case, we consider (5.11a)-(5.11b) with \(\mathcal{H}=\mathrm{IP}\), i.e. the interior penalty best approximation. We select the subdomains \(\omega_{j}\) as the finite elements \(K_{j},j=1,\ldots,n_{\mathrm{el}}\). Feasibility of the constrained optimization problem is a consequence of the existence of an elementwise constant solution \(\phi^{h}\), for which
\(\mathscr{G}_{\phi,K_{j}}(\phi^{h})=0\). Again, we may analyze the solution properties via the KKT conditions (4.49). In case of homogeneous boundary conditions, we have \((\phi^{h},\phi^{\prime})_{\mathcal{H}}\geq 0\) (Lemma 4.9). In Figure 14, we visualize the interior penalty-best approximation \(\phi^{h}\in\mathcal{V}_{D,\mathrm{p},-1}^{h}\), \(p=1,2\) of the profile of example 1, subject to the element-wise Gibbs constraints. We observe that the constrained interior penalty-best approximations display a significant reduction of over- and undershoots compared to the non-constrained solutions depicted in Section 3.2.
Figure 14: The interior penalty-best approximation \(\phi^{h}\in\mathcal{V}_{D,\mathrm{p},-1}^{h}\) of the smooth step function, with \(n_{\mathrm{el}}=8\times 8\), subject to element-wise Gibbs constraints, for different \(p\).
Next, we study the case with regularity \(\alpha=0\). Consider again the example with the sharp layer skew to the mesh. In Figure 15 we visualize the sharpest possible approximation \(\phi^{h}\in\mathcal{V}^{h}_{D,1,0}\), and plot the element-wise values of the Gibbs functionals \(\mathscr{G}_{\phi,K_{i}}(\phi^{h})\). We note that the sharpest approximation is interpolatory. Nevertheless, the Gibbs constraints are not fulfilled element-wise. Similar to the one-dimensional case, one could define \(\omega_{j}\) as the collection of several elements. However, a careful examination reveals that the only possible choice is the collection of all elements \(\omega=\cup_{j=1}^{n_{\mathrm{el}}}K_{j}\), which would constitute a rather weak condition. It appears impossible to obtain a feasible solution when using subdomains \(\omega_{j}\) that consist of a few elements. This is a consequence of the support of the basis functions, which contains multiple elements. The Gibbs constraints for any collection of elements \(K_{i}\) on the off-diagonal with \(x\geq y\) (\(x\leq y\)) would enforce \(\phi^{h}_{K_{i}}\) to be close to \(1\) (respectively \(-1\)). This is in conflict with the continuity requirement of the finite element approximation space \(\mathcal{V}^{h}_{D,1,0}\), and persists when using higher order polynomials.
Figure 15: The sharpest possible approximation \(\phi^{h}\in\mathcal{V}^{h}_{1,0}\) (a) and the corresponding element-wise values of the Gibbs functionals \(\mathscr{G}_{\phi,K_{i}}(\phi^{h})\) (b).
Lastly, we consider higher-order smooth finite element approximation spaces (\(\alpha\geq 1\)). The infeasibility of the optimization problem discussed above also applies to higher-order smooth approximations. We illustrate this for the smooth step function with a quadratic B-spline approximation space \(\mathcal{V}^{h}_{D,2,1}\). Consider a subdivision into groups of at most \(3\times 3\) elements, as visualized in Figure 16. In order to satisfy the Gibbs constraint on the middle-bottom element group we must have \(\phi^{h}=-1\) in the blue boxed element. Similarly, in the black boxed element we require \(\phi^{h}=1\). These two conditions are incompatible.
## 6 Conclusions
In this article, we constructed a set of integral constraints with the aim of eliminating the Gibbs phenomenon in finite element best approximations. We first provided an overview of the Gibbs phenomenon for best approximations in finite element spaces. We illustrated with computational examples that spurious oscillations occur in one and two dimensions for standard projections onto finite element spaces of arbitrary degree and regularity (with the exception of the one-dimensional \(H^{1}_{0}\)-projection with linear continuous finite elements). The proposed constraints build on the concept of total variation. In this regard, we established in one dimension the interrelation between the Gibbs constraint and interpolatory and monotonic approximations, as well as the maximum principle. Furthermore, we showed in one dimension that the proposed constraints may be applied element-wise when the finite element space is either discontinuous or \(\mathcal{C}^{0}\)-continuous. For higher regularity finite element
Figure 16: Subdivision of an \(8\times 8\) element domain into groups of maximum \(3\times 3\) elements with approximation space \(\mathcal{V}^{h}_{D,2,1}\). The right and bottom inset figures display the univariate basis functions along the corresponding axes.
spaces, the integration domains of the constraints depend on multiple elements. We showed that enforcing the constraints removes over- and undershoots for continuous finite element spaces, and suppresses them for discontinuous finite element spaces. In higher dimensions, the constraints act in the direction of the solution gradient. The applicability of the constraints is then limited to discontinuous finite element spaces. We demonstrated that also in this case over- and undershoots are severely reduced.
We recognize two open problems that require further investigation. The first is linked to the last observation, namely the extension of the set of constraints to continuous finite element spaces in higher dimensions. The second open problem is the construction of a finite element method that incorporates these constraints in practical computations. We conjecture that a possible resolution lies in the variational multiscale framework, in particular through the work of Evans et al. [36] and of the first author [32].
## Acknowledgments
The authors are grateful to Thomas J.R. Hughes for insightful discussions on the topic. MtE was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) via the Walter Benjamin project EI 1210/1-1. DS gratefully acknowledges support from the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) via the Emmy Noether Award SCH 1249/2-1.
|
2307.10126 | Neutrino spin oscillations in a magnetized Polish doughnut | We study the gravitational scattering of ultrarelativistic neutrinos off a
rotating supermassive black hole (BH) surrounded by a thick magnetized
accretion disk. Neutrinos interact electroweakly with background matter and
with the magnetic field in the disk since neutrinos are supposed to possess
nonzero magnetic moments. The interaction with external fields results in
neutrino spin oscillations. We find that the toroidal magnetic field, inherent
in the magnetized Polish doughnut, does not cause a significant spin-flip for
any reasonable strengths of the toroidal component. The reduction of the
observed neutrino flux, owing to neutrino spin oscillations, is predicted. A
poloidal component of the magnetic field gives the main contribution to the
modification of the observed flux. The neutrino interaction with matter,
rotating with relativistic velocities, also changes the flux of neutrinos. We
briefly discuss the idea of the neutrino tomography of magnetic field
distributions in accretion disks near BHs. | Maxim Dvornikov | 2023-07-19T16:47:33Z | http://arxiv.org/abs/2307.10126v2 | # Neutrino spin oscillations in a magnetized Polish doughnut
###### Abstract
We study the gravitational scattering of ultrarelativistic neutrinos off a rotating supermassive black hole (BH) surrounded by a thick magnetized accretion disk. Neutrinos interact electroweakly with background matter and with the magnetic field in the disk since neutrinos are supposed to possess nonzero magnetic moments. The interaction with external fields results in neutrino spin oscillations. We find that the toroidal magnetic field, inherent in the magnetized Polish doughnut, does not cause a significant spin-flip for any reasonable strengths of the toroidal component. The reduction of the observed neutrino flux, owing to neutrino spin oscillations, is predicted. A poloidal component of the magnetic field gives the main contribution to the modification of the observed flux. The neutrino interaction with matter, rotating with relativistic velocities, also changes the flux of neutrinos. We briefly discuss the idea of the neutrino tomography of magnetic field distributions in accretion disks near BHs.
Keywords: neutrino properties, astrophysical black holes, magnetic fields, accretion
## 1 Introduction
The experimental achievements, e.g., in refs. [1; 2], confirmed that neutrinos are massive particles and that there is a nonzero mixing between different neutrino generations. These results open a window to explore physics beyond the standard model. The nonzero neutrino masses inevitably result in nontrivial neutrino electromagnetic properties. Namely, it is shown in refs. [3; 4] that, even in the minimally extended standard model supplied with a right-handed neutrino, there is a small neutrino magnetic moment. This means that a neutrino can no longer be considered a purely electrically neutral particle. For example, a nonzero neutrino magnetic moment leads to the precession of the particle spin in an external magnetic field.
The change of the neutrino polarization in external fields has dramatic consequences for the observability of these particles. Within the standard model, a neutrino is created as a left-handed particle, i.e. its spin is opposite to the neutrino momentum. If the neutrino polarization changes, the particle becomes right-handed, i.e. sterile. Hence, we shall observe an effective reduction of the initial neutrino flux. This process is called neutrino spin oscillations. Spin oscillations of astrophysical neutrinos were thoroughly studied by many authors (see, e.g., ref. [5]). For example, an upper bound on the neutrino magnetic moment was obtained in ref. [6] by studying spin oscillations of supernova (SN) neutrinos.
It was demonstrated in ref. [7] that, besides the interaction with a magnetic field, neutrino spin oscillations can be affected by the neutrino electroweak interaction with background matter. The gravitational interaction, although it is quite weak, can also cause the precession of a fermion spin. It was shown in ref. [8] that the motion of a spinning body in a curved spacetime deviates from a geodesic. Assuming that a fermion is an elementary particle, it was found in ref. [9] that its spin is parallel transported along the particle trajectory. The quantum theory of the fermion spin in curved spacetime was developed in ref. [10].
Using the quasiclassical approach of ref. [9], neutrino spin oscillations in curved spacetime were studied in ref. [11] within General Relativity (GR), and within its extensions in refs. [12; 13; 14; 15]. In this work, we continue the studies in refs. [16; 17; 18], where spin effects in the neutrino gravitational scattering off a rotating black hole (BH) are accounted for. In a gravitational scattering, both incoming and outgoing neutrinos are in the asymptotically flat spacetime. Therefore, their spin states are well defined.
As shown in refs. [16; 17; 18], the magnetic field in an accretion disk gives the main contribution to neutrino spin oscillations. However, quite simple models of the accretion disk were used in those papers. It should be noted that an accretion disk should be thick in order for both the magnetic field and the background matter to be able to influence neutrino spin oscillations.
In the present work, we rely on the thick accretion disk model called a Polish doughnut [19]. The magnetized version of a Polish doughnut was developed in ref. [20]. We also account for the possibility of the presence of a poloidal magnetic field inside the accretion disk. One of the models for such a field was proposed in ref. [21]. Other models of accretion disks are reviewed in ref. [22].
The present work was motivated by the recent observations of the BH shadows in the centers of M87 and our Galaxy in refs. [23; 24]. These are the first direct tests of GR in the strong field limit. It was suggested in ref. [25] that, besides emitting photons, an accretion disk can be a source of high energy neutrinos. It should be noted that there are active searches for high energy neutrinos emitted in active galactic nuclei, e.g., in ref. [26].
The image of such a disk, observed in a neutrino telescope, should account for both the strong gravitational lensing of particles and the neutrino spin precession in external fields, which converts active left-handed neutrinos to sterile ones. Thus, in a neutrino telescope, we shall observe a different picture compared to an optical image plotted, e.g., in ref. [27]. Another possibility to probe spin effects in the neutrino gravitational scattering is to observe the lensing of SN neutrinos by a supermassive BH (SMBH) in the center of our Galaxy. For example, such a possibility was discussed in ref. [28].
Our work is organized as follows. We recall how to describe the trajectory of ultrarelativistic particles scattered off a rotating BH in section 2. In section 3, we describe the neutrino spin evolution in external fields in curved spacetime. The structure of external fields in the accretion disk is given in section 4. We fix the characteristics of the external fields and of the neutrino in section 5. In section 6, we present the results of numerical simulations. Finally, we conclude in section 7. The main expressions for a magnetized Polish doughnut are listed in appendix A. In appendix B, we show how to use the symmetry of the system to reconstruct the spin precession of some neutrinos.
## 2 Neutrino scattering off a rotating BH
In this section, we briefly recall how to describe the motion of ultrarelativistic neutrinos scattered off a rotating BH.
The spacetime of a rotating BH has the Kerr metric which is written down in the following form using the Boyer-Lindquist coordinates \(x^{\mu}=(t,r,\theta,\phi)\):
\[\mathrm{d}s^{2}=g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=\left(1-\frac{rr _{g}}{\Sigma}\right)\mathrm{d}t^{2}+2\frac{rr_{g}a\sin^{2}\theta}{\Sigma} \mathrm{d}t\mathrm{d}\phi-\frac{\Sigma}{\Delta}\mathrm{d}r^{2}-\Sigma\mathrm{ d}\theta^{2}-\frac{\Xi}{\Sigma}\sin^{2}\theta\mathrm{d}\phi^{2}, \tag{1}\]
where
\[\Delta=r^{2}-rr_{g}+a^{2},\quad\Sigma=r^{2}+a^{2}\cos^{2}\theta,\quad\Xi=\left( r^{2}+a^{2}\right)\Sigma+rr_{g}a^{2}\sin^{2}\theta. \tag{2}\]
Here \(r_{g}\) is the Schwarzschild radius. The mass of BH is \(M=r_{g}/2\) and its spin, which is along the \(z\)-axis, is \(J=Ma\), where \(0<a<M\).
The motion of a test ultrarelativistic particle in the metric in eq. (1) can be found in quadratures [29]. It has three integrals of motion: the particle energy, \(E\), its angular momentum, \(L\), and the Carter constant, \(Q\). In the scattering problem, \(Q>0\). It was shown in ref. [30] that the form of the trajectory can be inferred from the integral expressions,
\[z\int\frac{\mathrm{d}x}{\pm\sqrt{R(x)}}=\int\frac{\mathrm{d}t}{ \pm\sqrt{\Theta(t)}}, \tag{3}\] \[\phi=z\int\frac{(x-zy)\mathrm{d}x}{\pm\sqrt{R(x)}(x^{2}-x+z^{2})} +\frac{y}{z}\int\frac{\mathrm{d}t}{\pm\sqrt{\Theta(t)}(1-t^{2})} \tag{4}\]
where
\[R(x) =\left(x^{2}+z^{2}-yz\right)^{2}-(x^{2}-x+z^{2})\left[w+(z-y)^{2} \right],\] \[\Theta(t) =(t_{-}^{2}+t^{2})(t_{+}^{2}-t^{2}),\] \[t_{\pm}^{2} =\frac{1}{2z^{2}}\left[\sqrt{(z^{2}-y^{2}-w)^{2}+4z^{2}w}\pm(z^{ 2}-y^{2}-w)\right], \tag{5}\]
We use the dimensionless variables in eqs. (3)-(5): \(x=r/r_{g}\), \(y=L/r_{g}E\), \(z=a/r_{g}\), \(w=Q/r_{g}^{2}E^{2}\), and \(t=\cos\theta\).
The strategy for finding the neutrino trajectory is the following. First, one computes the \(x\)-integral in eq. (3) numerically. Then, \(\theta=\arccos t\) is obtained from eq. (3) using the Jacobi elliptic functions. At this stage, we should account for inversions of the trajectory in the equatorial plane. Finally, eq. (4) is used to get \(\phi\). The details of calculations are provided in ref. [31].
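As a rough illustration of the first step of this strategy, the sketch below evaluates the radial and angular potentials of eq. (5) and the left-hand side of eq. (3) for given dimensionless constants of motion; the parameter values, the integration routine, and the chosen radial interval are assumptions made purely for illustration and are not taken from the simulations described here.

```python
import numpy as np
from scipy.integrate import quad

def R(x, y, z, w):
    """Radial potential R(x) from eq. (5) in the dimensionless variables."""
    return (x**2 + z**2 - y*z)**2 - (x**2 - x + z**2) * (w + (z - y)**2)

def t_pm_squared(y, z, w):
    """Turning-point parameters t_+^2 and t_-^2 of eq. (5)."""
    s = z**2 - y**2 - w
    root = np.sqrt(s**2 + 4.0 * z**2 * w)
    return (root + s) / (2.0 * z**2), (root - s) / (2.0 * z**2)

def Theta(t, y, z, w):
    """Angular potential Theta(t), with t = cos(theta), from eq. (5)."""
    tp2, tm2 = t_pm_squared(y, z, w)
    return (tm2 + t**2) * (tp2 - t**2)

# Illustrative constants of motion: y = L/(r_g E), z = a/r_g, w = Q/(r_g E)^2.
y, z, w = 3.0, 0.25, 4.0

# The radial integral of eq. (3) should run down to the turning point, i.e. the largest root of
# R(x); here we simply integrate over a radial interval where R(x) > 0 to show the building block.
x_lo, x_hi = 10.0, 100.0
radial_integral, _ = quad(lambda x: z / np.sqrt(R(x, y, z, w)), x_lo, x_hi)
print("z * int dx/sqrt(R) over [%.0f, %.0f] = %.6f" % (x_lo, x_hi, radial_integral))
print("t_+^2, t_-^2 =", t_pm_squared(y, z, w))
print("Theta in the equatorial plane, Theta(0) =", Theta(0.0, y, z, w))
```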
## 3 Neutrino spin evolution in curved spacetime under the influence of external fields
In this section, we consider the description of the neutrino polarization when a particle scatters off a rotating BH surrounded by a realistic thick magnetized accretion disk.
The covariant four vector of the spin \(S^{\mu}\) of a neutrino, which interacts with an electromagnetic field and background matter, obeys the following equation in curved spacetime [32],
\[\frac{\mathrm{D}S^{\mu}}{\mathrm{d}\tau}= 2\mu\left(F^{\mu\nu}S_{\nu}-U^{\mu}U_{\nu}F^{\nu\lambda}S_{ \lambda}\right)+\sqrt{2}G_{\mathrm{F}}E^{\mu\nu\lambda\rho}G_{\nu}U_{\lambda}S _{\rho}, \tag{6}\]
where \(\mathrm{D}S^{\mu}=\mathrm{d}S^{\mu}+\Gamma^{\mu}_{\alpha\beta}S^{\alpha} \mathrm{d}x^{\beta}\) is the covariant differential, \(\Gamma^{\mu}_{\alpha\beta}\) are the Christoffel symbols, \(U^{\mu}=\frac{\mathrm{d}x^{\mu}}{\mathrm{d}\tau}\) is the neutrino four velocity in the world coordinates, \(\tau\) is the proper time, \(E^{\mu\nu\lambda\rho}=\frac{1}{\sqrt{-g}}\varepsilon^{\mu\nu\lambda\rho}\) is the covariant antisymmetric tensor in a curved spacetime, \(g=\det(g_{\mu\nu})\) is the determinant of the metric tensor, \(F_{\mu\nu}\) is the tensor of an external electromagnetic field, \(\mu\) is the neutrino magnetic moment, \(G_{\mathrm{F}}=1.17\times 10^{-5}\,\mathrm{GeV}^{-2}\) is the Fermi constant, and \(G_{\rho}\) is the covariant effective potential of the neutrino electroweak interaction with a background matter. Equation (6) is valid for both massive and massless (ultrarelativistic) neutrinos.
The neutrino polarization is defined in the locally Minkowskian frame \(x_{a}=e^{\mu}_{a}x_{\mu}\). The vierbein vectors \(e^{\mu}_{a}\) satisfy the relation, \(\eta_{ab}=e^{\mu}_{a}e^{\nu}_{b}g_{\mu\nu}\), where \(\eta_{ab}=(1,-1,-1,-1)\) is the
Minkowski metric tensor. One can check that \(e_{a}^{\,\mu}\) have the form,
\[e_{0}^{\,\,\mu}= \left(\sqrt{\frac{\Xi}{\Sigma\Delta}},0,0,\frac{arr_{g}}{\sqrt{\Delta\Sigma\Xi}}\right),\quad e_{1}^{\,\,\mu}=\left(0,\sqrt{\frac{\Delta}{\Sigma}},0,0\right),\] \[e_{2}^{\,\,\mu}= \left(0,0,\frac{1}{\sqrt{\Sigma}},0\right),\quad e_{3}^{\,\,\mu}= \left(0,0,0,\frac{1}{\sin\theta}\sqrt{\frac{\Sigma}{\Xi}}\right), \tag{10}\]
where \(\Delta\), \(\Sigma\), and \(\Xi\) are given in eq. (2).
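As a quick consistency check of the vierbein above, the short sketch below verifies numerically that \(e_{a}^{\,\mu}e_{b}^{\,\nu}g_{\mu\nu}=\eta_{ab}\) for the Kerr metric (1); the sample point and the BH parameters are arbitrary placeholders chosen only for the test.

```python
import numpy as np

def kerr_metric(r, theta, rg, a):
    """Covariant Kerr metric g_{mu nu} of eq. (1) in Boyer-Lindquist coordinates (t, r, theta, phi)."""
    Sigma = r**2 + a**2 * np.cos(theta)**2
    Delta = r**2 - r * rg + a**2
    Xi = (r**2 + a**2) * Sigma + r * rg * a**2 * np.sin(theta)**2
    g = np.zeros((4, 4))
    g[0, 0] = 1.0 - r * rg / Sigma
    g[0, 3] = g[3, 0] = r * rg * a * np.sin(theta)**2 / Sigma
    g[1, 1] = -Sigma / Delta
    g[2, 2] = -Sigma
    g[3, 3] = -Xi * np.sin(theta)**2 / Sigma
    return g, Sigma, Delta, Xi

def kerr_vierbein(r, theta, rg, a):
    """Vierbein e_a^mu of eq. (10); rows are the locally Minkowskian indices a = 0..3
    (with e_2^theta = 1/sqrt(Sigma), as required by eta_ab = e_a^mu e_b^nu g_{mu nu})."""
    _, Sigma, Delta, Xi = kerr_metric(r, theta, rg, a)
    e = np.zeros((4, 4))
    e[0, 0] = np.sqrt(Xi / (Sigma * Delta))
    e[0, 3] = a * r * rg / np.sqrt(Delta * Sigma * Xi)
    e[1, 1] = np.sqrt(Delta / Sigma)
    e[2, 2] = 1.0 / np.sqrt(Sigma)
    e[3, 3] = np.sqrt(Sigma / Xi) / np.sin(theta)
    return e

# Numerical check of eta_ab = e_a^mu e_b^nu g_{mu nu} at an arbitrary point (placeholder values).
r, theta, rg, a = 7.3, 1.1, 1.0, 0.45
g, *_ = kerr_metric(r, theta, rg, a)
e = kerr_vierbein(r, theta, rg, a)
eta = e @ g @ e.T
print(np.round(eta, 12))   # expected: diag(1, -1, -1, -1)
```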
We rewrite the spin evolution equation in this Minkowskian frame by making a boost to the particle rest frame, where the invariant three vector of the neutrino polarization, \(\mathbf{\zeta}\), is defined,
\[\frac{\mathrm{d}\mathbf{\zeta}}{\mathrm{d}t}=2(\mathbf{\zeta}\times\mathbf{\Omega}). \tag{11}\]
The vector \(\mathbf{\Omega}=\mathbf{\Omega}_{g}+\mathbf{\Omega}_{\mathrm{em}}+\mathbf{\Omega}_{\mathrm{ matt}}\), which incorporates the neutrino interaction with external fields including gravity, has the following form for an ultrarelativistic neutrino:
\[\mathbf{\Omega}_{g} =\frac{1}{2U^{t}}\left[\mathbf{b}_{g}+\frac{1}{1+u^{0}}\left( \mathbf{e}_{g}\times\mathbf{u}\right)\right],\] \[\mathbf{\Omega}_{\mathrm{em}} =\frac{\mu}{U^{t}}\left[u^{0}\mathbf{b}-\frac{\mathbf{u}( \mathbf{u}\mathbf{b})}{1+u^{0}}+\left(\mathbf{e}\times\mathbf{u}\right)\right],\] \[\mathbf{\Omega}_{\mathrm{matt}} =\frac{G_{\mathrm{F}}\mathbf{u}}{\sqrt{2}U^{t}}\left(g^{0}-\frac{ (\mathbf{g}\mathbf{u})}{1+u^{0}}\right), \tag{12}\]
where \((\mathbf{e}_{g},\mathbf{b}_{g})=G_{ab}=\gamma_{abc}u^{c}\), \(\gamma_{abc}=\eta_{ad}e_{\,\mu;\nu}^{d}e_{\,\,b}^{\mu}e_{\,\,c}^{\,\nu}\) are the Ricci rotation coefficients, the semicolon stays for the covariant derivative, \((\mathbf{e},\mathbf{b})=f_{ab}=e_{a}^{\,\,\mu}e_{b}^{\,\,\nu}F_{\mu\nu}\) is the electromagnetic field tensor in the locally Minkowskian frame, \((u^{0},\mathbf{u})=u^{a}=e_{\,\mu}^{a}U^{\mu}\), and \((g^{0},\mathbf{g})=g^{a}=e_{\,\mu}^{a}G^{\mu}\).
The electric and magnetic fields are \(e_{i}=f_{0i}\) and \(b_{i}=-\varepsilon_{ijk}f_{jk}\), where \(\varepsilon_{ijk}\) is the antisymmetric tensor in the flat spacetime. The explicit form of these vectors depends on the original configuration of the electromagnetic field in world coordinates \(F_{\mu\nu}(x^{\mu})\). These vectors, as functions of \(r\) and \(\theta\), are given in section 4 for certain models of electromagnetic fields in an accretion disk; cf. eqs. (4.3)-(4.5).
Instead of solving the precession eq. (11), we deal with the effective Schrodinger equation to describe the neutrino polarization,
\[\mathrm{i}\frac{\mathrm{d}\psi}{\mathrm{d}x}=\hat{H}_{x}\psi,\quad\hat{H}_{x} =-\mathcal{U}_{2}(\mathbf{\sigma}\cdot\mathbf{\Omega}_{x})\mathcal{U}_{2}^{\dagger}, \tag{13}\]
where \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})\) are the Pauli matrices, \(\mathbf{\Omega}_{x}=r_{g}\mathbf{\Omega}\frac{\mathrm{d}t}{\mathrm{d}r}\), and \(\mathcal{U}_{2}=\exp(\mathrm{i}\pi\sigma_{2}/4)\). Equation (13) is rewritten in dimensionless variables and adapted for the scattering problem. It is convenient to rewrite the vector \(\mathbf{\Omega}_{x}\) in the form,
\[\mathbf{\Omega}_{x}^{(g)} =\frac{1}{2}\left[\tilde{\mathbf{b}}_{g}+(\tilde{\mathbf{e}}_{g} \times\mathbf{v})\right],\] \[\mathbf{\Omega}_{x}^{(\mathrm{em})} =V_{\mathrm{B}}\left[l^{0}\mathbf{b}-\mathbf{v}(\mathbf{l} \mathbf{b})+(\mathbf{e}\times\mathbf{l})\right],\] \[\mathbf{\Omega}_{x}^{(\mathrm{matt})} =V_{m}\mathbf{l}\left[g^{0}-(\mathbf{g}\mathbf{v})\right], \tag{14}\]
where the vectors \((l^{0},\mathbf{l})=l^{a}=\frac{\mathrm{d}t}{\mathrm{d}r}\frac{u^{a}}{U^{t}}\), \(\mathbf{v}=\frac{\mathbf{u}}{1+u^{0}}\), \(\tilde{\mathbf{e}}_{g}=\mathbf{e}_{g}\frac{r_{g}}{U^{t}}\frac{\mathrm{d}t}{ \mathrm{d}r}\) and \(\tilde{\mathbf{b}}_{g}=\mathbf{b}_{g}\frac{r_{g}}{U^{t}}\frac{\mathrm{d}t}{ \mathrm{d}r}\), are finite for an ultrarelativistic neutrino. Note that \(\tilde{\mathbf{e}}_{g}\) and \(\tilde{\mathbf{b}}_{g}\) are the linear functions of \(\frac{\mathrm{d}x^{\mu}}{\mathrm{d}r}\) which
can be calculated using the results of section 2. The explicit form of \(l^{a}\) and \({\bf v}\) is provided in ref. [16].
The scalar quantity \(V_{\rm B}\) depends on the configuration of the magnetic field in the disk, which is discussed shortly in section 4. The parameter \(V_{m}=\frac{G_{F\rho}}{\sqrt{2}m_{p}r_{g}}\) for a hydrogen plasma, where \(\rho\) is the mass density of the disk and \(m_{p}\) is the proton mass.
The effective spinor \(\psi\) in eq. (10) has the form \(\psi^{\rm T}_{-\infty}=(1,0)\) for incoming neutrinos. Such a spinor corresponds to a left polarized active neutrino. For a scattered particle, after solving eq. (10) along the neutrino trajectory, it becomes \(\psi^{\rm T}_{+\infty}=(\psi^{\rm(R)}_{+\infty},\psi^{\rm(L)}_{+\infty})\). The survival probability, i.e. the probability that a neutrino remains left polarized in the wake of the scattering, is \(P_{\rm LL}=|\psi^{\rm(L)}_{+\infty}|^{2}\).
## 4 External fields in an accretion disk
In this section, we discuss the properties of the background matter and the magnetic fields in an accretion disk which a neutrino interacts with.
We treat the electroweak interaction of a neutrino with background fermions in the forward scattering approximation. We mentioned in section 3 that this interaction is characterized by the four potential \(G^{\mu}\), which has the following form in the hydrogen plasma:
\[G^{\mu}=\sum_{f=e,p}q_{f}J^{\mu}_{f}, \tag{12}\]
where \(J^{\mu}_{f}=n_{f}U^{\mu}_{f}\) are the hydrodynamic currents, \(n_{f}\) are the invariant fermion densities, \(U^{\mu}_{f}\) are their four velocities in the disk, and \(q_{f}\) are constants, given in explicit form in ref. [33]. We suppose in eq. (12) that matter is unpolarized. We take \(n_{e}=n_{p}\) because of the plasma electroneutrality and \(U^{\mu}_{e}=U^{\mu}_{p}\), i.e. there is no differential rotation between the components of the plasma.
In the model of a Polish doughnut, one has that \(U^{\mu}_{f}=(U^{t}_{f},0,0,U^{\phi}_{f})\); cf. eq. (11). Thus, using eq. (11), we get that the nonzero components of \(g^{a}\) in eq. (13) are
\[g^{0} =\frac{\sqrt{x^{2}+z^{2}\cos^{2}\theta}\sqrt{x^{2}-x+z^{2}}U^{t}_ {f}}{\sqrt{z^{2}\cos^{2}\theta}(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x+x^{4}},\] \[g^{3} =\frac{\sin\theta\left[r_{g}U^{\phi}_{f}\left(z^{2}\cos^{2}\theta (x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x+x^{4}\right)-U^{t}_{f}xz\right]}{\sqrt{x^{2} +z^{2}\cos^{2}\theta}\sqrt{z^{2}\cos^{2}\theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2} x+x^{4}}}. \tag{13}\]
The mass density \(\rho\), which enters the coefficient \(V_{m}\), is given in eq. (12).
Now, we discuss the neutrino interaction with magnetic fields. The toroidal magnetic field is inherent in the magnetized Polish doughnut model. The four vector of such a magnetic field is \(B^{\mu}=(B^{t},0,0,B^{\phi})\); cf. eq. (11). The electromagnetic field tensor of this field has the only nonzero components \(F^{r\theta}=-F^{\theta r}\). Based on eq. (11), we get that the nonzero component of \({\bf b}\) in eq. (13) is
\[b_{3}= -\frac{U^{t}_{f}r_{g}^{2}\sqrt{2p_{m}^{(\rm tor)}}}{\sin\theta(1 -\Omega l_{0})\sqrt{(x^{2}+z^{2}\cos^{2}\theta)(x^{2}-x+z^{2})}}\] \[\times\left\{\lambda_{0}^{2}(x^{2}-x+z^{2}\cos^{2}\theta)+2 \lambda_{0}xz\sin^{2}\theta\right.\] \[-\sin^{2}\theta\left[(x^{2}+z^{2})(x^{2}+z^{2}\cos^{2}\theta)+xz^ {2}\sin^{2}\theta\right]\left.\right\}^{1/2}, \tag{14}\]
where \(\lambda_{0}=l_{0}/r_{g}\), \(l_{0}\) is the constant angular momentum in the disk (see appendix A), \(\Omega\) is given in eq. (10), and \(p_{m}^{\rm(tor)}\) is the magnetic pressure of the toroidal field present in eq. (11). The vector \({\bf e}=0\). The dimensionless coefficient \(V_{\rm B}\) in eq. (10) is \(V_{\rm B}=\mu/r_{g}\).
A poloidal magnetic field is not a part of the magnetized Polish doughnut model. It is, however, known that a superposition of poloidal and toroidal components makes the resulting magnetic field more stable. That is why we include a poloidal field in our calculations. We consider two models for a poloidal field.
First, we take a field which asymptotically tends to a constant field parallel to the rotation axis of the BH at infinity. The vector potential for such a field is given in eq. (12). The nonzero components of the dimensionless vectors \({\bf e}\) and \({\bf b}\) are
\[e_{1}= f_{\rm B}(x)\frac{z\left[z^{2}\cos^{4}\theta(z^{2}-x^{2})+\cos^{2} \theta(z^{4}+2z^{2}x^{2}-3x^{4})-z^{2}x^{2}+x^{4}\right]}{2\sqrt{z^{2}\cos^{2} \theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x+x^{4}}(x^{2}+z^{2}\cos^{2}\theta)^{2}},\] \[e_{2}= \frac{f_{\rm B}(x)xz^{3}\sin 2\theta\sqrt{x^{2}-x+z^{2}}(1+ \cos^{2}\theta)}{2\sqrt{z^{2}\cos^{2}\theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x +x^{4}}(x^{2}+z^{2}\cos^{2}\theta)^{2}},\] \[b_{1}= \frac{f_{\rm B}(x)\cos\theta\left[(x^{2}+z^{2}\cos^{2}\theta)^{2} (x^{2}-x+z^{2})+x(x^{4}-z^{4})\right]}{\sqrt{z^{2}\cos^{2}\theta(x^{2}-x+z^{2 })+z^{2}x^{2}+z^{2}x+x^{4}}(x^{2}+z^{2}\cos^{2}\theta)^{2}},\] \[b_{2}= f_{\rm B}(x)\sin\theta\sqrt{x^{2}-x+z^{2}}\] \[\times\frac{\left[z^{4}\cos^{4}\theta(1-2x)+z^{2}\cos^{2}\theta( z^{2}-x^{2}-4x^{3})-z^{2}x^{2}-2x^{5}\right]}{2\sqrt{z^{2}\cos^{2}\theta(x^{2}-x +z^{2})+z^{2}x^{2}+z^{2}x+x^{4}}(x^{2}+z^{2}\cos^{2}\theta)^{2}}, \tag{13}\]
where, following ref. [34], we introduce the additional factor \(f_{\rm B}(x)=x^{-5/4}\) to provide the scaling of the magnetic field with the distance \(B\propto r^{-5/4}\). The dimensionless coefficient \(V_{\rm B}\) in eq. (10), corresponding to such a field, is \(V_{\rm B}=\mu B_{0}r_{g}\), where \(B_{0}\) is the magnetic field strength near BH at \(x\sim 1\).
Second, we consider a poloidal field generated by the vector potential in eq. (13). In this case, the nonzero components of vectors \({\bf e}\) and \({\bf b}\) in eq. (10) are
\[e_{1}= -\frac{zxbr_{g}\partial_{r}\rho}{(x^{2}+z^{2}\cos^{2}\theta)\sqrt {z^{2}\cos^{2}\theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x+x^{4}}},\] \[e_{2}= -\frac{zxb\partial_{\theta}\rho}{(x^{2}+z^{2}\cos^{2}\theta) \sqrt{x^{2}-x+z^{2}}\sqrt{z^{2}\cos^{2}\theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2} x+x^{4}}},\] \[b_{1}= -\frac{b\partial_{\theta}\rho}{\sin\theta\sqrt{z^{2}\cos^{2} \theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x+x^{4}}},\] \[b_{2}= \frac{\sqrt{x^{2}-x+z^{2}}br_{g}\partial_{r}\rho}{\sin\theta \sqrt{z^{2}\cos^{2}\theta(x^{2}-x+z^{2})+z^{2}x^{2}+z^{2}x+x^{4}}}, \tag{14}\]
where \(\rho\) and \(b\) are given in eqs. (11) and (13). The density derivatives with respect to \(r\) and \(\theta\) can be calculated analytically using eq. (11). However, the corresponding expressions are quite cumbersome and, thus, we omit them. The dimensionless coefficient \(V_{\rm B}\) in eq. (10) is \(V_{\rm B}=\mu/r_{g}\) for this field component.
## 5 Parameters of the system
In this section, we specify the values of the parameters of a neutrino and external fields, as well as describe some details of calculations.
We suppose that a neutrino is a Dirac particle possessing a nonzero magnetic moment \(\mu\). Its value is \(\mu=10^{-13}\mu_{\rm B}\), where \(\mu_{\rm B}\) is the Bohr magneton. This magnetic moment is below the best astrophysical upper bounds for neutrino magnetic moments established in ref. [35]. We assume that neutrinos do not have transition magnetic moments, i.e. we study spin oscillations within one neutrino flavor, e.g., for electron neutrinos.
Neutrinos interact with background matter within the standard model. The effective potential is given in eq. (10). We suppose that the disk consists of an electroneutral hydrogen plasma. In this case, the effective potential in eq. (10) depends on the electron number density only. The maximal density in the disk is taken as \(n_{e}^{\rm(max)}=10^{18}\,{\rm cm}^{-3}\). Such a value is consistent with the observations carried out in ref. [36] for a SMBH with \(M=10^{8}M_{\odot}\).
The matter density and velocity depend on \(r\) and \(\theta\) in the Polish doughnut model for the accretion disk. The typical density distribution is shown in figure 3 for different spins of BH. The accretion disk model has a free parameter \(K\) [see eq. (11)]. We vary \(K\) to make \(n_{e}^{\rm(max)}=10^{18}\,{\rm cm}^{-3}\) for all spins of BH.
The toroidal magnetic field has the same configuration in our calculations [see eqs. (12) or (13)]. We vary the parameter \(K_{m}\) in eq. (11) to reach \(|{\bf B}|_{\rm max}^{\rm(tor)}=320\,{\rm G}\). This magnetic field strength is \(|{\bf B}|_{\rm max}^{\rm(tor)}\sim 10^{-2}B_{\rm Edd}\), where
\[B_{\rm Edd}=10^{4}\,{\rm G}\times\left(\frac{M}{10^{9}M_{\odot}}\right)^{-1/2 }, \tag{12}\]
is the Eddington limit for the magnetic field in the vicinity of BH [37], which is \(B_{\rm Edd}\approx 3.2\times 10^{4}\,{\rm G}\) for \(M=10^{8}M_{\odot}\).
We consider two models of the poloidal magnetic field in the disk. First, the vector potential is given in eq. (12) [see also eq. (11)]. Second, \(A_{\mu}\) is given in eq. (13) [see also eq. (12)]. In the latter case, we choose the parameter \(b\) so that \(|{\bf B}|_{\rm max}^{\rm(pol)}=320\,{\rm G}\). This means that the toroidal and poloidal fields have comparable strengths.
The initial flux of neutrinos is emitted from the point \((r_{s},\theta_{s},\phi_{s})=(\infty,\pi/2,0)\). All incoming neutrinos are left polarized. First, we reconstruct the neutrino trajectory, namely, the dependence \(\theta(x)\), using eq. (3). Then, we use the \(\theta(x)\)-dependence in eqs. (10)-(11), which converts eq. (13) into an ordinary differential equation. Finally, we integrate eq. (13) with \(\boldsymbol{\Omega}_{x}\) in eq. (14) along each neutrino trajectory using the two-step Adams-Bashforth method developed in ref. [17]. We deal with \(\sim 2.5\times 10^{3}\) test particles in each run.
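To make the last step concrete, here is a minimal sketch of a two-step Adams-Bashforth integration of the effective Schrodinger equation for the spinor; the toy precession vector, the integration interval, the step number, and the identification of the left-handed component with the first slot of the spinor are illustrative assumptions rather than the actual fields and conventions of sections 3 and 4.

```python
import numpy as np

# Pauli matrices and the rotation U_2 = exp(i pi sigma_2 / 4) entering the effective Hamiltonian.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U2 = np.cos(np.pi/4) * np.eye(2) + 1j * np.sin(np.pi/4) * sy

def H_x(x, Omega_x):
    """Effective Hamiltonian H_x = -U_2 (sigma . Omega_x) U_2^dagger."""
    Ox, Oy, Oz = Omega_x(x)
    return -U2 @ (Ox*sx + Oy*sy + Oz*sz) @ U2.conj().T

def rhs(x, psi, Omega_x):
    return -1j * H_x(x, Omega_x) @ psi

def adams_bashforth2(Omega_x, x0, x1, n_steps):
    """Two-step Adams-Bashforth integration of i dpsi/dx = H_x psi from x0 to x1."""
    h = (x1 - x0) / n_steps
    psi = np.array([1.0, 0.0], dtype=complex)      # incoming left-polarized neutrino
    f_prev = rhs(x0, psi, Omega_x)
    psi = psi + h * f_prev                          # Euler step to start the two-step scheme
    for k in range(1, n_steps):
        x = x0 + k * h
        f = rhs(x, psi, Omega_x)
        psi = psi + h * (1.5 * f - 0.5 * f_prev)
        f_prev = f
    return psi

# Toy precession vector: a localized transverse contribution plus a small longitudinal one.
Omega_toy = lambda x: (0.3 * np.exp(-x**2), 0.0, 0.05)

psi_out = adams_bashforth2(Omega_toy, -20.0, 20.0, 20000)
# Survival probability of the component that starts out populated (slot ordering is an assumption).
P_LL = abs(psi_out[0])**2
print("P_LL =", P_LL, "  norm =", np.linalg.norm(psi_out))
```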
## 6 Results
In this section, we present the results of numerical simulations with the parameters given in section 5.
We mentioned in section 1 that active neutrinos are left-polarized, i.e. their spin is opposite to the particle momentum. If the neutrino spin precesses in an external field, the particle becomes sterile, i.e. we will observe an effective reduction of the neutrino flux. If we define the survival probability \(P_{\rm LL}\), i.e. the probability to remain left-handed after scattering, the observed neutrino flux is \(F_{\nu}=P_{\rm LL}F_{0}\), where \(F_{0}\) is the flux of 'scalar' particles, i.e. particles which propagate along geodesic lines without the necessity of tracking their polarization. Hence, \(F_{0}\) is the observed flux in the absence of spin oscillations. The detailed study of \(F_{0}\) was carried out in ref. [31]. Our main goal is to obtain \(F_{\nu}/F_{0}\) and examine its dependence on the parameters of the system.
First, we mention that, in the scattering of ultrarelativistic neutrinos, the gravitational interaction alone does not result in a neutrino spin-flip. This result was obtained in ref. [38] for weak gravitational lensing. This fact was confirmed in refs. [16; 17; 18] for strong gravitational lensing of neutrinos. Since our current simulations reveal the same feature, we omit the corresponding plot for \(F_{\nu}/F_{0}\), which is trivial.
The toroidal magnetic field is inherent in a Polish doughnut accretion disk. The impact of the toroidal magnetic field on neutrino spin oscillations turns out to be negligible. The process of neutrino scattering and spin oscillations in a toroidal field is schematically depicted in figure 1. As seen in figure 3, the toroidal magnetic field is concentrated in a relatively thin torus. Spin oscillations are efficient if there is a significant transverse component of the magnetic field. In figure 1, one can see that particles in regions I and III of the incoming flux interact mainly with a longitudinal magnetic field during their scattering. Although particles in region II interact with a transverse field, they are mainly within the shadow of the BH. Hence, such neutrinos fall into the BH and do not contribute to the observed flux. That is why a toroidal field of reasonable strength does not modify the picture of spin oscillations.
If we consider the neutrino interaction with matter in combination with the toroidal field, such external fields do not result in significant spin oscillations. Indeed, it was found in ref. [7] that the interaction with matter alone does not cause spin oscillations; it can only shift the resonance point. Thus, we omit the plots showing \(F_{\nu}/F_{0}\) for the case of the toroidal field and background matter.
The ratio of fluxes for neutrino gravitational scattering and the interaction with matter under the influence of both toroidal and poloidal fields is shown in figure 2. We depict \(F_{\nu}/F_{0}\) for different spins of the BH and various models of the poloidal field. The neutrino fluxes are given versus \(\theta_{\rm obs}=\theta(t\rightarrow+\infty)\) and \(\phi_{\rm obs}=\phi(t\rightarrow+\infty)\), which are the angular coordinates of an outgoing neutrino. First, we note that the plots in figure 2 are symmetric with respect to the equatorial plane. This is a consequence of the fact that the flux of incoming neutrinos is parallel to the equatorial plane (see also appendix B).
Since the incoming neutrinos are emitted from the point \(\theta_{s}=\pi/2\) and \(\phi_{s}=0\), the image of the BH is mainly at the point \(\theta_{\rm obs}\approx\pi/2\) and \(\phi_{\rm obs}\approx\pi\). This feature is seen especially in figures 2(a) and 2(b), which correspond to an almost nonrotating BH. Although figures 2(a) and 2(b) correspond to an almost Schwarzschild BH, these plots are not fully symmetric with respect to the vertical line \(\phi_{\rm obs}=\pi\). This happens since neutrinos interact with the rotating
Figure 1: Schematic illustration of the neutrino interaction with a magnetized Polish doughnut, shown in gray. The toroidal magnetic field and plasma velocity are concentrated mainly within the thin red torus. Neutrino trajectories, depicted in blue, are separated into three regions. Regions I and III contain particles which interact with the longitudinal magnetic field. Neutrinos in region II interact with the transverse field, but they mainly fall into the BH, shown as a black blob.
Figure 2: The observed neutrino flux normalized by \(F_{0}\) for neutrinos scattered off a rotating SMBH with \(M=10^{8}M_{\odot}\). The maximal number density in the disk is \(10^{18}\,\mathrm{cm}^{-3}\). The maximal strengths of toroidal and poloidal fields are \(320\,\mathrm{G}\). The neutrino magnetic moment is \(10^{-13}\mu_{\mathrm{B}}\). Panels (a) and (b): \(a=2\times 10^{-2}M\) (\(z=10^{-2}\)); panels (c) and (d): \(a=0.5M\) (\(z=0.25\)); panels (e) and (f): \(a=0.9M\) (\(z=0.45\)). Panels (a), (c), and (e) correspond to the poloidal field in eq. (10); panels (b), (d), and (f) – in eq. (11). The rest of the parameters is the same as in figure 3.
accretion disk, where the plasma moves with relativistic velocities. Again, referring to the schematic plot in figure 1, this means that particles in regions I and III have different diagonal elements in the effective Hamiltonian \(\hat{H}_{x}\) since the velocities of the plasma and the neutrinos are in the same (region I) or the opposite (region III) direction.
If we compare figures 2(a), 2(c), and 2(e) with figures 2(b), 2(d), and 2(f), we can see that spin oscillations for the poloidal field in eq. (10) are more intense than for eq. (11). For example, spin oscillations are almost absent in figure 2(f). This feature is explained by the fact that the poloidal field in eq. (11) has a significant strength only in a small volume in the vicinity of BH (see, e.g., figure 4), whereas the field in eq. (10) slowly decreases towards the outer edge of the disk.
One can also see in figure 2 that there is a strong dependence of \(F_{\nu}/F_{0}\) on the spin of the BH, especially for the poloidal field in eq. (11). This happens since the greater \(a\) is, the closer \(|\mathbf{B}|_{\rm max}^{\rm(pol)}\) is to the inner radius of the accretion disk; cf. figure 4.
There are small white areas in figure 2 which increase with \(a\). They appear because the 2D interpolation cannot correctly process plots with a locally insufficient number of points. This shortcoming can be eliminated by significantly increasing the number of test particles.
## 7 Conclusion
We have studied neutrino scattering off a rotating SMBH surrounded by a thick magnetized accretion disk. Neutrinos were supposed to interact with the BH gravitationally, as well as with the matter of the disk electroweakly. The nonzero magnetic moment of Dirac neutrinos also allowed them to interact with the magnetic field in the disk. The interaction with external fields resulted in the precession of the neutrino spin which, in turn, led to an effective reduction of the observed neutrino flux.
Our main goal was to find the observed neutrino flux accounting for spin oscillations, \(F_{\nu}\). It is shown in figure 2 for different spins of BH and various configurations of magnetic fields in the disk. We normalize \(F_{\nu}\) to \(F_{0}\), which is the flux of scalar particles, i.e. at no spin oscillations.
In the present work, we have used the model of the accretion disk called the magnetized Polish doughnut [20]. This model predicts the self-consistent distributions of the matter density, the angular velocity, and the toroidal magnetic field in the disk. Hence, our present results are an advance over the calculations in refs. [16; 17; 18], where the characteristics of accretion disks were taken from different sources.
Analogously to [16; 17; 18], we have confirmed here that the gravitational interaction alone does not cause a spin-flip of ultrarelativistic neutrinos in their gravitational scattering. This fact is valid in a wide range of spins of the BH. Thus, the only source of neutrino spin oscillations is the neutrino interaction with the magnetic field. We recall that the interaction with matter does not cause a spin-flip of ultrarelativistic neutrinos either. Moreover, we have found that the toroidal magnetic field within the magnetized Polish doughnut does not result in a significant change of the observed neutrino flux. This is a consequence of the rather compact localization of the toroidal field.
A configuration with only a toroidal component is known to be unstable. That is why we have assumed that a poloidal magnetic field is also present in the disk. We have considered two models of the poloidal field; cf. eqs. (10) and (11). The typical strengths of the toroidal and poloidal fields were taken to be equal.
It should be noted that the presence of a strong poloidal magnetic field in an accretion disk around SMBHs is required in ref. [39] for the formation of jets from these objects. A nonzero poloidal component in an accretion disk was obtained using the MHD simulations in curved spacetime in ref. [40]. The analytical estimates for the relation between \(B_{r}\) and \(B_{z}\) components of the poloidal field can be also derived (see, e.g., ref. [41]).
Large scale magnetic fields in a thin accretion disk were studied in ref. [42]. However, a significant neutrino spin-flip is unlikely to be caused by external fields in a thin accretion disk since the neutrino path inside such a disk is quite short.
Our results in figure 2 show that spin effects are more sizable for the poloidal field in eq. (10). We have also revealed the dependence of the observed fluxes on the spin of BH. For example, one can see in figure 2(f) that the observed flux is almost unchanged for a rapidly rotating SMBH with the poloidal field in eq. (11).
It should be noted that, in our simulations, we have used quite moderate strengths of magnetic fields, as well as the matter density which is observed near some SMBHs [36]. The neutrino magnetic moment was taken to be below the current astrophysical upper bound [35]. It makes our results quite plausible.
Comparing figure 2 with figure 3, as well as with figure 4, we can see that neutrino spin oscillations are an effective tool for the tomography of the magnetic field distribution in the vicinity of a BH. The structure of the magnetic field, observed with the help of neutrinos, is seen especially clearly for a relatively slowly rotating BH; cf. figures 2(a) and 2(b). Moreover, by comparing the fluxes at points which are symmetric with respect to the line \(\phi_{\rm obs}=\pi\), we can extract information about the accretion disk rotation.
Our results can be useful for the exploration of external fields in the vicinity of BHs with the help of neutrinos using existing or future neutrino telescopes [43; 44]. The penetrating power of neutrinos is much higher than that of photons. Perhaps, in the future, neutrino telescopes will seriously compete with facilities like the Event Horizon Telescope in the studies of the vicinities of BHs.
We can apply our results to constrain the quantity \(\mu B_{0}\) for neutrinos from a core-collapsing SN, which is expected in our galaxy [45]. Here, we rely on the poloidal magnetic field model in eq. (10) for definiteness. The predicted \(F_{\rm pred}(\theta_{\rm obs},\phi_{\rm obs})\) and observed \(F_{\rm obs}(\theta_{\rm obs},\phi_{\rm obs})\) neutrino fluxes in a certain direction \((\theta_{\rm obs},\phi_{\rm obs})\) are related by \(F_{\rm obs}=P_{\rm LL}F_{\rm pred}\), where \(P_{\rm LL}(\theta_{\rm obs},\phi_{\rm obs}|\mu B_{0})\) is the survival probability shown in figure 2. Using the constraint on the magnetic field in the vicinity of the SMBH in Sgr A\({}^{*}\), \(B<10^{2}\,\)G, obtained in ref. [46], as well as figure 2, an upper bound on \(\mu\) can be derived. However, this requires \(F_{\rm obs}\), which will be available only after the observation of SN neutrinos.
## Appendix A Magnetized Polish doughnut accounting for the poloidal magnetic field
In this appendix, we review the main properties of a magnetized Polish doughnut. A very detailed description of this model is given in ref. [20]. Here we present only the major expressions, adapted to our metric signature \((+,-,-,-)\), which is different from that of ref. [20].
All parameters of the disk depend on \(r\) and \(\theta\) owing to the axial symmetry of the metric in eq. (1). The electromagnetic field tensor has the form,
\[F_{\mu\nu}=E_{\mu\nu\alpha\beta}U_{f}^{\alpha}B^{\beta}, \tag{12}\]
where \(E^{\mu\nu\alpha\beta}=\frac{\varepsilon^{\mu\nu\alpha\beta}}{\sqrt{-g}}\) is the antisymmetric tensor in curved spacetime with \(\varepsilon^{tr\theta\phi}=1\). The four vectors of the fluid velocity in the disk and the toroidal magnetic field are \(U_{f}^{\mu}=(U_{f}^{t},0,0,U_{f}^{\phi})\) and \(B^{\mu}=(B^{t},0,0,B^{\phi})\). We assume that the specific angular momentum of a particle in the disk \(l=L/E\) is constant, \(l=l_{0}\). This allows one to find the components of \(U_{f}^{\mu}\) and \(B^{\mu}\),
\[U_{f}^{t} =\sqrt{\left|\frac{\mathcal{A}}{\mathcal{L}}\right|}\frac{1}{1- l_{0}\Omega},\quad U_{f}^{\phi}=\Omega U_{f}^{t},\] \[B^{\phi} =\sqrt{\frac{2p_{m}^{\rm(tor)}}{|\mathcal{A}|}},\quad B^{t}=l_{0 }B^{\phi}, \tag{10}\]
where \(\mathcal{L}=g_{tt}g_{\phi\phi}-g_{t\phi}^{2}\), \(\mathcal{A}=g_{\phi\phi}+2l_{0}g_{t\phi}+l_{0}^{2}g_{tt}\), and
\[\Omega=-\frac{g_{t\phi}+l_{0}g_{tt}}{g_{\phi\phi}+l_{0}g_{t\phi}}, \tag{11}\]
is the angular velocity in the disk.
The disk density \(\rho\) and the magnetic pressure \(p_{m}^{\rm(tor)}\) have the form,
\[\rho=\left[\frac{\kappa-1}{\kappa}\frac{W_{\rm in}-W}{K+K_{m}\mathcal{L}^{ \kappa-1}}\right]^{\frac{1}{\kappa-1}},\quad p_{m}^{\rm(tor)}=K_{m}\mathcal{L} ^{\kappa-1}\left[\frac{\kappa-1}{\kappa}\frac{W_{\rm in}-W}{K+K_{m}\mathcal{L} ^{\kappa-1}}\right]^{\frac{\kappa}{\kappa-1}}, \tag{12}\]
where \(K\), \(K_{m}\), and \(\kappa\) are the constants in the equations of state, \(p=Kw^{\kappa}\) and \(p_{m}^{\rm(tor)}=K_{m}\mathcal{L}^{\kappa-1}w^{\kappa}\). Here, \(p\) is the plasma pressure and \(w\) is the specific enthalpy. Following ref. [20], we take that \(\kappa=4/3\). The form of the disk depends on the potential \(W\),
\[W(r,\theta)=\frac{1}{2}\ln\left|\frac{\mathcal{L}}{\mathcal{A}}\right|. \tag{13}\]
The parameter \(W_{\rm in}\) in eq. (12) is the value of \(W\) at the border of the disk.
Equations (10)-(13) completely define all the characteristics of the disk. First, using eq. (13), we get that points \((r,\theta)\) inside the disk obey the condition \(W\leq W_{\rm in}\). Then, we apply eq. (12) to find \(\rho\) and \(p_{m}^{\rm(tor)}\). We define the effective toroidal field \(|\mathbf{B}|^{\rm(tor)}=\sqrt{2p_{m}^{\rm(tor)}}\). The maximal value of \(|\mathbf{B}|^{\rm(tor)}\) is equated to the strength expected in the disk. This gives us one of the equations defining the constants \(K\) and \(K_{m}\). Another equation appears if we associate \(\rho_{\rm max}\) with the maximal plasma density present in the disk. Finally, eq. (10) gives us the rest of the parameters.
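As an illustration of this procedure, the following sketch evaluates the potential \(W\) of eq. (13) and the density and toroidal magnetic pressure of eq. (12) at a single point; the parameter values in the example are placeholders in arbitrary units (not the calibrated \(K\), \(K_{m}\), and \(\lambda_{0}\) quoted below), and the absolute value of \(\mathcal{L}\) is taken when raising it to the fractional power \(\kappa-1\).

```python
import numpy as np

def kerr_metric_tphi(r, theta, rg, a):
    """Covariant t-phi block of the Kerr metric (1) in Boyer-Lindquist coordinates."""
    Sigma = r**2 + a**2 * np.cos(theta)**2
    g_tt = 1.0 - r * rg / Sigma
    g_tphi = r * rg * a * np.sin(theta)**2 / Sigma
    Xi = (r**2 + a**2) * Sigma + r * rg * a**2 * np.sin(theta)**2
    g_phiphi = -Xi * np.sin(theta)**2 / Sigma
    return g_tt, g_tphi, g_phiphi

def doughnut_point(r, theta, rg, a, l0, W_in, K, K_m, kappa=4.0/3.0):
    """Density and toroidal magnetic pressure of the magnetized Polish doughnut at (r, theta).

    Returns (rho, p_m_tor); both are set to zero outside the disk, i.e. where W > W_in.
    """
    g_tt, g_tphi, g_phiphi = kerr_metric_tphi(r, theta, rg, a)
    L = g_tt * g_phiphi - g_tphi**2
    A = g_phiphi + 2.0 * l0 * g_tphi + l0**2 * g_tt
    W = 0.5 * np.log(abs(L / A))
    if W > W_in:
        return 0.0, 0.0
    bracket = (kappa - 1.0) / kappa * (W_in - W) / (K + K_m * abs(L)**(kappa - 1.0))
    rho = bracket**(1.0 / (kappa - 1.0))
    p_m_tor = K_m * abs(L)**(kappa - 1.0) * bracket**(kappa / (kappa - 1.0))
    return rho, p_m_tor

# Illustrative evaluation (geometric units with r_g = 1; K, K_m, l0 are placeholders in arbitrary units).
rho, pm = doughnut_point(r=5.0, theta=np.pi/2, rg=1.0, a=0.25, l0=2.3,
                         W_in=-1e-5, K=1.0, K_m=0.1)
print("rho =", rho, " effective |B|_tor =", np.sqrt(2.0 * pm))
```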
The distributions of the normalized electron number density \(n_{e}/10^{18}\,{\rm cm}^{-3}\), where \(n_{e}=\rho/m_{p}\), and the effective toroidal magnetic field \(|\mathbf{B}|^{\rm(tor)}=\sqrt{2p_{m}^{\rm(tor)}}\), measured in Gauss, are shown in figure 3 for different spins of the BH. We use the dimensionless variables \(\tilde{K}=r_{g}^{4(1-\kappa)}K\) and \(\tilde{K}_{m}=r_{g}^{2(1-\kappa)}K_{m}\). In all cases in figure 3, \(n_{e}^{\rm(max)}=10^{18}\,{\rm cm}^{-3}\) and \(|\mathbf{B}|_{\rm max}^{\rm(tor)}=320\,{\rm G}\). Figure 3 corresponds to \(W_{\rm in}=-10^{-5}\) and \(\lambda_{0}=0.6(\lambda_{\rm mb}+\lambda_{\rm ms})\), where \(\lambda_{\rm mb,ms}=\lambda(x_{\rm mb,ms})\) and
\[\lambda(x)=\frac{x^{2}-z\sqrt{2x}+z^{2}}{\sqrt{2}x^{3/2}-\sqrt{2x}+z}. \tag{14}\]
Figure 3: The distributions of the normalized electron number density \(n_{e}/10^{18}\,{\rm cm}^{-3}\) [panels (a), (c), and (e)] and the effective toroidal magnetic field \(|{\bf B}|^{({\rm tor})}=\sqrt{2p_{m}^{({\rm tor})}}\), in Gauss, [panels (b), (d), and (f)] for different spins of BH. Panels (a) and (b): \(a=2\times 10^{-2}M\) (\(z=10^{-2}\)), \(\tilde{K}=2.55\times 10^{-31}\), and \(\tilde{K}_{m}=3.59\times 10^{-41}\); panels (c) and (d): \(a=0.5M\) (\(z=0.25\)) \(\tilde{K}=3.6\times 10^{-31}\), and \(\tilde{K}_{m}=4.66\times 10^{-41}\); panels (e) and (f): \(a=0.9M\) (\(z=0.45\)) \(\tilde{K}=6.5\times 10^{-31}\), and \(\tilde{K}_{m}=7.2\times 10^{-41}\). The distances in the horizontal and vertical axes are in \(r_{g}\).
That is, we study the disk corotating with BH. The quantities \(x_{\rm mb}\) and \(x_{\rm ms}\) are the radii of the marginally bound and marginally stable Keplerian orbits [47].
The model in ref. [20] provides only the toroidal magnetic field. It was proved in refs. [48; 49] that a purely toroidal or a purely poloidal magnetic field is unstable. A superposition of these fields can be stable since it has a nonzero linking number and, thus, a nonzero magnetic helicity. That is why we suppose that a nonzero poloidal field is present in the disk. We consider two models for such a field.
First, we take the following vector potential:
\[A_{t}=Ba\left[1-\frac{rr_{g}}{2\Sigma}(1+\cos^{2}\theta)\right],\quad A_{\phi} =-\frac{B}{2}\left[r^{2}+a^{2}-\frac{a^{2}rr_{g}}{\Sigma}(1+\cos^{2}\theta) \right]\sin^{2}\theta, \tag{10}\]
where \(\Sigma\) is given in eq. (2). The potential \(A_{\mu}\) in eq. (10) was first proposed in ref. [50] to describe the electromagnetic field in the vicinity of a rotating BH which asymptotically tends to a constant and uniform magnetic field \({\bf B}=B{\bf e}_{z}\). An analogous magnetic field configuration is used, e.g., in ref. [51] to explain the acceleration of cosmic rays by BHs.
The assumption of a constant \(B\) in eq. (10) is unphysical since the magnetic field should vanish towards the outer edge of the disk. Thus, following ref. [34], we assume that \(B\propto B_{0}r^{-5/4}\). The strength \(B_{0}\) at \(r\sim r_{g}\) is chosen to be close to \(|{\bf B}|_{\rm max}^{\rm(tor)}\). Previously, such a model of the poloidal field was used in refs. [16; 17; 18] in the whole space outside the BH. Now, we suppose that it exists only inside the disk given by the condition \(W\leq W_{\rm in}\).
Second, we use the poloidal field, proposed in ref. [21],
\[A_{\phi}=b\rho, \tag{11}\]
where \(b\) is a constant parameter and \(\rho\) is given in eq. (10). It should be noted that the poloidal field, corresponding to eq. (11), exists only inside the disk defined by \(W\leq W_{\rm in}\).
The only nonzero components of \(F_{\mu\nu}\), corresponding to eq. (11), are \(F_{r\phi}=b\partial_{r}\rho\) and \(F_{\theta\phi}=b\partial_{\theta}\rho\). Using eq. (10), we find the magnetic pressure \(p_{m}^{\rm(pol)}=-g_{\mu\nu}B^{\mu}B^{\nu}/2\),
\[p_{m}^{\rm(pol)}=\frac{b^{2}}{2\sin^{2}\theta\Sigma}\left|\frac{{\cal L}}{{ \cal A}}\right|\left[\frac{1}{\Delta}(\partial_{\theta}\rho)^{2}+(\partial_{r }\rho)^{2}\right], \tag{12}\]
where \(\Delta\) is given in eq. (2). We introduce the effective poloidal magnetic field \(|{\bf B}|^{\rm(pol)}=\sqrt{2p_{m}^{\rm(pol)}}\). The parameter \(b\) in eq. (11) is fixed when we suppose that the maximal value of \(|{\bf B}|^{\rm(pol)}\) equals to \(|{\bf B}|_{\rm max}^{\rm(tor)}\), which is defined earlier.
We show the distribution of \(|{\bf B}|^{\rm(pol)}\) for eq. (12) for different spins of BH in figure 4. We present the situations when \(z=10^{-2}\) and \(z=0.5\). In the case of a rapidly rotating BH with \(z=0.9\), \(|{\bf B}|^{\rm(pol)}\) has a very sharp maximum, which is hardly visible in a contour plot. Therefore, we omit it.
## Appendix B Solution of the Schrodinger equation for neutrinos below the equatorial plane
In our problem, the flux of incoming neutrinos lies both above and below the equatorial plane. We solve eq. (10) only for the up particles. The solution for the down particles can be reconstructed automatically by applying symmetry arguments.
Suppose that the effective Hamiltonian in eq. (3.5) for up particles is \(\hat{H}_{u}\). The Hamiltonian for down particles is \(\hat{H}_{d}=-\hat{H}_{u}^{*}\), where the star means the complex conjugation. The formal solution of eq. (3.5) is
\[\psi_{u}(x)=\left[1-\mathrm{i}\int_{-\infty}^{x}\hat{H}_{u}(x^{\prime})\mathrm{ d}x^{\prime}-\frac{1}{2}\int_{-\infty}^{x}\hat{H}_{u}(x^{\prime})\mathrm{d}x^{ \prime}\int_{-\infty}^{x^{\prime}}\hat{H}_{u}(x^{\prime\prime})\mathrm{d}x^{ \prime\prime}+\cdots\right]\psi_{-\infty},\] (B.1)
where \(\psi_{-\infty}^{\mathrm{T}}=(1,0)\) is the initial condition. Taking the complex conjugation of eq. (B.1), we get
\[\psi_{u}^{*}(x)= \left[1+\mathrm{i}\int_{-\infty}^{x}\hat{H}_{u}^{*}(x^{\prime}) \mathrm{d}x^{\prime}-\frac{1}{2}\int_{-\infty}^{x}\hat{H}_{u}^{*}(x^{\prime}) \mathrm{d}x^{\prime}\int_{-\infty}^{x^{\prime}}\hat{H}_{u}^{*}(x^{\prime\prime })\mathrm{d}x^{\prime\prime}+\cdots\right]\psi_{-\infty}\] \[= \left[1-\mathrm{i}\int_{-\infty}^{x}\hat{H}_{d}(x^{\prime}) \mathrm{d}x^{\prime}-\frac{1}{2}\int_{-\infty}^{x}\hat{H}_{d}(x^{\prime}) \mathrm{d}x^{\prime}\int_{-\infty}^{x^{\prime}}\hat{H}_{d}(x^{\prime\prime}) \mathrm{d}x^{\prime\prime}+\cdots\right]\psi_{-\infty}.\] (B.2)
Thus, \(\psi_{d}(x)=\psi_{u}^{*}(x)\) since the initial condition is real and coincides for both particles.
Therefore \(P_{\mathrm{LL}}^{(u)}=|\psi_{+\infty}^{(u,\mathrm{L})}|^{2}=P_{\mathrm{LL}}^{( d)}=|\psi_{+\infty}^{(d,\mathrm{L})}|^{2}\). When we map the flux of outgoing down particles, we should take into account that \(\phi_{\mathrm{obs}}^{(d)}=\phi_{\mathrm{obs}}^{(u)}\) and \(\theta_{\mathrm{obs}}^{(d)}=\pi-\theta_{\mathrm{obs}}^{(u)}\).
|
2304.06214 | Traveling modulating pulse solutions with small tails for a nonlinear
wave equation in periodic media | Traveling modulating pulse solutions consist of a small amplitude pulse-like
envelope moving with a constant speed and modulating a harmonic carrier wave.
Such solutions can be approximated by solitons of an effective nonlinear
Schrodinger equation arising as the envelope equation. We are interested in a
rigorous existence proof of such solutions for a nonlinear wave equation with
spatially periodic coefficients. Such solutions are quasi-periodic in a
reference frame co-moving with the envelope. We use spatial dynamics, invariant
manifolds, and near-identity transformations to construct such solutions on
large domains in time and space. Although the spectrum of the linearized
equations in the spatial dynamics formulation contains infinitely many
eigenvalues on the imaginary axis or in the worst case the complete imaginary
axis, a small denominator problem is avoided when the solutions are localized
on a finite spatial domain with small tails in far fields. | Tomas Dohnal, Dmitry E. Pelinovsky, Guido Schneider | 2023-04-13T01:43:39Z | http://arxiv.org/abs/2304.06214v2 | # Traveling modulating pulse solutions with small tails
###### Abstract.
Traveling modulating pulse solutions consist of a small amplitude pulse-like envelope moving with a constant speed and modulating a harmonic carrier wave. Such solutions can be approximated by solitons of an effective nonlinear Schrodinger equation arising as the envelope equation. We are interested in a rigorous existence proof of such solutions for a nonlinear wave equation with spatially periodic coefficients. Such solutions are quasi-periodic in a reference frame co-moving with the envelope. We use spatial dynamics, invariant manifolds, and near-identity transformations to construct such solutions on large domains in time and space. Although the spectrum of the linearized equations in the spatial dynamics formulation contains infinitely many eigenvalues on the imaginary axis or in the worst case the complete imaginary axis, a small denominator problem is avoided when the solutions are localized on a finite spatial domain with small tails in far fields.
## 1. Introduction
We consider the semi-linear wave equation
\[\partial_{t}^{2}u(x,t)-\partial_{x}^{2}u(x,t)+\rho(x)u(x,t)=\gamma r(x)u(x,t)^ {3},\quad x,t\in\mathbb{R}, \tag{1}\]
where \(x,t,u(x,t)\in\mathbb{R}\), \(\rho(x)=\rho(x+2\pi)\), \(r(x)=r(x+2\pi)\), and \(\gamma=\pm 1\). We will assume that \(\rho(x)\) and \(r(x)\) are strictly positive for every \(x\) and even with respect to \(x=0\). The purpose of this paper is to prove the existence of traveling modulating pulse solutions. These solutions will be constructed as bifurcations from the trivial solution \(u=0\).
**Remark 1.1**.: _The semi-linear wave equation (1) can be considered as a phenomenological model for the description of electromagnetic waves in photonic crystal fibers. Such fibers show a much larger (structural) dispersion than homogeneous glass fibers. As a consequence they are much better able to support nonlinear localized structures such as pulses than their homogeneous counterpart. Most modern technologies for the transport of information through glass fibers use these pulses, cf. [13]. Sending a light pulse corresponds to sending the digital information "one" over the zero background. Physically such a pulse consists of a localized envelope which modulates an underlying electromagnetic carrier wave._
The traveling modulating pulse solutions in which we are interested are of small amplitude since they bifurcate from the trivial solution \(u=0\). Hence we consider the linearized problem first. The linear wave equation
\[\partial_{t}^{2}u(x,t)-\partial_{x}^{2}u(x,t)+\rho(x)u(x,t)=0,\quad x,t\in \mathbb{R},\]
with a \(2\pi\)-periodic coefficient function \(\rho\) is solved by the family of Bloch modes
\[u(x,t)=e^{\pm\mathrm{i}\omega_{n}(l)t}e^{\mathrm{i}lx}f_{n}(l,x),\quad n\in \mathbb{N},\quad l\in\mathbb{B},\]
where
\[\gamma_{n_{0}}(l_{0})=\frac{3\gamma}{\omega_{n_{0}}(l_{0})}\int_{0}^{2\pi}r(x)|f_{ n_{0}}(l_{0},x)|^{4}dx.\]
The NLS equation (4) possesses traveling pulse solutions if \(\omega_{n_{0}}^{\prime\prime}(l_{0})\gamma_{n_{0}}(l_{0})>0\) in the form:
\[A(X,T)=\gamma_{1}\operatorname{sech}(\gamma_{2}(X-\tilde{c}T))\,\mathrm{e}^{\frac{\mathrm{i}(2\tilde{c}X-\tilde{c}^{2}T)}{2\omega_{n_{0}}^{\prime\prime}(l_{0})}}\,\mathrm{e}^{-\mathrm{i}\tilde{\omega}T} \tag{5}\]
where \(\tilde{c}\) and \(\tilde{\omega}\) are arbitrary parameters such that \(\tilde{\omega}\omega_{n_{0}}^{\prime\prime}(l_{0})<0\) and the positive constants \(\gamma_{1}\) and \(\gamma_{2}\) are uniquely given by
\[\gamma_{1}=\sqrt{\frac{2|\tilde{\omega}|}{|\gamma_{n_{0}}(l_{0})|}},\quad \gamma_{2}=\sqrt{\frac{2|\tilde{\omega}|}{|\omega_{n_{0}}^{\prime\prime}(l_{0 })|}}. \tag{6}\]
Without loss of generality, we can set \(\tilde{c}=0\) and \(-\tilde{\omega}=\operatorname{sgn}(\omega_{n_{0}}^{\prime\prime}(l_{0}))= \operatorname{sgn}(\gamma_{n_{0}}(l_{0}))\), due to the scaling properties of the NLS equation (4).
**Remark 1.2**.: _As an example consider the spatially homogeneous case with \(\rho(x)=1\) and \(r(x)=1\), i.e., the semi-linear wave equation with constant coefficients. Then, we can re-order the eigenvalues and define_
\[f_{n}(l,x)=\frac{1}{\sqrt{2\pi}}\mathrm{e}^{\mathrm{i}nx},\quad\omega_{n}(l): =\sqrt{1+(n+l)^{2}},\quad n\in\mathbb{Z},\quad l\in\mathbb{B}, \tag{7}\]
_producing_
\[c_{g}=\omega_{n_{0}}^{\prime}(l_{0})=\frac{n_{0}+l_{0}}{\omega_{n_{0}}(l_{0})},\quad\omega_{n_{0}}^{\prime\prime}(l_{0})=\frac{1}{\omega_{n_{0}}(l_{0})^{3} },\quad\gamma_{n_{0}}(l_{0})=\frac{3\gamma}{2\pi\omega_{n_{0}}(l_{0})}. \tag{8}\]
_The traveling pulse solutions exist for \(\gamma=1\) with \(\tilde{\omega}=-1\) since \(\omega_{n_{0}}^{\prime\prime}(l_{0})>0\)._
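These expressions can be verified directly from (7): differentiating \(\omega_{n_{0}}(l)=\sqrt{1+(n_{0}+l)^{2}}\) gives
\[\omega_{n_{0}}^{\prime}(l_{0})=\frac{n_{0}+l_{0}}{\omega_{n_{0}}(l_{0})},\qquad \omega_{n_{0}}^{\prime\prime}(l_{0})=\frac{\omega_{n_{0}}(l_{0})^{2}-(n_{0}+l_{0})^{2}}{\omega_{n_{0}}(l_{0})^{3}}=\frac{1}{\omega_{n_{0}}(l_{0})^{3}},\]
and since \(|f_{n_{0}}(l_{0},x)|^{4}=(2\pi)^{-2}\) and \(r\equiv 1\), the integral in the definition of \(\gamma_{n_{0}}(l_{0})\) equals \((2\pi)^{-1}\), which yields \(\gamma_{n_{0}}(l_{0})=\frac{3\gamma}{2\pi\omega_{n_{0}}(l_{0})}\).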
**Remark 1.3**.: _In [1] an approximation result was established that guarantees that wave-packet solutions of the semi-linear wave equation (1) with periodic coefficients can be approximated by solutions of the NLS equation (4) on an \(\mathcal{O}(\varepsilon^{-2})\)-time scale via \(u_{\mathrm{app}}\) given by (3). In [1] this approximation was extended to the \(d\)-dimensional case._
_Existence of standing and moving modulating pulse solutions in homogeneous and periodic media has been considered beyond the \(\mathcal{O}(\varepsilon^{-2})\)-time scale. Depending on the problem, we have to distinguish between pulse solutions which decay to zero for \(|x|\to\infty\) and generalized pulse solutions which have some small tails for large values of \(|x|\)._
**Remark 1.4**.: _In the spatially homogeneous case, i.e. if \(\rho=r=1\), the modulating pulse solutions are time-periodic in a frame co-moving with the envelope. Time-periodic solutions with finite energy are called breather solutions. However, it cannot be expected that such solutions with finite energy do exist in general, according to the non-persistence of breathers result for nonlinear wave equations in homogeneous media [1, 2, 3]. Nevertheless, generalized breather solutions, i.e., modulating pulse solutions with small tails, do exist. Such solutions were constructed in [1] with the help of spatial dynamics, invariant manifold theory and normal form theory. In general, such solutions can only be constructed on large, but finite, intervals in \(\mathbb{R}\), cf. [1, 1]._
**Remark 1.5**.: _In the spatially periodic case standing generalized modulating pulse solutions of the semi-linear wave equation (1) have been constructed in [1]. These solutions are time-periodic, i.e., again breather solutions, but in contrast to the homogeneous case true
spatially localized solutions can be constructed by properly tailoring the periodic coefficients. In [1] breather solutions were constructed by spatial dynamics in the phase space of time-periodic solutions, invariant manifold theory and normal form theory. With the same approach in [10] such solutions were constructed for a cubic Klein-Gordon equation on an infinite periodic necklace graph. The existence of large amplitude breather solutions of the semi-linear wave equation (1) was shown in [11, 12] via a variational approach. Breather solutions were recently considered in [13] for quasi-linear wave equations with periodic coefficients._
**Remark 1.6**.: _To our knowledge traveling modulating pulse solutions have not been constructed before for the semi-linear wave equation (1) with spatially periodic coefficients. For the Gross-Pitaevskii equation with a periodic potential such solutions were constructed in [14] by using the coupled-mode approximation and in [15, Chapter 5.6] by using the NLS approximation. The Gross-Pitaevskii equation has a phase-rotational symmetry which is not present in the semi-linear wave equation (1). Another new aspect is the fact that in the present paper the normal form transformations are infinite-dimensional in contrast to the existing literature._
In the spatially periodic case traveling modulating solutions of the semi-linear wave equation (1) in general are quasi-periodic in the frame co-moving with the envelope. Hence their construction requires the use of three spatial variables rather than the two spatial variables used in the previous works [10] and [14]. However, although the spectrum of the linearized equations in the spatial dynamics formulation contains infinitely many eigenvalues
Figure 2. Eigenvalues of the spatial dynamics formulation, see (22) below, are dense on the imaginary axis. However, due to the convolution structure w.r.t. the \(z\)-variable, see Theorem 1.7, for a certain power of \(\varepsilon\) only a part of the linear operator has to be taken into account. For controlling the order \(\mathcal{O}(\varepsilon)\) of the solution only the part \(A_{1}(\omega,c)\) has to be considered. The central spectrum of \(A_{1}(\omega,c)\) is sketched in the left panel. In the middle panel the central spectrum of \(A_{1}(\omega,c)\) and \(A_{3}(\omega,c)\) is sketched. It plays a role for controlling the order \(\mathcal{O}(\varepsilon^{3})\). The right panel shows a sketch of the central spectrum of \(A_{1}(\omega,c)\), \(A_{3}(\omega,c)\) and \(A_{5}(\omega,c)\) which plays a role for controlling the order \(\mathcal{O}(\varepsilon^{5})\). In all cases there is a spectral gap between zero and the rest of the spectrum.
on the imaginary axis or in the worst case the complete imaginary axis, a small denominator problem is avoided by considering the problem on a finite spatial domain and by allowing for small tails, as illustrated in Figure 2.
The following result will be proven in this work. Figure 3 illustrates the construction of a generalized modulating pulse solution as described in the following theorem.
**Theorem 1.7**.: _Let \(\rho\) and \(r\) be \(2\pi\)-periodic, bounded, strictly positive, and even functions. Assume \(\gamma\neq 0\) and Assumption 4.11 below. Choose \(n_{0}\in\mathbb{N}\) and \(l_{0}>0\) such that the following conditions are satisfied:_
\[\omega_{n}(l_{0})\neq\omega_{n_{0}}(l_{0}),\qquad\forall n\neq n_{0}, \tag{9}\]
\[\omega^{\prime}_{n_{0}}(l_{0})\neq\pm 1,\qquad\omega^{\prime\prime}_{n_{0}}(l_{0 })\neq 0, \tag{10}\]
_and_
\[\omega^{2}_{n}(ml_{0})\neq m^{2}\omega^{2}_{n_{0}}(l_{0}),\quad m\in\{3,5, \ldots 2N+1\},\quad\forall n\in\mathbb{N}, \tag{11}\]
_for some fixed \(N\in\mathbb{N}\). Then there are \(\varepsilon_{0}>0\) and \(C>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\) there exist traveling modulating pulse solutions of the semi-linear wave equation (1) in the form_
\[u(x,t)=v(\xi,z,x)\quad\text{with}\ \,\,\xi=x-c_{g}t,\ \ z=l_{0}x-\omega t, \tag{12}\]
_where \(c_{g}=\omega^{\prime}_{n_{0}}(l_{0})\), \(\omega=\omega_{n_{0}}(l_{0})+\widetilde{\omega}\varepsilon^{2}\) with \(\widetilde{\omega}=-\mathrm{sgn}(\omega^{\prime\prime}_{n_{0}}(l_{0}))=- \mathrm{sgn}(\gamma_{n_{0}}(l_{0}))\), and \(v:[-\varepsilon^{-(2N+1)},\varepsilon^{-(2N+1)}]\times\mathbb{R}\times \mathbb{R}\to\mathbb{R}\) satisfies_
\[v(\xi,z,x)=v(\xi,z+2\pi,x)=v(\xi,z,x+2\pi),\]
_and_
\[\sup_{\xi\in[-\varepsilon^{-(2N+1)},\ \varepsilon^{-(2N+1)}]}|v(\xi,z,x)-h(\xi,z, x)|\leq C\varepsilon^{2N}. \tag{13}\]
_The function \(h:\mathbb{R}\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) satisfies_
\[h(\xi,z,x)=h(\xi,z+2\pi,x)=h(\xi,z,x+2\pi),\qquad\lim_{|\xi|\to\infty}h(\xi,z, x)=0,\]
_and_
\[\sup_{\xi,z,x\in\mathbb{R}}|h(\xi,z,x)-h_{\mathrm{app}}(\xi,z,x)|\leq C \varepsilon^{2}, \tag{14}\]
_with_
\[h_{\mathrm{app}}(\xi,z,x)=\varepsilon\gamma_{1}\,\mathrm{sech}(\varepsilon \gamma_{2}\xi)f_{n_{0}}(l_{0},x)\mathrm{e}^{\mathrm{i}z}+c.c.. \tag{15}\]
_The constants \(\gamma_{1},\gamma_{2}\) are defined in (6), and the \(2\pi\)-periodic function \(f_{n_{0}}(l_{0},\cdot)\) is a solution of (2). If \(\rho\) and \(r\) are smooth functions of \(x\), then \(v\) is a smooth function of \(\xi\), \(z\), and \(x\)._
**Remark 1.8**.: _Assumption 4.11 is of technical nature and guarantees the existence of infinite-dimensional invariant manifolds in the construction of the modulating pulse solutions. It is satisfied, for instance, if eigenvalues of the linearized operators on and near \(\mathrm{i}\mathbb{R}\setminus\{0\}\) are semi-simple. An extended result can be obtained in the case of double eigenvalues, see Remark 4.15 below._
**Remark 1.9**.: _The function \(h\) solves a second-order differential equation that is an \(\mathcal{O}(\varepsilon)\)-perturbation of the stationary NLS equation. We select \(h\) to be a homoclinic orbit with exponential decay to \(0\) at infinity that is \(\mathcal{O}(\varepsilon^{2})\)-close to the NLS approximation (15), cf. (14), computed at the pulse solution (5) for \(\tilde{c}=0\) and \(\widetilde{\omega}=-\mathrm{sgn}(\omega^{\prime\prime}_{n_{0}}(l_{0}))=-\mathrm{sgn}(\gamma_{n_{0}}(l_{0}))\)._
**Remark 1.10**.: _If we take a solution \(v\) of Theorem 1.7 for \(t=0\) as an initial condition for the semi-linear wave equation (1), and take arbitrary initial conditions outside of the interval \([-\varepsilon^{-(2N+1)},\varepsilon^{-(2N+1)}]\), then due to the finite maximal speed of propagation \(c_{\max}=\mathcal{O}(1)>0\), the solutions of Theorem 1.7 also exist for all_
\[\{(x,t):\quad t\in[0,\varepsilon^{-(2N+1)}/c_{\max}],\quad x\in[-\varepsilon^{ -(2N+1)}+c_{\max}t,\varepsilon^{-(2N+1)}-c_{\max}t]\}.\]
_Hence for \(N\in\mathbb{N}\) the modulated pulse solutions are approximated by \(h_{\text{app}}\) much longer than on the \(\mathcal{O}(\varepsilon^{-2})\)-time scale guaranteed by the approximation theorem given in [1]._
**Remark 1.11**.: _If the non-resonance condition (11) is satisfied for all odd \(m\geq 3\), then \(N\) can be chosen arbitrarily large, but has to be fixed. The result of [10] was improved in [10] to exponentially small tails and exponentially long time intervals w.r.t. \(\varepsilon\). It is not obvious that the exponential smallness result can be transferred to the spatially periodic case. We also do not use the Hamiltonian setup from [10] because it is not clear how the Hamiltonian structure of the semi-linear wave equation (1) can be developed in the spatial dynamics formulation._
We shall describe the strategy of the proof of Theorem 1.7. As in [10, 10, 10] the construction of the modulating pulse solutions is based on a combination of spatial dynamics, normal form transformations, and invariant manifold theory. Plugging the ansatz (12) into (1), we obtain an evolutionary system w.r.t. the unbounded space variable \(\xi\), the spatial dynamics formulation, i.e., we obtain a system of the form
\[\partial_{\xi}\widetilde{u}=M(\partial_{z},\partial_{x},x)\widetilde{u}+ \widetilde{N}(\partial_{z},\partial_{x},x,\widetilde{u}), \tag{16}\]
with \(M\widetilde{u}\) linear and \(\widetilde{N}\) nonlinear in \(\widetilde{u}\), which is a vector containing \(v\) and derivatives of \(v\). For all values of the bifurcation parameter \(0<\varepsilon\ll 1\) there are infinitely many eigenvalues of \(M(\partial_{z},\partial_{x},x)\) on the imaginary axis, cf. Figure 2, and hence the center manifold reduction
Figure 3. A generalized modulating pulse solution as constructed in Theorem 1.7 with \(\mathcal{O}(\varepsilon^{2N})\) tails existing for \(x\) in an interval of length \(\mathcal{O}(\varepsilon^{-(2N+1)})\) with an envelope advancing with group velocity \(c_{g}=\omega_{n_{0}}^{\prime}(l_{0})\), modulating a carrier wave advancing with phase velocity \(c_{p}=\omega_{n_{0}}(l_{0})/l_{0}\), and leaves behind the standing periodic Bloch wave. The wavelength of the carrier wave and the period of the coefficients \(\rho\), \(r\) are of a comparable order.
is of no use. However, the system is of the form
\[\partial_{\xi}\widetilde{u}_{0} =M_{0}\widetilde{u}_{0}+\widetilde{N}_{0}(\widetilde{u}_{0}, \widetilde{u}_{r}), \tag{17}\] \[\partial_{\xi}\widetilde{u}_{r} =M_{r}\widetilde{u}_{r}+\widetilde{N}_{r}(\widetilde{u}_{0}, \widetilde{u}_{r})+H_{r}(\widetilde{u}_{0}), \tag{18}\]
where \(\widetilde{u}_{0}\) is a vector in \(\mathbb{C}^{2}\) corresponding to the eigenvalues of \(M\) which are close to zero and where \(\widetilde{u}_{r}\) corresponds to the infinite-dimensional remainder, i.e., to all the eigenvalues of \(M\) which are bounded away from zero for small \(|\varepsilon|\). For \(\varepsilon=0\) all eigenvalues of \(M_{0}\) are zero. The nonlinearity in the \(\widetilde{u}_{r}\)-equation is split into two parts such that \(\widetilde{N}_{r}(\widetilde{u}_{0},0)=0\). By finitely many normal form transformations in the \(\widetilde{u}_{r}\)-equation we can achieve that the remainder term \(H_{r}\) in (18) has the property \(H_{r}(\widetilde{u}_{0})=\mathcal{O}(|\widetilde{u}_{0}|^{2N+2})\) where \(N\) is an arbitrary, but fixed number, if certain non-resonance conditions are satisfied, cf. Remark 3.6. Concerning orders of \(\varepsilon\), we have \(\widetilde{u}_{0}=\mathcal{O}(\varepsilon)\) and \(\widetilde{u}_{r}=\mathcal{O}(\varepsilon^{2N+2})\). Hence, the finite-dimensional subspace \(\{\widetilde{u}_{r}=0\}\) is approximately invariant, and setting the highest order-in-\(\varepsilon\) term \(H_{r}(\tilde{u}_{0})=\mathcal{O}(\varepsilon^{2N+2})\) to \(0\), we obtain the reduced system
\[\partial_{\xi}\widetilde{u}_{0}=M_{0}\widetilde{u}_{0}+\widetilde{N}_{0}( \widetilde{u}_{0},0).\]
For the reduced system, a homoclinic solution inside the subspace \(\{\widetilde{u}_{r}=0\}\) can be found, which bifurcates with respect to \(\varepsilon\) from the trivial solution. The persistence of this solution for the system (17)-(18) cannot be expected, since the finite-dimensional subspace \(\{\widetilde{u}_{r}=0\}\) is not truly invariant for (17)-(18), and therefore the necessary intersection of the stable and unstable manifolds is unlikely to happen in an infinite-dimensional phase space. However, the approximate homoclinic orbit can be used to prove that the center-stable manifold intersects the fixed space of reversibility transversally which in the end allows us to construct a modulating pulse solution with the properties stated in Theorem 1.7.
**Organization of the paper.** In Section 2 we introduce the spatial dynamics formulation by using Fourier series and Bloch modes. We develop near-identity transformations in Section 3 for reducing the size of the tails and increasing the size of the spatial domain. A local center-stable manifold in the spatial dynamics problem is constructed in Section 4. The proof of Theorem 1.7 is completed in Section 5 by establishing an intersection of the center-stable manifold with the fixed space of reversibility.
**Acknowledgement.** The work of Dmitry E. Pelinovsky is partially supported by the Alexander von Humboldt Foundation. The work of Guido Schneider is partially supported by the Deutsche Forschungsgemeinschaft DFG through the SFB 1173 "Wave phenomena" Project-ID 258734477.
## 2. Spatial dynamics formulation
In this section we introduce the spatial dynamics formulation by using Fourier series and Bloch modes. We fix \(l_{0}\in\mathbb{B}\) and define
\[u(x,t)=v(\xi,z,x)\quad\text{with}\ \ \xi=x-ct,\ \ z=l_{0}x-\omega t, \tag{19}\]
where \(\omega\) and \(c\) are to be determined and \(v(\xi,\cdot,\cdot)\) satisfies
\[v(\xi,z+2\pi,x)=v(\xi,z,x+2\pi)=v(\xi,z,x),\quad\forall(\xi,z,x)\in\mathbb{R} ^{3}.\]
Inserting (19) into the semi-linear wave equation (1) and using the chain rule, we obtain a new equation for \(v\):
\[\left[(c^{2}-1)\partial_{\xi}^{2}+2(c\omega-l_{0})\partial_{\xi} \partial_{z}-2\partial_{\xi}\partial_{x}+(\omega^{2}-l_{0}^{2})\partial_{z}^{2} -2l_{0}\partial_{z}\partial_{x}-\partial_{x}^{2}\right]v(\xi,z,x)\] \[\qquad+\rho(x)v(\xi,z,x)=\gamma r(x)v(\xi,z,x)^{3},\quad\xi\in \mathbb{R},\quad x,z\in[0,2\pi)_{\rm per}. \tag{20}\]
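Indeed, for \(u(x,t)=v(\xi,z,x)\) with \(\xi=x-ct\) and \(z=l_{0}x-\omega t\) the chain rule gives \(\partial_{t}u=(-c\partial_{\xi}-\omega\partial_{z})v\) and \(\partial_{x}u=(\partial_{\xi}+l_{0}\partial_{z}+\partial_{x})v\), where \(\partial_{x}\) on the right-hand side acts only on the third argument of \(v\). Hence
\[\partial_{t}^{2}-\partial_{x}^{2}=(c^{2}-1)\partial_{\xi}^{2}+2(c\omega-l_{0})\partial_{\xi}\partial_{z}-2\partial_{\xi}\partial_{x}+(\omega^{2}-l_{0}^{2})\partial_{z}^{2}-2l_{0}\partial_{z}\partial_{x}-\partial_{x}^{2},\]
which is the differential operator appearing in (20).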
In order to consider this equation as an evolutionary system with respect to \(\xi\in\mathbb{R}\), we use Fourier series in \(z\)
\[v(\xi,z,x)=\sum_{m\in\mathbb{Z}}\tilde{v}_{m}(\xi,x)e^{{\rm i}mz},\quad\tilde{ v}_{m}(\xi,x)=\frac{1}{2\pi}\int_{0}^{2\pi}v(\xi,z,x)e^{-{\rm i}mz}dz. \tag{21}\]
Equation (20) is converted through the Fourier expansion (21) into the spatial dynamics system for every \(c\neq\pm 1\):
\[\partial_{\xi}\left(\begin{array}{c}\tilde{v}_{m}\\ \tilde{w}_{m}\end{array}\right)=A_{m}(\omega,c)\left(\begin{array}{c}\tilde{ v}_{m}\\ \tilde{w}_{m}\end{array}\right)-\gamma(1-c^{2})^{-1}\left(\begin{array}{c}0 \\ r(x)(\tilde{v}*\tilde{v}*\tilde{v})_{m}\end{array}\right), \tag{22}\]
for \(\xi\in\mathbb{R}\), \(m\in\mathbb{Z}\), \(x\in[0,2\pi)_{per}\), where \(\tilde{w}_{m}:=\partial_{\xi}\tilde{v}_{m}\), \(A_{m}(\omega,c)\) is defined by
\[A_{m}(\omega,c)=\left(\begin{array}{cc}0&1\\ (1-c^{2})^{-1}[-(\partial_{x}+{\rm i}ml_{0})^{2}+\rho(x)-m^{2}\omega^{2}]&2(1- c^{2})^{-1}[{\rm i}mc\omega-(\partial_{x}+{\rm i}ml_{0})]\end{array}\right)\]
and the double convolution sum is given by
\[(\tilde{v}*\tilde{v}*\tilde{v})_{m}:=\sum_{m_{1},m_{2}\in\mathbb{Z}}\tilde{v} _{m_{1}}\tilde{v}_{m_{2}}\tilde{v}_{m-m_{1}-m_{2}}.\]
The dynamical system (22) can also be written in the scalar form as
\[[(c^{2}-1)\partial_{\xi}^{2}+2{\rm i}mc\omega\partial_{\xi}-2(\partial_{x}+{ \rm i}ml_{0})\partial_{\xi}-m^{2}\omega^{2}-(\partial_{x}+{\rm i}ml_{0})^{2}+ \rho(x)]\tilde{v}_{m}=\gamma r(x)(\tilde{v}*\tilde{v}*\tilde{v})_{m}. \tag{23}\]
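The scalar form (23) follows from (20) by inserting the Fourier series (21) and using that \(\partial_{z}\) acts as multiplication by \(\mathrm{i}m\) on the \(m\)-th mode, so that, for instance,
\[2(c\omega-l_{0})\partial_{\xi}\partial_{z}-2\partial_{\xi}\partial_{x}\;\mapsto\;2\mathrm{i}mc\omega\partial_{\xi}-2(\partial_{x}+\mathrm{i}ml_{0})\partial_{\xi},\]
while the cubic nonlinearity turns into the double convolution sum. The first-order system (22) is then obtained from (23) by introducing \(\tilde{w}_{m}=\partial_{\xi}\tilde{v}_{m}\) and solving for \(\partial_{\xi}\tilde{w}_{m}\).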
**Remark 2.1**.: _If \(\rho\in L^{\infty}_{per}([0,2\pi])\), then the domain \(\tilde{D}\) and the range \(\tilde{R}\) of the linear operator \(A_{m}(\omega,c):\tilde{D}\subset\tilde{R}\to\tilde{R}\) are given by_
\[\tilde{D}=H^{2}_{per}([0,2\pi])\times H^{1}_{per}([0,2\pi]),\qquad\tilde{R}=H ^{1}_{per}([0,2\pi])\times L^{2}([0,2\pi]). \tag{24}\]
_Solutions of the dynamical system (22) are then sought such that at each \(\xi\in\mathbb{R}\) they lie in the phase space_
\[\mathcal{D}:=\{(\tilde{v}_{m},\tilde{w}_{m})_{m\in\mathbb{Z}} \in (\ell^{2,2}(\mathbb{Z},L^{2}([0,2\pi]))\cap\ell^{2,1}(\mathbb{Z}, H^{1}_{\rm per}([0,2\pi]))\cap\ell^{2,0}(\mathbb{Z},H^{2}_{\rm per}([0,2\pi]))) \tag{25}\] \[\times(\ell^{2,1}(\mathbb{Z},L^{2}([0,2\pi]))\cap\ell^{2,0}( \mathbb{Z},H^{1}_{\rm per}([0,2\pi])))\},\]
_with the range in_
\[\mathcal{R}:=\{(\tilde{f}_{m},\tilde{g}_{m})_{m\in\mathbb{Z}} \in (\ell^{2,1}(\mathbb{Z},L^{2}([0,2\pi]))\cap\ell^{2,0}(\mathbb{Z}, H^{1}_{\rm per}([0,2\pi]))) \tag{26}\] \[\times\ell^{2,0}(\mathbb{Z},L^{2}([0,2\pi]))\},\]
_where \(\ell^{2,k}(\mathbb{Z},H^{s})\), with \(k\in\mathbb{N}\), is a weighted \(\ell^{2}\)-space equipped with the norm_
\[\|(\tilde{v}_{m})_{m\in\mathbb{Z}}\|_{\ell^{2,k}(\mathbb{Z},H^{s})}=\left( \sum_{m\in\mathbb{Z}}\|\tilde{v}_{m}\|^{2}_{H^{s}}(1+m^{2})^{k}\right)^{1/2}.\]
**Remark 2.2**.: _Real solutions \(v=v(\xi,z,x)\) after the Fourier expansion (21) enjoy the symmetry:_
\[\tilde{v}_{-m}(\xi,x)=\overline{\tilde{v}}_{m}(\xi,x),\quad\forall m\in\mathbb{Z },\ \ \forall(\xi,x)\in\mathbb{R}^{2}. \tag{27}\]
_The cubic nonlinearity maps the space of Fourier series where only the odd Fourier modes are non-zero to the same space. Hence, we can look for solutions of the spatial dynamics system (22) in the subspace_
\[\mathcal{D}_{\rm odd}:=\{(\tilde{v}_{m},\tilde{w}_{m})_{m\in\mathbb{Z}}\in \mathcal{D}:\quad\tilde{v}_{2m}=\tilde{w}_{2m}=0,\quad\tilde{v}_{-m}= \overline{\tilde{v}}_{m},\quad\tilde{w}_{-m}=\overline{\tilde{w}}_{m},\quad \forall m\in\mathbb{Z}\}.\]
_Hence the components \((\tilde{v}_{m},\tilde{w}_{m})\) for \(-m\in\mathbb{N}_{\rm odd}\) can be obtained from the components \((\tilde{v}_{m},\tilde{w}_{m})\) for \(m\in\mathbb{N}_{\rm odd}\) by using the symmetry (27)._
### Linearized Problem
Truly localized modulating pulse solutions satisfy
\[\lim_{\xi\to\pm\infty}v(\xi,z,x)=0,\]
i.e., such solutions are homoclinic to the origin with respect to the evolutionary variable \(\xi\). If these solutions exist, they lie in the intersection of the stable and unstable manifold of the origin. However, the modulating pulse solutions are not truly localized because of the existence of the infinite-dimensional center manifold for the spatial dynamics system (22).
The following lemma characterizes zero eigenvalues \(\lambda\) of the operators \(A_{m}(\omega_{0},c_{g})\), where \(\omega_{0}=\omega_{n_{0}}(l_{0})\) and \(c_{g}=\omega_{n_{0}}^{\prime}(l_{0})\).
**Lemma 2.3**.: _Fix \(N\in\mathbb{N}\). Under the non-degeneracy and the non-resonance assumptions (9), (10), and (11) the operator \(A_{m}(\omega_{0},c_{g})\) with \(m\in\{1,3,\cdots,2N+1\}\) has a zero eigenvalue if and only if \(m=1\). The zero eigenvalue is algebraically double and geometrically simple._
**Proof.** Let \(m\in\mathbb{N}_{\rm odd}\). The eigenvalue problem \(A_{m}(\omega_{0},c_{g})\left(\begin{smallmatrix}V\\ W\end{smallmatrix}\right)=\lambda\left(\begin{smallmatrix}V\\ W\end{smallmatrix}\right)\) can be reformulated in the scalar form:
\[[-(\partial_{x}+{\rm i}ml_{0}+\lambda)^{2}+\rho(x)]V(x)=(m\omega_{0}-{\rm i}c _{g}\lambda)^{2}V(x). \tag{28}\]
Eigenvalues \(\lambda\) are obtained by setting \(V(x)=f_{n}(ml_{0}-{\rm i}\lambda,x)\) and using the spectral problem (2) for \(l\in\mathbb{C}\), where both \(f_{n}(l,x)\) and \(\omega_{n}(l)\) are analytically continued in \(l\in\mathbb{C}\). The eigenvalues are the roots of the nonlinear equations
\[\omega_{n}^{2}(ml_{0}-{\rm i}\lambda)=(m\omega_{0}-{\rm i}c_{g}\lambda)^{2}, \qquad n\in\mathbb{N}. \tag{29}\]
Zero eigenvalues \(\lambda=0\) exist if and only if there exist solutions of the nonlinear equations \(\omega_{n}^{2}(ml_{0})=m^{2}\omega_{0}^{2}\). Since \(\omega_{0}=\omega_{n_{0}}(l_{0})\), \(\omega_{n}^{2}(ml_{0})=m^{2}\omega_{0}^{2}\) is satisfied for \(m=1\) and \(n=n_{0}\). Due to the non-degeneracy assumption (9), \(\omega_{n}^{2}(l_{0})=\omega_{0}^{2}\) does not hold for any other \(n\). This shows the geometric simplicity of \(\lambda=0\) for \(m=1\). It follows from (9) and (11) that no other solutions of \(\omega_{n}^{2}(ml_{0})=m^{2}\omega_{0}^{2}\) exist for \(m\in\{1,3,\cdots,2N+1\}\).
It remains to prove that the zero eigenvalue for \(m=1\) is algebraically double. To do so, we again employ the equivalence of the eigenvalue problem (2) and (28) for \(\lambda=0\), \(l=l_{0}\), \(n=n_{0}\), and \(m=1\). For \(n=n_{0}\), and \(l=l_{0}\), this equation and its two derivatives with respect to \(l\) generate the following relations:
\[[-(\partial_{x}+{\rm i}l_{0})^{2}+\rho(x)-\omega_{0}^{2}]f_{n_{0} }(l_{0},x) =0, \tag{30}\] \[[-(\partial_{x}+{\rm i}l_{0})^{2}+\rho(x)-\omega_{0}^{2}]\partial _{l}f_{n_{0}}(l_{0},x) =2\omega_{0}c_{g}f_{n_{0}}(l_{0},x)+2{\rm i}(\partial_{x}+{\rm i}l_ {0})f_{n_{0}}(l_{0},x),\] (31) \[[-(\partial_{x}+{\rm i}l_{0})^{2}+\rho(x)-\omega_{0}^{2}] \partial_{l}^{2}f_{n_{0}}(l_{0},x) =4\omega_{0}c_{g}\partial_{l}f_{n_{0}}(l_{0},x)+4{\rm i}(\partial _{x}+{\rm i}l_{0})\partial_{l}f_{n_{0}}(l_{0},x)\] \[+2(\omega_{0}\omega_{n_{0}}^{\prime\prime}(l_{0})+c_{g}^{2}-1)f_{ n_{0}}(l_{0},x). \tag{32}\]
The non-degeneracy condition (10) implies that \(c_{g}^{2}\neq 1\). Computing the Jordan chain for \(A_{1}(\omega_{0},c_{g})\) at the zero eigenvalue with the help of (30) and (31) yields
\[A_{1}(\omega_{0},c_{g})F_{0}=0,\quad F_{0}(x):=\left(\begin{array}{c}f_{n_{0}} (l_{0},x)\\ 0\end{array}\right), \tag{33}\]
and
\[A_{1}(\omega_{0},c_{g})F_{1}=F_{0},\qquad F_{1}(x):=\left(\begin{array}{c}- \mathrm{i}\partial_{l}f_{n_{0}}(l_{0},x)\\ f_{n_{0}}(l_{0},x)\end{array}\right). \tag{34}\]
We use \(f_{n_{0}}\) and \(\partial_{l}f_{n_{0}}\) to denote \(f_{n_{0}}(l_{0},\cdot)\) and \(\partial_{l}f_{n_{0}}(l_{0},\cdot)\) respectively. It follows from (31) and (32) that
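The relation (34) can be verified directly from the definition of \(A_{1}(\omega_{0},c_{g})\): its first row applied to \(F_{1}\) gives \(f_{n_{0}}\), while the second row gives
\[(1-c_{g}^{2})^{-1}\left[-(\partial_{x}+\mathrm{i}l_{0})^{2}+\rho-\omega_{0}^{2}\right](-\mathrm{i}\partial_{l}f_{n_{0}})+2(1-c_{g}^{2})^{-1}\left[\mathrm{i}c_{g}\omega_{0}-(\partial_{x}+\mathrm{i}l_{0})\right]f_{n_{0}}=0,\]
after the first term is evaluated with the help of (31). This reproduces both components of \(F_{0}\).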
\[\omega_{0}c_{g}-l_{0}+\langle f_{n_{0}},\mathrm{i}f^{\prime}_{n_ {0}}\rangle=0, \tag{35}\] \[\omega_{0}\omega^{\prime\prime}_{n_{0}}(l_{0})+c_{g}^{2}-1+2( \omega_{0}c_{g}-l_{0})\langle f_{n_{0}},\partial_{l}f_{n_{0}}\rangle+2 \langle f_{n_{0}},\mathrm{i}\partial_{l}f^{\prime}_{n_{0}}\rangle=0, \tag{36}\]
where \(f^{\prime}_{n_{0}}\) denotes \(\partial_{x}f_{n_{0}}(l_{0},\cdot)\) and where we have used the normalization \(\|f_{n_{0}}(l_{0},\cdot)\|_{L^{2}(0,2\pi)}=1\).
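For instance, (35) follows by taking the \(L^{2}(0,2\pi)\) inner product of (31) with \(f_{n_{0}}\): the left-hand side vanishes because \(-(\partial_{x}+\mathrm{i}l_{0})^{2}+\rho-\omega_{0}^{2}\) is self-adjoint and annihilates \(f_{n_{0}}\) by (30), so that
\[0=2\omega_{0}c_{g}\|f_{n_{0}}\|^{2}+2\langle f_{n_{0}},\mathrm{i}f_{n_{0}}^{\prime}\rangle-2l_{0}\|f_{n_{0}}\|^{2}=2\left(\omega_{0}c_{g}-l_{0}+\langle f_{n_{0}},\mathrm{i}f_{n_{0}}^{\prime}\rangle\right),\]
and (36) is obtained in the same way from (32).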
**Remark 2.4**.: _Let us define_
\[\langle\langle f,g\rangle\rangle:=\langle f_{1},g_{1}\rangle+\langle f_{2},g_{ 2}\rangle,\]
_where \(\langle\phi,\psi\rangle=\int_{0}^{2\pi}\bar{\phi}\psi dx\) is the standard inner product in \(L^{2}(0,2\pi)\). With some abuse of notation in the following we write \(\langle f,g\rangle\) for \(\langle\langle f,g\rangle\rangle\)._
Using complex conjugation, transposition, and integration by parts, the adjoint operator to \(A_{1}(\omega,c)\) in \(L^{2}(0,2\pi)\) is computed as follows:
\[A_{1}^{*}(\omega,c)=\left(\begin{array}{cc}0&(1-c^{2})^{-1}[-(\partial_{x}+ \mathrm{i}l_{0})^{2}+\rho(x)-\omega^{2}]\\ 1&-2(1-c^{2})^{-1}[\mathrm{i}c\omega-(\partial_{x}+\mathrm{i}l_{0})]\end{array} \right), \tag{37}\]
for which we obtain
\[A_{1}^{*}(\omega_{0},c_{g})G_{0}=0,\quad G_{0}(x):=\frac{1}{\omega_{0}\omega^{ \prime\prime}_{n_{0}}(l_{0})}\left(\begin{array}{c}2[\mathrm{i}c_{g}\omega _{0}-(\partial_{x}+\mathrm{i}l_{0})]f_{n_{0}}(l_{0},x)\\ (1-c_{g}^{2})f_{n_{0}}(l_{0},x)\end{array}\right), \tag{38}\]
where the normalization has been chosen such that \(\langle G_{0},F_{1}\rangle=1\) due to the relation (36). Note also that \(\langle G_{0},F_{0}\rangle=0\) due to the relation (35).
For the generalized eigenvector of \(A_{1}^{*}(\omega_{0},c_{g})\) we have
\[A_{1}^{*}(\omega_{0},c_{g})G_{1}=G_{0}, \tag{39}\]
with
\[G_{1} :=\frac{1-c_{g}^{2}}{\omega_{0}\omega^{\prime\prime}_{n_{0}}(l_{0 })}\left(\begin{array}{c}f_{n_{0}}+2\mathrm{i}(1-c_{g}^{2})^{-1}[\mathrm{i}c _{g}\omega_{0}-(\partial_{x}+\mathrm{i}l_{0})]\partial_{l}f_{n_{0}}\\ \mathrm{i}\partial_{l}f_{n_{0}}\end{array}\right)+\nu G_{0},\] \[=\frac{1-c_{g}^{2}}{\omega_{0}\omega^{\prime\prime}_{n_{0}}(l_{0 })}\left(\begin{array}{c}f_{n_{0}}+2\mathrm{i}(1-c_{g}^{2})^{-1}[\mathrm{i}c _{g}\omega_{0}-(\partial_{x}+\mathrm{i}l_{0})](\partial_{l}f_{n_{0}}-\mathrm{ i}\nu f_{n_{0}})\\ \mathrm{i}(\partial_{l}f_{n_{0}}-\mathrm{i}\nu f_{n_{0}})\end{array}\right),\]
where \(\nu\) is chosen so that \(\langle G_{1},F_{1}\rangle=0\). A direct calculation produces
\[\nu=\frac{2\mathrm{i}}{\omega_{0}\omega^{\prime\prime}_{n_{0}}(l_{0})}\left( (1-c_{g}^{2})\ \mathrm{Re}\ \langle f_{n_{0}},\partial_{l}f_{n_{0}}\rangle-(c_{g}\omega_{0}-l_{0})\| \partial_{l}f_{n_{0}}\|^{2}-\mathrm{Im}\langle\partial_{x}\partial_{l}f_{n_{0}},\partial_{l}f_{n_{0}}\rangle\right).\]
As \(A_{1}(\omega_{0},c_{g})\) has a compact resolvent, a standard argument using Fredholm's alternative guarantees that there exists a \(2\pi\)-periodic solution of the inhomogeneous equation
\[A_{1}(\omega_{0},c_{g})\left(\begin{array}{c}\tilde{v}\\ \tilde{w}\end{array}\right)=F_{1}\]
if and only if \(F_{1}\) is orthogonal to \(\operatorname{Ker}(A_{1}^{*})\), i.e. to \(G_{0}\). However, since \(\langle G_{0},F_{1}\rangle=1\), the Jordan chain for the zero eigenvalue terminates at the first generalized eigenvector (34), i.e. \(\lambda=0\) is algebraically double.
**Remark 2.5**.: _By using the same argument, we verify that also the adjoint operator \(A_{1}^{*}(\omega_{0},c_{g})\) has a double zero eigenvalue. This follows from the existence of solutions in (38) and (39) and non-orthogonality of the generalized eigenvector \(G_{1}\) of \(A_{1}^{*}(\omega_{0},c_{g})\) to \(\ker(A_{1})\), i.e. to \(F_{0}\) since \(\langle G_{1},F_{0}\rangle=\langle G_{1},A_{1}F_{1}\rangle=\langle A_{1}^{*}G _{1},F_{1}\rangle=\langle G_{0},F_{1}\rangle=1\)._
In the low-contrast case, i.e., when the periodic coefficient \(\rho\) is near \(\rho\equiv 1\), the non-degeneracy conditions (9), (10) and the non-resonance condition (11) are easy to verify. If \(\rho(x)=1\), the eigenvalues \(\omega_{n}(l)\) are known explicitly, see (7). The following lemma specifies the sufficient conditions under which the non-resonance assumption (11) is satisfied.
**Lemma 2.6**.: _Let \(\rho(x)=1+\delta\rho_{1}(x)\) with \(\rho_{1}(x)=\rho_{1}(x+2\pi)\) and \(\delta\) being a constant parameter. There exists \(\delta_{0}>0\) such that for every \(\delta\in(-\delta_{0},\delta_{0})\) the non-degeneracy assumptions (9) and (10) are satisfied. The non-resonance assumption (11) is satisfied if_
\[n_{0}+l_{0}\neq\frac{m^{2}-1-\kappa^{2}}{2m\kappa},\quad\text{ where }\quad(m,\kappa)\in\{3,5,\ldots,2N+1\}\times\mathbb{Z}. \tag{40}\]
**Proof.** The non-degeneracy assumptions (9) and (10) are satisfied because equation (8) for \(\delta=0\) implies \(c_{g}\in(-1,1)\) and \(\omega_{n_{0}}^{\prime\prime}(l_{0})\neq 0\). As \(\omega_{n_{0}}(l_{0})\) and \(\omega_{n_{0}}^{\prime\prime}(l_{0})\) depend continuously on \(\delta\), we get that (9) and (10) hold for \(|\delta|\) small enough.
For the non-resonance assumption (11) we set \(n=mn_{0}+\kappa\) with \(m\in\{3,5,\ldots,2N+1\}\), \(\kappa\in\mathbb{Z}\) and note that at \(\delta=0\) we have
\[\omega_{n}^{2}(ml_{0}) =1+(mn_{0}+\kappa+ml_{0})^{2},\] \[m^{2}\omega_{n_{0}}^{2}(l_{0}) =m^{2}(1+(n_{0}+l_{0})^{2}),\]
see (7). Condition (11) at \(\delta=0\) is thus equivalent to (40). As eigenvalues depend continuously on \(\delta\), condition (11) is satisfied for \(|\delta|\) small enough if it is satisfied for \(\delta=0\).
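The equivalence used in the last step can be made explicit: with \(n=mn_{0}+\kappa\), the formulas above give
\[\omega_{n}^{2}(ml_{0})-m^{2}\omega_{n_{0}}^{2}(l_{0})=1+\big(m(n_{0}+l_{0})+\kappa\big)^{2}-m^{2}\big(1+(n_{0}+l_{0})^{2}\big)=1-m^{2}+\kappa^{2}+2m\kappa(n_{0}+l_{0}).\]
For \(\kappa=0\) this equals \(1-m^{2}\neq 0\) for \(m\geq 3\), while for \(\kappa\neq 0\) it vanishes exactly when \(n_{0}+l_{0}=\frac{m^{2}-1-\kappa^{2}}{2m\kappa}\), which is excluded by (40).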
**Remark 2.7**.: _The non-resonance condition (40) is satisfied for all \(m\in\mathbb{N}\) if \(l_{0}\in\mathbb{R}\setminus\mathbb{Q}\)._
### Formal Reduction
Let us now consider a formal restriction of system (23) to the subspace
\[\mathcal{S}:=\{(\tilde{v}_{m},\tilde{w}_{m})_{m\in\mathbb{Z}}\in\mathcal{D}_{ \mathrm{odd}}:\quad\tilde{v}_{m}=\tilde{w}_{m}=0,\ \ m\in\mathbb{Z}_{\mathrm{odd}}\backslash\{-1,1\}\}\]
leading to the NLS approximation (15). As \(\mathcal{S}\) is not an invariant subspace of system (23), this reduction is only formal and a justification analysis has to be performed, which we do in the remainder of this paper.
The nonlinear (double-convolution) term on \(\mathcal{S}\) is given by
\[(\tilde{v}*\tilde{v}*\tilde{v})_{1}=3|\tilde{v}_{1}|^{2}\tilde{v}_{1}.\]
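Indeed, on \(\mathcal{S}\) only the modes with \(m=\pm 1\) are non-zero, so the only index triples contributing to the convolution at \(m=1\) are \((1,1,-1)\), \((1,-1,1)\), and \((-1,1,1)\); together with \(\tilde{v}_{-1}=\overline{\tilde{v}}_{1}\) this gives \(3\tilde{v}_{1}^{2}\overline{\tilde{v}}_{1}=3|\tilde{v}_{1}|^{2}\tilde{v}_{1}\). At the same time \((\tilde{v}*\tilde{v}*\tilde{v})_{3}=\tilde{v}_{1}^{3}\neq 0\), which is precisely why \(\mathcal{S}\) is not an invariant subspace and the reduction is only formal.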
The scalar equation (23) on \(\mathcal{S}\) reduces to
\[\left[(c^{2}-1)\partial_{\xi}^{2}+2\mathrm{i}c\omega\partial_{\xi}-2 \partial_{\xi}(\partial_{x}+\mathrm{i}l_{0})-\omega^{2}-(\partial_{x}+\mathrm{i }l_{0})^{2}+\rho(x)\right]\tilde{v}_{1}(\xi,x)\] \[=3\gamma r(x)|\tilde{v}_{1}(\xi,x)|^{2}\tilde{v}_{1}(\xi,x)\]
for \(m=1\) and to the complex conjugate equation for \(m=-1\). Using the Jordan block for the double zero eigenvalue in Lemma 2.3, we write the two-mode decomposition:
\[\left\{\begin{array}{l}\tilde{v}_{1}(\xi,x)=\psi_{1}(\xi)f_{n_{0}}(l_{0},x)- \mathrm{i}\phi_{1}(\xi)\partial_{l}f_{n_{0}}(l_{0},x),\\ \tilde{w}_{1}(\xi,x)=\phi_{1}(\xi)f_{n_{0}}(l_{0},x),\end{array}\right. \tag{41}\]
where \(\psi_{1}(\xi)=\varepsilon A(X)\) with \(X=\varepsilon\xi\) and real \(A(X)\). It follows from \(\partial_{\xi}\tilde{v}_{1}(\xi,x)=\tilde{w}_{1}(\xi,x)\) that \(\phi_{1}(\xi)=\varepsilon^{2}A^{\prime}(X)\), where the \(\mathcal{O}(\varepsilon^{3})\) terms are neglected. Using \(\omega=\omega_{0}+\widetilde{\omega}\varepsilon^{2}\) and \(c=c_{g}\) with \(\omega_{0}=\omega_{n_{0}}(l_{0})\) and \(c_{g}=\omega_{n_{0}}^{\prime}(l_{0})\), we obtain the following equation at order \(\mathcal{O}(\varepsilon^{3})\):
\[(c_{g}^{2}-1)A^{\prime\prime}f_{n_{0}}+2c_{g}\omega_{0}A^{\prime\prime} \partial_{l}f_{n_{0}}+2\mathrm{i}A^{\prime\prime}\partial_{l}f_{n_{0}}^{ \prime}-2\tilde{\omega}\omega_{0}Af_{n_{0}}=3\gamma rA^{3}|f_{n_{0}}|^{2}f_{n_ {0}}, \tag{42}\]
where equations (30) and (31) have been used and \(f_{n_{0}}\) again denotes \(f_{n_{0}}(l_{0},\cdot)\). Projecting (42) onto \(\mathrm{span}\{f_{n_{0}}\}\) and using (36) yields the stationary NLS equation
\[-\omega_{0}\omega_{n_{0}}^{\prime\prime}(l_{0})A^{\prime\prime}-2\omega_{0} \widetilde{\omega}A=\omega_{0}\gamma_{n_{0}}(l_{0})A^{3}, \tag{43}\]
which recovers the stationary version of the NLS equation (4) for \(A(X,T)\) replaced by \(A(X)e^{-\mathrm{i}\widetilde{\omega}T}\) with real \(A(X)\). The modulated pulse solution corresponds to the soliton solution of the stationary NLS equation (43),
\[A(X)=\gamma_{1}\mathrm{sech}(\gamma_{2}X), \tag{44}\]
where \(\gamma_{1}\) and \(\gamma_{2}\) are given by (6). Note that among the positive and decaying at infinity solutions of the stationary NLS equation (43) the pulse solution (44) is unique up to a constant shift in \(X\).
**Remark 2.8**.: _Unfolding the transformations (19), (21), and (41) with \(\psi_{1}(\xi)=\varepsilon A(\varepsilon\xi)\) gives an approximation for \(h_{\mathrm{app}}(\xi,z,x)\) on \(\mathcal{S}\), see (15)._
### Reversibility
Because \(\rho\) and \(r\) are even functions, the semi-linear wave equation (1), being second order in space, is invariant under the parity transformation: \(u(x,t)\mapsto u(-x,t)\). Similarly, since it is also second-order in time, it is invariant under the reversibility transformation: \(u(x,t)\mapsto u(x,-t)\).
The two symmetries are inherited by the scalar equations (20) and (23): If \(v(\xi,z,x)\) is a solution of (20), so is \(v(-\xi,-z,-x)\), and if \((\tilde{v}_{m}(\xi,x))_{m}\) is a solution of (23), so is \((\overline{\tilde{v}}_{m}(-\xi,-x))_{m}\). Since the symmetry is nonlocal in \(x\), one can use the Fourier series in \(x\) given by
\[\tilde{v}_{m}(\xi,x)=\sum_{k\in\mathbb{Z}}\hat{v}_{m,k}(\xi)e^{\mathrm{i}kx}, \quad\hat{v}_{m,k}(\xi)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\tilde{v}_{m}(\xi,x)e^{ -\mathrm{i}kx}dx, \tag{45}\]
and similarly for \(\tilde{w}_{m}\) to rewrite the symmetry in the form:
\[\begin{array}{l}\mbox{If }\{\hat{v}_{m,k}(\xi),\hat{w}_{m,k}(\xi)\}_{(m,k)\in\mathbb{N}_{\mathrm{odd}}\times\mathbb{Z}}\mbox{ is a solution of (22) with (45),}\\ \mbox{so is }\{\overline{\hat{v}}_{m,k}(-\xi),-\overline{\hat{w}}_{m,k}(-\xi)\}_{(m,k)\in\mathbb{N}_{\mathrm{odd}}\times\mathbb{Z}}.\end{array} \tag{46}\]
The implication of the symmetry (27) and (46) is that if a solution \(\{\hat{v}_{m,k}(\xi),\hat{w}_{m,k}(\xi)\}_{(m,k)\in\mathbb{N}_{\mathrm{odd}} \times\mathbb{Z}}\) constructed for \(\xi\geq 0\) satisfies the reversibility constraint:
\[\mbox{Im }\hat{v}_{m,k}(0)=0,\quad\mbox{Re }\hat{w}_{m,k}(0)=0,\quad\forall(m,k) \in\mathbb{N}_{\mathrm{odd}}\times\mathbb{Z}, \tag{47}\]
then the solution \(\{\hat{v}_{m,k}(\xi),\hat{w}_{m,k}(\xi)\}_{(m,k)\in\mathbb{N}_{\rm odd}\times \mathbb{Z}}\) can be uniquely continued for \(\xi\leq 0\) using the extension
\[\hat{v}_{m,k}(\xi)=\overline{\hat{v}}_{m,k}(-\xi),\quad\hat{w}_{m,k}(\xi)=- \overline{\hat{w}}_{m,k}(-\xi),\quad\forall\xi\in\mathbb{R}_{-}. \tag{48}\]
This yields a symmetric solution of the spatial dynamics system (22) for every \(\xi\in\mathbb{R}\) after being reformulated with the Fourier expansion (45).
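Note that the constraint (47) is exactly the condition under which the extension (48) is consistent at \(\xi=0\): the relations \(\hat{v}_{m,k}(0)=\overline{\hat{v}}_{m,k}(0)\) and \(\hat{w}_{m,k}(0)=-\overline{\hat{w}}_{m,k}(0)\) hold if and only if \(\mathrm{Im}\,\hat{v}_{m,k}(0)=0\) and \(\mathrm{Re}\,\hat{w}_{m,k}(0)=0\), and by the symmetry (46) the extended functions solve the same equations for \(\xi\leq 0\).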
**Remark 2.9**.: _The pulse solution (44) gives a leading order approximation (41) on \(\mathcal{S}\subset\mathcal{D}_{\rm odd}\) which satisfies the reversibility constraint (47). Indeed, since \(A^{\prime}(0)=0\), we have \(\tilde{w}_{1}(0,x)=0\) which implies \(\hat{w}_{1,k}(0)=0\). On the other hand, we have \(\tilde{v}_{1}(0,x)=\varepsilon A(0)f_{n_{0}}(l_{0},x)\) with real \(A(0)\) and generally complex \(f_{n_{0}}(l_{0},x)\). However, the Bloch mode \(f_{n_{0}}(l_{0},x)\) satisfies the symmetry_
\[\overline{f}_{n_{0}}(l_{0},-x)=f_{n_{0}}(l_{0},x),\]
_thanks to the non-degeneracy assumption (9): If \(f_{n_{0}}(l_{0},x)\) is a solution of (2), so is \(\overline{f}_{n_{0}}(l_{0},-x)\), and the eigenvalue \(\omega_{0}^{2}=\omega_{n_{0}}^{2}(l_{0})\) is simple in the spectral problem (2). Consequently, all Fourier coefficients of \(f_{n_{0}}(l_{0},x)\) are real, which implies \(\mathrm{Im}\ \hat{v}_{1,k}(0)=0\)._
### \(SO(2)\)-symmetry
The starting equation (20) for \(v=v(\xi,z,x)\) is translationally invariant with respect to \(z\mapsto z+z_{0}\), with \(z_{0}\in\mathbb{R}\) arbitrary. This corresponds to an invariance under the mapping \((\tilde{v}_{m},\tilde{w}_{m})\mapsto(\tilde{v}_{m},\tilde{w}_{m})e^{imz_{0}}\) for all \(m\in\mathbb{Z}_{\rm odd}\) in the Fourier representation (21). This symmetry allows us to restrict \(A\) in (41) to real-valued functions.
## 3. Near-identity transformations
By Lemma 2.3, the Fredholm operator \(A_{1}(\omega_{0},c_{g})\) has the double zero eigenvalue, whereas \(A_{m}(\omega_{0},c_{g})\) for \(3\leq m\leq 2N+1\) admit no zero eigenvalues. In what follows, we decompose the solution in \(X\) into a two-dimensional part corresponding to the double zero eigenvalue and the infinite-dimensional remainder term.
### Separation of a two-dimensional problem
Like in the proof of Lemma 2.3, we denote the eigenvector and the generalized eigenvector of \(A_{1}(\omega_{0},c_{g})\) for the double zero eigenvalue by \(F_{0}\) and \(F_{1}\), see (33) and (34), and the eigenvector and the generalized eigenvector of \(A_{1}^{*}(\omega_{0},c_{g})\) for the double zero eigenvalue by \(G_{0}\) and \(G_{1}\), see (38) and (39).
We define \(\Pi\) as the orthogonal projection onto the orthogonal complement of the generalized eigenspace \(\mathrm{span}(G_{0},G_{1})\), i.e.
\[\Pi:L^{2}(0,2\pi)\times L^{2}(0,2\pi)\to\mathrm{span}(G_{0},G_{1})^{\perp},\] \[\Pi\Psi:=\Psi-\langle G_{0},\Psi\rangle F_{1}-\langle G_{1},\Psi \rangle F_{0}.\]
The orthogonality follows from our normalization, which was chosen in the proof of Lemma 2.3 so that \(\langle G_{0},F_{0}\rangle=\langle G_{1},F_{1}\rangle=0\) and \(\langle G_{0},F_{1}\rangle=\langle G_{1},F_{0}\rangle=1\). Also note that \(\ker\,\Pi=\mathrm{span}(F_{0},F_{1})\). Moreover, we have
\[A_{1}(\omega_{0},c_{g})\Pi\Psi = A_{1}(\omega_{0},c_{g})\Psi-\langle G_{0},\Psi\rangle A_{1}( \omega_{0},c_{g})F_{1}-\langle G_{1},\Psi\rangle A_{1}(\omega_{0},c_{g})F_{0}\] \[= A_{1}(\omega_{0},c_{g})\Psi-\langle G_{0},\Psi\rangle F_{0}\] \[= A_{1}(\omega_{0},c_{g})\Psi-\langle G_{0},A_{1}(\omega_{0},c_{g} )\Psi\rangle F_{1}-\langle G_{1},A_{1}(\omega_{0},c_{g})\Psi\rangle F_{0}\] \[= \Pi A_{1}(\omega_{0},c_{g})\Psi.\]
Compared to the two-mode decomposition (41), we write
\[\left(\begin{array}{c}\tilde{v}_{1}(\xi,x)\\ \tilde{w}_{1}(\xi,x)\end{array}\right)=\varepsilon q_{0}(\xi)F_{0}(x)+\varepsilon q _{1}(\xi)F_{1}(x)+\varepsilon S_{1}(\xi,x),\]
where \(q_{0},q_{1}:\mathbb{R}\to\mathbb{C}\) are unknown coefficients and where \(S_{1}(\xi,\cdot)\in\tilde{D}\) for \(\xi\in\mathbb{R}\) satisfies \(\Pi S_{1}=S_{1}\), i.e.
\[\langle G_{0},S_{1}(\xi,\cdot)\rangle=\langle G_{1},S_{1}(\xi,\cdot)\rangle=0,\qquad\forall\xi\in\mathbb{R}.\]
Similarly, we write
\[\left(\begin{array}{c}\tilde{v}_{m}(\xi,x)\\ \tilde{w}_{m}(\xi,x)\end{array}\right)=\varepsilon Y_{m}(\xi,x),\quad m\in \mathbb{N}_{\rm odd}\backslash\{1\}\]
and define \(Y_{1}:=q_{0}F_{0}+q_{1}F_{1}+S_{1}\). Furthermore, we represent \(Y_{m}=(V_{m},W_{m})^{T}\), i.e., \(\widetilde{v}_{m}=\varepsilon V_{m}\), \(\widetilde{w}_{m}=\varepsilon W_{m}\), and use the notation \({\bf V}:=(V_{m})_{m\in\mathbb{N}_{\rm odd}}\) and \({\bf V}_{\geq 3}:=(V_{m})_{m\in\mathbb{N}_{\rm odd}\backslash\{1\}}\).
For \(\omega=\omega_{0}+\varepsilon^{2}\widetilde{\omega}\) and \(c=c_{g}\), we write
\[A_{m}(\omega,c_{g})=A_{m}(\omega_{0},c_{g})+\varepsilon^{2}\widetilde{\omega} (1-c_{g}^{2})^{-1}B_{m},\quad B_{m}=\left(\begin{array}{cc}0&0\\ -m^{2}(\omega+\omega_{0})&2{\rm i}mc_{g}\end{array}\right).\]
Because of
\[\left(\begin{array}{cc}\langle G_{0},F_{0}\rangle&\langle G_{0},F_{1} \rangle\\ \langle G_{1},F_{0}\rangle&\langle G_{1},F_{1}\rangle\end{array}\right)= \left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\]
the spatial dynamics system (22) with \(\omega=\omega_{0}+\widetilde{\omega}\varepsilon^{2}\) and \(c=c_{g}\) is now rewritten in the separated form:
\[\left(\begin{array}{c}\partial_{\xi}q_{1}\\ \partial_{\xi}q_{0}-q_{1}\end{array}\right)=\varepsilon^{2}H_{0}(q_{0},q_{1},S_{1},{\bf V}_{\geq 3}), \tag{49a}\] \[\partial_{\xi}S_{1}=\Pi A_{1}(\omega_{0},c_{g})S_{1}+\varepsilon^{2} \widetilde{\omega}(1-c_{g}^{2})^{-1}\Pi B_{1}Y_{1}+\varepsilon^{2}\Pi H_{1}(q _{0},q_{1},S_{1},{\bf V}_{\geq 3}),\] (49b) and for \[m\in\mathbb{N}_{\rm odd}\backslash\{1\}\], \[\partial_{\xi}Y_{m}=A_{m}(\omega_{0},c_{g})Y_{m}+\varepsilon^{2} \widetilde{\omega}(1-c_{g}^{2})^{-1}B_{m}Y_{m}+\varepsilon^{2}H_{m}(q_{0},q_{1 },S_{1},{\bf V}_{\geq 3}), \tag{49c}\]
where the correction terms \(H_{0}\) and \((H_{m})_{m\in\mathbb{N}_{\rm odd}}\) are given by
\[H_{0} =\frac{\widetilde{\omega}}{\omega_{0}\omega_{n_{0}}^{\prime \prime}(l_{0})}\left(\begin{array}{c}-(\omega+\omega_{0})q_{0}+{\rm i}(( \omega+\omega_{0})\langle f_{n_{0}},\partial_{l}f_{n_{0}}\rangle+2c_{g})q_{1} \\ \langle\partial_{l}f_{n_{0}}-{\rm i}\nu f_{n_{0}},f_{n_{0}}\rangle({\rm i}( \omega+\omega_{0})q_{0}+2c_{g}q_{1})+(\omega+\omega_{0})\langle\partial_{l}f_{n _{0}}-{\rm i}\nu f_{n_{0}},\partial_{l}f_{n_{0}}\rangle q_{1}\end{array}\right)\] \[+\frac{1}{\omega_{0}\omega_{n_{0}}^{\prime\prime}(l_{0})}\left( \begin{array}{c}\langle f_{n_{0}},\widetilde{\omega}(B_{1}S_{1})_{2}-\gamma r ({\bf V}*{\bf V}*{\bf V})_{1}\rangle\\ -{\rm i}\langle\partial_{l}f_{n_{0}}-{\rm i}\nu f_{n_{0}},\widetilde{\omega}(B _{1}S_{1})_{2}-\gamma r({\bf V}*{\bf V}*{\bf V})_{1}\rangle\end{array}\right),\]
and
\[H_{m}=-\gamma(1-c_{g}^{2})^{-1}\left(\begin{array}{c}0\\ r({\bf V}*{\bf V}*{\bf V})_{m}\end{array}\right),\quad m\in\mathbb{N}_{\rm odd}.\]
**Remark 3.1**.: _System (49) does not have an invariant reduction at \(S_{1}=0\) and \({\bf V}_{\geq 3}={\bf 0}\) because_
\[({\bf V}*{\bf V}*{\bf V})_{1}|_{S_{1}=0,{\bf V}_{\geq 3}={\bf 0}} =3|q_{0}f_{n_{0}}-{\rm i}q_{1}\partial_{l}f_{n_{0}}|^{2}(q_{0}f_{ n_{0}}-{\rm i}q_{1}\partial_{l}f_{n_{0}}),\] \[({\bf V}*{\bf V}*{\bf V})_{3}|_{S_{1}=0,{\bf V}_{\geq 3}={\bf 0}} =(q_{0}f_{n_{0}}-{\rm i}q_{1}\partial_{l}f_{n_{0}})^{3},\]
_which contributes to \(\Pi H_{1}\) and \(H_{3}\) (as well as to \(H_{0}\))._
### Resolvent operators for the linear system
In order to derive bounds (13) and (14), we need to perform near-identity transformations, which transform systems (49b) and (49c) to equivalent versions but with residual terms of the order \(\mathcal{O}(\varepsilon^{2(N+1)})\). To be able to do so, we will ensure that the operators \(\Pi A_{1}(\omega_{0},c_{g})\Pi\) and \(A_{m}(\omega_{0},c_{g})\), \(3\leq m\leq 2N+1\) are invertible with a bounded inverse.
By Lemma 2.3, these operators do not have zero eigenvalues but this is generally not sufficient since eigenvalues of these infinite-dimensional operators may accumulate near zero. However, the operators \(A_{m}(\omega_{0},c_{g})\) have the special structure
\[A_{m}(\omega_{0},c_{g})=\left(\begin{array}{cc}0&1\\ L_{m}&M_{m}\end{array}\right),\qquad L_{m}=(1-c_{g}^{2})^{-1}[-(\partial_{x}+{\rm i}ml_{0})^{2}+\rho(x)-m^{2}\omega_{0}^{2}],\qquad M_{m}=2(1-c_{g}^{2})^{-1}[{\rm i}mc_{g}\omega_{0}-(\partial_{x}+{\rm i}ml_{0})], \tag{50}\]
which we exploit to prove invertibility of these operators under the non-degeneracy and non-resonance conditions. The following lemma gives the result.
**Lemma 3.2**.: _If the conditions (9), (10), and (11) are satisfied, then there exists a \(C_{0}>0\) such that_
\[\|(\Pi A_{1}(\omega_{0},c_{g})\Pi)^{-1}\|_{\tilde{R}\to\tilde{D}}+\sum_{m=3}^{ 2N+1}\|A_{m}(\omega_{0},c_{g})^{-1}\|_{\tilde{R}\to\tilde{D}}\leq C_{0}, \tag{51}\]
_where \(\tilde{D}\) and \(\tilde{R}\) are defined in (24)._
**Proof.** Under the non-degeneracy condition (10) which yields \(c_{g}\neq\pm 1\), the entries of \(A_{m}(\omega,c_{g})\) are all non-singular. To ensure the invertibility of \(A_{m}(\omega_{0},c_{g})\) with \(m\in\mathbb{N}_{\rm odd}\backslash\{1\}\), we consider the resolvent equation
\[A_{m}(\omega_{0},c_{g})\left(\begin{array}{c}v\\ w\end{array}\right)=\left(\begin{array}{c}f\\ g\end{array}\right),\]
for a given \((f,g)\in\tilde{R}\). The solution \((v,w)\in\tilde{D}\) is given by \(w=f\) and \(v\) obtained from the scalar Schrodinger equation
\[L_{m}v=g-M_{m}f.\]
Under the non-resonance conditions (11), \(0\) lies in a spectral gap of \(L_{m}\), so that the linear operator \(L_{m}:H^{2}_{\rm per}\to L^{2}\) is invertible with a bounded inverse from \(L^{2}\) to \(H^{2}_{\rm per}\). Hence \(v=L_{m}^{-1}(g-M_{m}f)\) and \(A_{m}(\omega_{0},c_{g})\) is invertible with a bounded inverse from \(\tilde{R}\) to \(\tilde{D}\).
The operator \(A_{1}(\omega_{0},c_{g})\) is not invertible due to the double zero eigenvalue in Lemma 2.3. However, it is a Fredholm operator of index zero, and hence by the closed range theorem there exists a solution of the inhomogeneous equation
\[A_{1}(\omega_{0},c_{g})\left(\begin{array}{c}v\\ w\end{array}\right)=\Pi\left(\begin{array}{c}f\\ g\end{array}\right),\]
for every \((f,g)\in\tilde{R}\). The solution is not uniquely determined since \({\rm span}(F_{0},F_{1})\) can be added to the solution \((v,w)\in\tilde{D}\), however, the restriction to the subset defined by the condition
\[\Pi\left(\begin{array}{c}v\\ w\end{array}\right)=\left(\begin{array}{c}v\\ w\end{array}\right)\]
removes projections to \({\rm span}(F_{0},F_{1})\). Consequently, the operator \((\Pi A_{1}(\omega_{0},c_{g})\Pi)\) is invertible with a bounded inverse from \(\tilde{R}\) to \(\tilde{D}\).
In the limit \(\delta\to 0\) of Lemma 2.6 we can calculate the eigenvalues \(\lambda\) of \(A_{m}(\omega_{0},c_{g})\) graphically, see Figure 4, and explicitly. The following lemma summarizes the key properties of eigenvalues which are needed for Assumption 4.11 in Theorem 1.7.
**Lemma 3.3**.: _Let \(\rho\equiv 1\). For every fixed \(m\in\mathbb{N}_{\rm odd}\), the operator \(A_{m}(\omega_{0},c_{g})\) has purely imaginary eigenvalues, Jordan blocks of which have length at most two, and complex semi-simple eigenvalues with nonzero real parts bounded away from zero. Moreover, if \(l_{0}\in\mathbb{R}\backslash\mathbb{Q}\), then all nonzero, purely imaginary eigenvalues are semi-simple._
**Proof.** Eigenvalues of \(A_{m}(\omega_{0},c_{g})\) are found as solutions of the nonlinear equations (29). For \(\rho\equiv 1\) we use Fourier series (45) and write \(\omega_{n}^{2}(l)=1+(n+l)^{2}\) with \(n=k\in\mathbb{Z}\). After simple manipulations, eigenvalues \(\lambda\) are found from
\[\big{(}\omega_{0}^{-1}\lambda+{\rm i}\omega_{0}(k-mn_{0})\big{)}^{2}=1+k^{2}+2 l_{0}mk-m^{2}(\omega_{0}^{2}-l_{0}^{2})-\omega_{0}^{2}(k-mn_{0})^{2},\quad k \in\mathbb{Z},\]
where \(m\in\mathbb{N}_{\rm odd}\), \(n_{0}\in\mathbb{N}\), and \(l_{0}\in\mathbb{B}\) are fixed and where \(\omega_{0}=\sqrt{1+(n_{0}+l_{0})^{2}}\). Setting \(\kappa:=k-mn_{0}\), we get
\[\big{(}\omega_{0}^{-1}\lambda+{\rm i}\omega_{0}\kappa\big{)}^{2}=1-(m-\kappa( n_{0}+l_{0}))^{2}.\]
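The last simplification uses \(\omega_{0}^{2}=1+(n_{0}+l_{0})^{2}\): with \(k=\kappa+mn_{0}\),
\[1+k^{2}+2l_{0}mk-m^{2}(\omega_{0}^{2}-l_{0}^{2})-\omega_{0}^{2}\kappa^{2}=1-m^{2}+2m\kappa(n_{0}+l_{0})-\kappa^{2}(n_{0}+l_{0})^{2}=1-\big(m-\kappa(n_{0}+l_{0})\big)^{2}.\]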
Eigenvalues \(\lambda\) are found explicitly as
\[\lambda=-{\rm i}\kappa\omega_{0}^{2}\pm{\rm i}\omega_{0}\sqrt{(m-\kappa(n_{0} +l_{0}))^{2}-1}.\]
For each \(k\in\mathbb{Z}\), the value of \(\kappa\in\mathbb{Z}\) is fixed. Eigenvalues are double if
\[(m-\kappa(n_{0}+l_{0}))^{2}-1=0\]
for some \(m\in\mathbb{N}_{\rm odd}\) and \(\kappa\in\mathbb{Z}\), in which case the Jordan blocks have length two. If \(l_{0}\in\mathbb{R}\backslash\mathbb{Q}\) and \(\kappa\neq 0\), then \((m-\kappa(n_{0}+l_{0}))^{2}-1\neq 0\) and the eigenvalues are semi-simple, in which case there are no Jordan blocks.
If \(\kappa=0\), then \(\lambda=\pm\mathrm{i}\omega_{0}\sqrt{m^{2}-1}\) which includes a double zero eigenvalue for \(m=1\) and pairs of semi-simple purely imaginary eigenvalues. If \(\kappa\neq 0\), then complex eigenvalues off the imaginary axis arise for each \((m,\kappa)\in\mathbb{N}_{\mathrm{odd}}\times\mathbb{Z}\) with \(|m-\kappa(n_{0}+l_{0})|<1\); the two complex eigenvalues appear in pairs symmetrically about \(\mathrm{i}\mathbb{R}\). Therefore, complex eigenvalues with nonzero real parts are semi-simple. Moreover, \(\mathrm{Im}(\lambda)=-\kappa\omega_{0}^{2}\) and hence \(|\mathrm{Im}(\lambda)|\geq\omega_{0}^{2}\) for each complex eigenvalue.
**Remark 3.4**.: _Conditions (40) ensure that_
\[\sqrt{(m-\kappa(n_{0}+l_{0}))^{2}-1}\neq\kappa\omega_{0},\quad 1\leq m\leq 2N+1, \quad\kappa\in\mathbb{N}.\]
_As a result, the purely imaginary non-zero eigenvalues of Lemma 3.3 are bounded away from zero by_
\[D_{m}:=|\omega_{0}|\inf_{\kappa\in\mathbb{N}}|\sqrt{(m-\kappa(n_{0}+l_{0}))^{2 }-1}-\kappa\omega_{0}|>0,\quad 1\leq m\leq 2N+1.\]
_The inequality \(D_{m}>0\) follows from the fact that_
\[|\sqrt{(m-\kappa(n_{0}+l_{0}))^{2}-1}-\kappa\omega_{0}|\sim\kappa(\sqrt{1+(n_{ 0}+l_{0})^{2}}-n_{0}-l_{0})\]
_as \(\kappa\to\infty\), where \(\omega_{0}=\sqrt{1+(n_{0}+l_{0})^{2}}\) has been used. We do not need invertibility of \(A_{m}(\omega_{0},c_{g})\) as \(m\to\infty\) since we only use the near-identity transformations for \(1\leq m\leq 2N+1\). Therefore, we do not need to investigate whether \(D_{m}\to 0\) as \(m\to\infty\)._
In the next two subsections we proceed with near-identity transformations by using the bounds (51) and prove the following theorem.
**Theorem 3.5**.: _There exists \(\varepsilon_{0}>0\) such that for every \(\varepsilon\in(-\varepsilon_{0},\varepsilon_{0})\), there exists a sequence of near-identity transformations which transforms system (49) to the following form:_
\[\left(\begin{array}{c}\partial_{\xi}q_{1}\\ \partial_{\xi}q_{0}-q_{1}\end{array}\right)=\sum_{j=1}^{N}\varepsilon^{2j}Z_{j }^{(0)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N)})+\varepsilon^{2N+2}Z_{N+ 1}^{(0)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N)}), \tag{52a}\] \[\partial_{\xi}S_{1}^{(N)}=\Pi A_{1}(\omega_{0},c_{g})S_{1}^{(N)}+\sum_{j=1}^{N} \varepsilon^{2j}Z_{j}^{(1)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N)})+ \varepsilon^{2N+2}Z_{N+1}^{(1)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N )}),\] (52b) _and for \[m\in\mathbb{N}_{\mathrm{odd}}\backslash\{1\}\],_ \[\partial_{\xi}Y_{m}^{(N)}=A_{m}(\omega_{0},c_{g})Y_{m}^{(N)}+\sum_{j=1}^{N} \varepsilon^{2j}Z_{j}^{(m)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N )})+\varepsilon^{2N+2}Z_{N+1}^{(m)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N )}), \tag{52c}\]
_where \(Z_{j}^{(m)}(q_{0},q_{1},0,\mathbf{0})=0\) for every \(1\leq j\leq N\) and \(m\in\mathbb{N}_{\mathrm{odd}}\). The variables \(S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N)}\), and \(Y_{m}^{(N)}\) are obtained from \(S_{1},\mathbf{V}_{\geq 3}\), and \(Y_{m}\) via \(N\) near-identity transformations depending on \(q_{0}\) and \(q_{1}\), e.g.,_
\[S_{1}^{(N)}=S_{1}+\varepsilon^{2}\Phi^{(2)}(q_{0},q_{1})+\varepsilon^{4}\Phi^{( 4)}(q_{0},q_{1})+\cdots+\varepsilon^{2N}\Phi^{(2N)}(q_{0},q_{1}),\]
_where \(\Phi^{(2j)}\) depends polynomially on \(q_{0},\bar{q}_{0},q_{1},\) and \(\bar{q}_{1}\), and analogously for \(\mathbf{V}_{\geq 3}^{(N)}\) and \(Y_{m}^{(N)}\). Moreover, the transformations preserve the reversibility of the system, cf. Section 2.3._
**Remark 3.6**.: _It is well known that in the equation for \(u_{j}\) with eigenvalue \(i\lambda_{j}\) a term of the form \(u_{j_{1}}^{n_{j_{1}}}\ldots u_{j_{r}}^{n_{j_{r}}}\) can be eliminated by a near-identity transformation if the non-resonance condition \(\lambda_{j}-\sum_{k=1}^{r}n_{j_{k}}\lambda_{j_{k}}\neq 0\) is satisfied, see Sec. 3.3 in [10]. Since the eigenvalues for the \((q_{0},q_{1})\)-part vanish and since the eigenvalues for the \(S_{1}^{(N)}\)-part and the \(Y_{m}^{(N)}\)-part do not vanish, all polynomial terms in \((q_{0},q_{1})\) can be eliminated in the equations for the \(S_{1}^{(N)}\) and \(Y_{m}^{(N)}\). This elimination is done by Theorem 3.5 up to order \(\mathcal{O}(\varepsilon^{2N})\). Some detailed calculations can be found in the subsequent Sections 3.3 and 3.4._
**Remark 3.7**.: _The condition \(Z_{j}^{(m)}(q_{0},q_{1},0,\mathbf{0})=0\) for every \(1\leq j\leq N\) and \(m\in\mathbb{N}_{\mathrm{odd}}\) corresponds to \(\widetilde{N}_{r}(\widetilde{u}_{0},0)=0\) in system (18). Ignoring the higher order terms_
\[\varepsilon^{2N+2}Z_{N+1}^{(m)}(q_{0},q_{1},S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N )})\]
_for \(m\in\mathbb{N}_{\mathrm{odd}}\) in (52b) and (52c) gives an invariant subspace \(\{(S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N)})=(0,\mathbf{0})\}\) in which an approximate homoclinic solution for the full system can be found. It bifurcates for \(\varepsilon\neq 0\) from the trivial solution for \(\varepsilon=0\)._
### Removing polynomial terms in \(q_{0}\) and \(q_{1}\) from (49c)
In order to show how the near-identity transformations reduce (49c) to (52c), we consider a general inhomogeneous term \(\varepsilon^{2j}q_{0}^{k_{1}}\bar{q}_{0}^{k_{2}}q_{1}^{k_{3}}\bar{q}_{1}^{k_{4}}\) in the right-hand side of (49c) with some \(1\leq j\leq N\) and positive integers \(k_{1},k_{2},k_{3},k_{4}\). The transformations are produced sequentially, from terms of order \(\mathcal{O}(\varepsilon^{2})\) to terms of order \(\mathcal{O}(\varepsilon^{2N})\) and for each polynomial order in \((q_{0},q_{1})\).
At the lowest order \(j=1\), there exists only one inhomogeneous term in (49c) for \(m=3\), cf. Remark 3.1, which is given by
\[H_{3}^{(q)}=-\gamma(1-c_{g}^{2})^{-1}\left(\begin{array}{c}0\\ r(q_{0}f_{n_{0}}-\mathrm{i}q_{1}\partial_{l}f_{n_{0}})^{3}\end{array}\right).\]
Hence, \(H_{3}=H_{3}^{(q)}+H_{3}^{\mathrm{rest}}\), where \(H_{3}^{\mathrm{rest}}=0\) if \((S_{1},\mathbf{V}_{\geq 3})=(0,\mathbf{0})\). Substituting \(Y_{3}=\tilde{Y}_{3}+\varepsilon^{2}\mathfrak{Y}_{3}\) with
\[\mathfrak{Y}_{3}:=h_{0}\left(\begin{array}{c}q_{0}^{3}\\ 3q_{0}^{2}q_{1}\end{array}\right)+h_{1}\left(\begin{array}{c}q_{0}^{2}q_{1} \\ 2q_{0}q_{1}^{2}\end{array}\right)+h_{2}\left(\begin{array}{c}q_{0}q_{1}^{2}\\ q_{1}^{3}\end{array}\right)+h_{3}\left(\begin{array}{c}q_{1}^{3}\\ 0\end{array}\right)\]
into (49c) yields
\[\partial_{\xi}\tilde{Y}_{3}=A_{3}(\omega_{0},c_{g})\tilde{Y}_{3}+\varepsilon^ {2}\widetilde{\omega}(1-c_{g}^{2})^{-1}B_{3}\tilde{Y}_{3}+\varepsilon^{2} \tilde{H}_{3}.\]
The choice of the second components in each term of \(\mathfrak{Y}_{3}\) is dictated by the fact that \(\partial_{\xi}q_{0}-q_{1}\) and \(\partial_{\xi}q_{1}\) are of the order of \(\mathcal{O}(\varepsilon^{2})\) due to equation (49a). We are looking for scalar functions \(h_{j}\in H_{\mathrm{per}}^{2}\) from the sequence of linear inhomogeneous equations obtained with the help of (50):
\[q_{0}^{3}: L_{3}h_{0}=\gamma(1-c_{g}^{2})^{-1}rf_{n_{0}}^{3},\] \[q_{0}^{2}q_{1}: L_{3}h_{1}=-3\mathrm{i}\gamma(1-c_{g}^{2})^{-1}rf_{n_{0}}^{2} \partial_{l}f_{n_{0}}-3M_{3}h_{0},\] \[q_{0}q_{1}^{2}: L_{3}h_{2}=-3\gamma(1-c_{g}^{2})^{-1}rf_{n_{0}}(\partial_{l}f_{n_{ 0}})^{2}-2M_{3}h_{1}+6h_{0},\] \[q_{1}^{3}: L_{3}h_{3}=\mathrm{i}\gamma(1-c_{g}^{2})^{-1}r(\partial_{l}f_{n_{ 0}})^{3}-M_{3}h_{2}+2h_{1}.\]
Since \(L_{3}\) is invertible with a bounded inverse by Lemma 3.2, there exist unique functions \(h_{j}\in H_{\mathrm{per}}^{2}\) which are obtained recursively from \(h_{0}\) to \(h_{3}\). After the inhomogeneous terms are
removed by the choice of \(h_{j}\), the transformed right-hand side \(\tilde{H}_{3}\) becomes
\[\tilde{H}_{3} =H_{3}^{\text{rest}}-h_{0}\left(\begin{array}{c}3q_{0}^{2}(\dot{q }_{0}-q_{1})\\ 6q_{0}q_{1}(\dot{q}_{0}-q_{1})+3q_{0}^{2}\dot{q}_{1}\end{array}\right)-h_{1} \left(\begin{array}{c}2q_{0}q_{1}(\dot{q}_{0}-q_{1})+q_{0}^{2}\dot{q}_{1}\\ 2(\dot{q}_{0}-q_{1})q_{1}^{2}+4q_{0}q_{1}\dot{q}_{1}\end{array}\right)\] \[\quad-h_{2}\left(\begin{array}{c}(\dot{q}_{0}-q_{1})q_{1}^{2}+2 q_{0}q_{1}\dot{q}_{1}\\ 3q_{1}^{2}\dot{q}_{1}\end{array}\right)-h_{3}\left(\begin{array}{c}3q_{1}^{2 }\dot{q}_{1}\\ 0\end{array}\right)+\varepsilon^{2}\widetilde{\omega}(1-c_{g}^{2})^{-1}B_{3} \mathfrak{Y}_{3},\]
where \(H_{3}^{\text{rest}}\) is also modified due to the transformation. Substituting for \(\dot{q}_{0}-q_{1}\) and \(\dot{q}_{1}\) from (49a) shows that \(\tilde{H}_{3}(q_{0},q_{1},0,\mathbf{0})=\mathcal{O}(\varepsilon^{2})\), hence the first step of the procedure transforms (49c) into (52c) with \(N=1\). One can then define \(\tilde{H}_{3}=\tilde{H}_{3}^{(q)}+\tilde{H}_{3}^{\text{rest}}\) with \(\tilde{H}_{3}^{\text{rest}}=0\) if \((S_{1},\mathbf{V}_{\geq 3})=(0,\mathbf{0})\) and proceed with next steps of the procedure.
A general step of this procedure is performed similarly. Without loss of generality, since the principal part of system (49a) is independent of \(\bar{q}_{0}\) and \(\bar{q}_{1}\), we consider a general polynomial of degree \(M\) at fixed \(m\in\{3,5,\ldots,2N+1\}\):
\[H_{m}^{(q)}=\sum_{j=0}^{M}q_{0}^{M-j}q_{1}^{j}\left(\begin{array}{c}a_{j}\\ b_{j}\end{array}\right),\]
where \((a_{j},b_{j})^{T}\) depend on \(x\) only. Substituting
\[Y_{m}=\tilde{Y}_{m}+\varepsilon^{2}\mathfrak{Y}_{m},\qquad\mathfrak{Y}_{m}:= \sum_{j=0}^{M}q_{0}^{M-j}q_{1}^{j}\left(\begin{array}{c}h_{j}\\ g_{j}\end{array}\right)\]
into (49c) yields (52c) with \(N\) being incremented by one if \(h_{j},g_{j}\in H_{\text{per}}^{2}\) are found from two chains of recurrence equations for \(j\in\{0,1,\ldots,M\}\):
\[g_{j} =-a_{j}+(M+1-j)h_{j-1},\] \[L_{m}h_{j} =-b_{j}-M_{m}g_{j}+(M+1-j)g_{j-1},\]
which are truncated at \(h_{-1}=g_{-1}=0\). Since \(L_{m}\) for \(3\leq m\leq 2N+1\) are invertible with a bounded inverse by Lemma 3.2, the recurrence equations are uniquely solvable from \(g_{0}\) to \(h_{0}\), then to \(g_{1}\) and \(h_{1}\) and so on to \(g_{M}\) and \(h_{M}\). The aforementioned first step is obtained from here with \(M=3\) and \(a_{0}=a_{1}=a_{2}=a_{3}=0\).
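For orientation, the first step above is recovered directly from these recurrence equations: with \(M=3\) and \(a_{0}=a_{1}=a_{2}=a_{3}=0\), the first chain gives \(g_{0}=0\), \(g_{1}=3h_{0}\), \(g_{2}=2h_{1}\), \(g_{3}=h_{2}\), and the second chain reduces to
\[L_{3}h_{0}=-b_{0},\quad L_{3}h_{1}=-b_{1}-3M_{3}h_{0},\quad L_{3}h_{2}=-b_{2}-2M_{3}h_{1}+6h_{0},\quad L_{3}h_{3}=-b_{3}-M_{3}h_{2}+2h_{1},\]
which is the system solved in the first step once \((b_{0},b_{1},b_{2},b_{3})\) are identified with the coefficients of \(q_{0}^{3},q_{0}^{2}q_{1},q_{0}q_{1}^{2},q_{1}^{3}\) in the second component of \(H_{3}^{(q)}\).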
### Removing polynomial terms in \(q_{0}\) and \(q_{1}\) from (49b)
Similarly, we perform near-identity transformations which reduce (49b) to (52b). The only complication is the presence of the projection operator \(\Pi\) in system (49b).
At lowest order \(j=1\), there exist two inhomogeneous terms in (49b) due to \(\Pi B_{1}Y_{1}\) and \(\Pi H_{1}\), which can be written without the projection operator \(\Pi\) as follows:
\[\frac{\widetilde{\omega}}{1-c_{g}^{2}}\left(q_{0}B_{1}F_{0}+q_{1} B_{1}F_{1}\right)-\frac{\gamma}{1-c_{g}^{2}}\left(\begin{array}{c}0\\ 3r(q_{0}f_{n_{0}}-\mathrm{i}q_{1}\partial_{l}f_{n_{0}})^{2}(\bar{q}_{0}\bar{f} _{n_{0}}+\mathrm{i}\bar{q}_{1}\partial_{l}\bar{f}_{n_{0}})\end{array}\right)\] \[=:q_{0}\mathbf{H}^{(0)}+q_{1}\mathbf{H}^{(1)}+|q_{0}|^{2}q_{0} \mathbf{H}^{(2)}+q_{0}^{2}\bar{q}_{1}\mathbf{H}^{(3)}+|q_{0}|^{2}q_{1}\mathbf{ H}^{(4)}+q_{0}|q_{1}|^{2}\mathbf{H}^{(5)}+\bar{q}_{0}q_{1}^{2}\mathbf{H}^{(6)}+|q_{1}|^{ 2}q_{1}\mathbf{H}^{(7)}.\]
Substituting \(S_{1}=\tilde{S}_{1}+\varepsilon^{2}\mathfrak{S}_{1}\) with
\[\mathfrak{S}_{1}:=q_{0}\mathbf{S}^{(0)}+q_{1}\mathbf{S}^{(1)}+|q_{0}|^{2}q_{0} \mathbf{S}^{(2)}+q_{0}^{2}\bar{q}_{1}\mathbf{S}^{(3)}+|q_{0}|^{2}q_{1}\mathbf{S }^{(4)}+q_{0}|q_{1}|^{2}\mathbf{S}^{(5)}+\bar{q}_{0}q_{1}^{2}\mathbf{S}^{(6)}+| q_{1}|^{2}q_{1}\mathbf{S}^{(7)},\]
where \(\Pi\mathbf{S}^{(j)}=\mathbf{S}^{(j)}\), into (49b) yields
\[\partial_{\xi}\tilde{S}_{1}=\Pi A_{1}(\omega_{0},c_{g})\tilde{S}_{1}+ \varepsilon^{2}\widetilde{\omega}(1-c_{g}^{2})^{-1}\Pi B_{1}\tilde{S}_{1}+ \varepsilon^{2}\Pi\tilde{H}_{1},\]
where \(\tilde{H}_{1}\) is of the next \(\mathcal{O}(\varepsilon^{2})\) order if \(\mathbf{S}^{(0)},\mathbf{S}^{(1)}\) are chosen from the system of inhomogeneous equations
\[q_{0}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(0)}=-\Pi\mathbf{H}^{(0)},\] \[q_{1}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(1)}=-\Pi\mathbf{H}^{(1)}+ \mathbf{S}^{(0)},\]
and \(\mathbf{S}^{(2)},\ldots,\mathbf{S}^{(7)}\) are chosen from the system of inhomogeneous equations
\[|q_{0}|^{2}q_{0}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(2)}=-\Pi\mathbf{H}^{(2)},\] \[q_{0}^{2}\bar{q}_{1}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(3)}=-\Pi\mathbf{H}^{(3)}+\mathbf{S}^{(2)},\] \[|q_{0}|^{2}q_{1}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(4)}=-\Pi\mathbf{H}^{(4)}+2\mathbf{S}^{(2)},\] \[q_{0}|q_{1}|^{2}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(5)}=-\Pi\mathbf{H}^{(5)}+2\mathbf{S}^{(3)}+\mathbf{S}^{(4)},\] \[\bar{q}_{0}q_{1}^{2}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(6)}=-\Pi\mathbf{H}^{(6)}+\mathbf{S}^{(4)},\] \[|q_{1}|^{2}q_{1}: \Pi A_{1}(\omega_{0},c_{g})\mathbf{S}^{(7)}=-\Pi\mathbf{H}^{(7)}+\mathbf{S}^{(5)}+\mathbf{S}^{(6)}.\]
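The extra terms \(\mathbf{S}^{(j)}\) on the right-hand sides of these equations originate from \(\partial_{\xi}\mathfrak{S}_{1}\): differentiating the monomials with \(\partial_{\xi}q_{0}=q_{1}+\mathcal{O}(\varepsilon^{2})\) and \(\partial_{\xi}q_{1}=\mathcal{O}(\varepsilon^{2})\), cf. (49a), redistributes each coefficient to the neighbouring monomials. For instance,
\[\partial_{\xi}\left(|q_{0}|^{2}q_{0}\,\mathbf{S}^{(2)}\right)=\left(2|q_{0}|^{2}q_{1}+q_{0}^{2}\bar{q}_{1}\right)\mathbf{S}^{(2)}+\mathcal{O}(\varepsilon^{2}),\]
which is the origin of the terms \(2\mathbf{S}^{(2)}\) and \(\mathbf{S}^{(2)}\) in the equations for \(\mathbf{S}^{(4)}\) and \(\mathbf{S}^{(3)}\), respectively.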
Since \(\Pi A_{1}(\omega_{0},c_{g})\Pi\) is invertible with a bounded inverse by Lemma 3.2, the two chains of equations are uniquely solvable: from \(\mathbf{S}^{(0)}\) to \(\mathbf{S}^{(1)}\) and from \(\mathbf{S}^{(2)}\) to \(\mathbf{S}^{(7)}\).
It is now straightforward that a general step of the recursive procedure can be performed to reduce (49b) to (52b).
## 4. Construction of the local center-stable manifold
By Theorem 3.5, the spatial dynamical system can be transformed to the form (52), where the coupling of \((q_{0},q_{1})\in\mathbb{C}^{2}\) with \(S_{1}^{(N)}\in\tilde{D}\) and \(Y_{m}^{(N)}\in\tilde{D}\) for \(m\in\mathbb{N}_{\mathrm{odd}}\backslash\{1\}\) occurs at order \(\mathcal{O}(\varepsilon^{2N+2})\). We are now looking for solutions of system (52) for \(\xi\) on \([0,\xi_{0}]\) for some \(\varepsilon\)-dependent value \(\xi_{0}>0\). In order to produce the bound (13), we will need to extend the result to \(\xi_{0}=\varepsilon^{-(2N+1)}\).
The local center-stable manifold on \([0,\xi_{0}]\) will be constructed close to the homoclinic orbit of the system
\[\left(\begin{array}{c}\partial_{\xi}q_{1}\\ \partial_{\xi}q_{0}-q_{1}\end{array}\right)=\sum_{j=1}^{N}\varepsilon^{2j}Z_{ j}^{(0)}(q_{0},q_{1},0,\mathbf{0}) \tag{53}\]
which is a truncation of (52a). The leading-order term \(Z_{1}^{(0)}(q_{0},q_{1},0,\mathbf{0})\) is computed explicitly as
\[Z_{1}^{(0)}(q_{0},q_{1},0,\mathbf{0}) =\frac{2\widetilde{\omega}}{\omega_{0}\omega_{n_{0}}^{\prime \prime}(l_{0})}\left(\begin{array}{c}-\omega_{0}q_{0}+\mathrm{i}(\omega_{0} \langle f_{n_{0}},\partial_{l}f_{n_{0}}\rangle+c_{g})q_{1}\\ \langle\partial_{l}f_{n_{0}}-\mathrm{i}\nu f_{n_{0}},f_{n_{0}}\rangle(\mathrm{ i}\omega_{0}q_{0}+c_{g}q_{1})+\omega_{0}\langle\partial_{l}f_{n_{0}}-\mathrm{i}\nu f _{n_{0}},\partial_{l}f_{n_{0}}\rangle q_{1}\end{array}\right)\] \[\quad-\frac{3\gamma}{\omega_{0}\omega_{n_{0}}^{\prime\prime}(l_{0 })}\left(\begin{array}{c}\langle f_{n_{0}},r|q_{0}f_{n_{0}}-\mathrm{i}q_{1} \partial_{l}f_{n_{0}}|^{2}(q_{0}f_{n_{0}}-\mathrm{i}q_{1}\partial_{l}f_{n_{0}} )\rangle\\ \langle\mathrm{i}(\partial_{l}f_{n_{0}}-\mathrm{i}\nu f_{n_{0}}),r|q_{0}f_{n_{0 }}-\mathrm{i}q_{1}\partial_{l}f_{n_{0}}|^{2}(q_{0}f_{n_{0}}-\mathrm{i}q_{1} \partial_{l}f_{n_{0}})\rangle\end{array}\right).\]
The stationary NLS equation (43) for \(A\) rewritten as a first order system for \(q_{0}(\xi)=A(X)\) with \(X=\varepsilon\xi\) and \(q_{1}(\xi)=\varepsilon A^{\prime}(X)\) is
\[\left(\begin{array}{c}\partial_{\xi}q_{1}\\ \partial_{\xi}q_{0}-q_{1}\end{array}\right)=\varepsilon^{2}(\omega_{n_{0}}^{ \prime\prime}(l_{0}))^{-1}\left(\begin{array}{c}-2\widetilde{\omega}q_{0}- \gamma_{n_{0}}(l_{0})|q_{0}|^{2}q_{0}\\ 0\end{array}\right), \tag{54}\]
or equivalently
\[A^{\prime\prime}=(\omega_{n_{0}}^{\prime\prime}(l_{0}))^{-1}(-2\widetilde{ \omega}A-\gamma_{n_{0}}(l_{0})|A|^{2}A), \tag{55}\]
where \(\omega_{n_{0}}^{\prime\prime}(l_{0})\neq 0\) due to the non-degeneracy condition (10). Equation (54) is the leading order (in \(\varepsilon\)) part of (53) if \(q_{0}=\mathcal{O}(1)\) and \(q_{1}=\mathcal{O}(\varepsilon)\).
The following lemma gives persistence of the sech solution (44) of the reduced system (54) with \((q_{0},q_{1})=(A(\varepsilon\cdot),\varepsilon A^{\prime}(\varepsilon\cdot))\) as a solution of the truncated system (53).
**Lemma 4.1**.: _Assume (10) and let \(A\) be given by (44). For each \(N\in\mathbb{N}\) and a sufficiently small \(\varepsilon>0\), there exists a unique homoclinic orbit of system (53) satisfying the properties_
\[\mathrm{Im}\ q_{0}(0)=0,\quad\mathrm{Re}\ q_{1}(0)=0. \tag{56}\]
_such that_
\[\|q_{0}-A(\varepsilon\cdot)\|_{L^{\infty}}\leq C\varepsilon,\quad\|q_{1}- \varepsilon A^{\prime}(\varepsilon\cdot)\|_{L^{\infty}}\leq C\varepsilon^{2}, \tag{57}\]
_and_
\[|q_{0}(\xi)|\leq Ce^{-\varepsilon\alpha|\xi|},\quad|q_{1}(\xi)|\leq\varepsilon Ce ^{-\varepsilon\alpha|\xi|},\quad\forall\xi\in\mathbb{R}, \tag{58}\]
_for some \(\alpha>0\) and \(C>0\)._
**Proof.** Since
\[q_{0}F_{0}(x)+q_{1}F_{1}(x)=\left(\begin{array}{c}q_{0}f_{n_{0}}(l_{0},x)- \mathrm{i}q_{1}\partial_{l}f_{n_{0}}(l_{0},x)\\ q_{1}f_{n_{0}}(l_{0},x)\end{array}\right), \tag{59}\]
and since the Fourier coefficients of \(f_{n_{0}}(l_{0},x)\) and \(\partial_{l}f_{n_{0}}(l_{0},x)\) are real by Remark 2.9, the condition (56) expresses the reversibility condition (47) for \(m=1\) in the linear combination (59). The reduced system (54) has two symmetries: if \((q_{0}(\xi),q_{1}(\xi))\) is a solution, so is
\[(q_{0}(\xi+\xi_{0})e^{\mathrm{i}\theta_{0}},q_{1}(\xi+\xi_{0})e^{\mathrm{i} \theta_{0}}) \tag{60}\]
for real \(\xi_{0}\) and \(\theta_{0}\). In the scaling \(q_{0}(\xi)=\check{q}_{0}(X)\) with \(X=\varepsilon\xi\) and \(q_{1}(\xi)=\varepsilon\check{q}_{1}(X)\) system (53) is of the form
\[\left(\begin{array}{c}\partial_{X}\check{q}_{1}\\ \varepsilon^{-1}\left(\partial_{X}\check{q}_{0}-\check{q}_{1}\right)\end{array}\right) =\sum_{j=1}^{j=N}\varepsilon^{2j-2}Z_{j}^{(0)}(\check{q}_{0}, \varepsilon\check{q}_{1},0,\mathbf{0})\] \[=\left(\begin{array}{c}(\omega_{n_{0}}^{\prime\prime}(l_{0}))^ {-1}(-2\widetilde{\omega}\check{q}_{0}-\gamma_{n_{0}}(l_{0})|\check{q}_{0}|^{2 }\check{q}_{0})\\ 0\end{array}\right)+\mathcal{O}(\varepsilon). \tag{61}\]
For \(\varepsilon=0\) there is a homoclinic orbit \(q_{\mathrm{hom}}\) for system (54) which is given by \((\check{q}_{0},\check{q}_{1})=(A,A^{\prime})\) with \(A\) in (44). The existence of a homoclinic orbit for small \(\varepsilon>0\) can be established with the following reversibility argument. For \(\varepsilon=0\), at the point \((\check{q}_{0}^{*},\check{q}_{1}^{*})=(\gamma_{1},0)\) with \(\gamma_{1}=\sqrt{2|\widetilde{\omega}|/|\gamma_{n_{0}}(l_{0})|}\), the family of homoclinic orbits \(e^{i\theta}q_{\mathrm{hom}}(\cdot+\xi)\) intersects the fixed space of reversibility
\[\mathrm{Im}\ \check{q}_{0}(0)=0,\quad\mathrm{Re}\ \check{q}_{1}(0)=0\]
transversally. This can be seen as follows, see Figure 5.
In the coordinates \((\mathrm{Re}\check{q}_{0},\mathrm{Im}\check{q}_{0},\mathrm{Re}\check{q}_{1},\mathrm{Im}\check{q}_{1})\) the fixed space of reversibility lies in the span of \((1,0,0,0)\) and \((0,0,0,1)\). The tangent space at the family of homoclinic orbits \(e^{i\theta}q_{\mathrm{hom}}(\cdot+\xi)\) in \((\check{q}_{0}^{*},\check{q}_{1}^{*})\) is spanned by the \(\xi\)-tangent vector, which is proportional to \((0,0,1,0)\), and the \(\theta\)-tangent vector, which is proportional to \((0,1,0,0)\). Since the vector field of (61) depends smoothly on the small parameter \(0<\varepsilon\ll 1\), this intersection persists under adding higher order terms, i.e. for small \(\varepsilon>0\). Thus, the reversibility argument gives a homoclinic orbit for (61) for small \(\varepsilon>0\), too. Undoing the scaling gives the homoclinic orbit for the truncated system (53) with the reversibility properties (56) and the decay estimates (58).
It remains to prove the approximation bound (57). The symmetry (60) generates the two-dimensional kernel of the linearized operator associated with the leading-order part of the truncated system (53):
\[\left(\begin{array}{c}q_{0}\\ q_{1}\end{array}\right)=\left(\begin{array}{c}A^{\prime}(X)\\ \varepsilon A^{\prime\prime}(X)\end{array}\right)\quad\text{and}\quad\left( \begin{array}{c}q_{0}\\ q_{1}\end{array}\right)=\mathrm{i}\left(\begin{array}{c}A(X)\\ \varepsilon A^{\prime}(X)\end{array}\right) \tag{62}\]
The symmetry modes (62) do not satisfy the reversibility constraints (56) because \(A(0)\neq 0\) and \(A^{\prime\prime}(0)\neq 0\), whereas the truncated system (53) inherits the reversibility symmetry (56) of the original dynamical system (22). Therefore, if we substitute the decomposition
\[\left(\begin{array}{c}q_{0}(\xi)\\ q_{1}(\xi)\end{array}\right)=\left(\begin{array}{c}A(\varepsilon\xi)\\ \varepsilon A^{\prime}(\varepsilon\xi)\end{array}\right)+\left(\begin{array} []{c}\mathfrak{q}_{0}(\xi)\\ \varepsilon\mathfrak{q}_{1}(\xi)\end{array}\right)\]
into (53), then the correction term \((\mathfrak{q}_{0},\mathfrak{q}_{1})\) satisfies the nonlinear system where the residual terms of the order of \(\mathcal{O}(\varepsilon)\), see (61), are automatically orthogonal to the kernel of the linearized operator. By the implicit function theorem in the Sobolev space \(H^{1}(\mathbb{R})\), one can uniquely solve the nonlinear system for the correction term \((\mathfrak{q}_{0},\mathfrak{q}_{1})\) under the reversibility constraints (56) such that
\[\|\mathfrak{q}_{0}\|_{H^{1}}+\|\mathfrak{q}_{1}\|_{H^{1}}\leq C\varepsilon,\]
for some \(\varepsilon\)-independent \(C>0\). This yields the approximation bound (57) in the original variables due to the Sobolev embedding of \(H^{1}(\mathbb{R})\) into \(L^{\infty}(\mathbb{R})\).
**Remark 4.2**.: _The approximation bound (57) yields the estimate (14) in Theorem 1.7, where_
\[h(\xi,z,x)=\varepsilon q_{0}(\xi)f_{n_{0}}(l_{0},x)e^{\mathrm{i}z}-\mathrm{i} \varepsilon q_{1}(\xi)\partial_{l}f_{n_{0}}(l_{0},x)e^{\mathrm{i}z}+\mathrm{c.c.}\]
_with \(q_{0}\) and \(q_{1}\) from Lemma 4.1._
**Remark 4.3**.: _Referring to system (17), we have now constructed the homoclinic solution for the approximate reduced system_
\[\partial_{\xi}\widetilde{u}_{0}=M_{0}\widetilde{u}_{0}+\widetilde{N}_{0}( \widetilde{u}_{0},0).\]
_It remains to prove the persistence of the homoclinic solutions as generalized breather solutions when the higher order terms \(\varepsilon^{2N+2}Z^{(m)}_{N+1}(q_{0},q_{1},S^{(N)}_{1},\mathbf{V}^{(N)}_{\geq 3})\) in (52b) and (52c) for \(m\in\mathbb{N}_{\rm odd}\), which lead to \((S_{1}^{(N)},\mathbf{V}_{\geq 3}^{(N)})\neq(0,\mathbf{0})\), are taken into account. We do so by constructing a center-stable manifold near the approximate homoclinic solution, cf. the rest of Section 4, and by proving that the center-stable manifold intersects the fixed space of reversibility transversally, cf. Section 5._

Figure 5. Transversal intersection of the homoclinic orbit of system (53) with the fixed space of the reversibility operator.
Let us denote the \(\varepsilon\)-dependent reversible homoclinic orbit of Lemma 4.1 by \((Q_{0},\varepsilon Q_{1})\) and introduce the decomposition \((q_{0},q_{1})=(Q_{0},\varepsilon Q_{1})+(\mathfrak{q}_{0},\varepsilon\mathfrak{ q}_{1})\). We abbreviate \(\mathbf{c}_{0,hom}:=(Q_{0},Q_{1},\overline{Q_{0}},\overline{Q_{1}})\) and \(\mathbf{c}_{0,r}:=(\mathfrak{q}_{0},\mathfrak{q}_{1},\overline{\mathfrak{q}_ {0}},\overline{\mathfrak{q}_{1}})\). Furthermore, we collect the components \(S_{1}^{(N)},Y_{m}^{(N)}\) for \(m\in\mathbb{N}_{\rm odd}\backslash\{1\}\) in \(\mathbf{c}_{r}\). With these notations system (52) can now be rewritten in the abstract form:
\[\partial_{\xi}\mathbf{c}_{0,r} =\varepsilon\Lambda_{0}(\xi)\mathbf{c}_{0,r}+\varepsilon\mathbf{ G}(\mathbf{c}_{0,r},\mathbf{c}_{r})+\varepsilon^{2N+1}\mathbf{G}_{R}( \mathbf{c}_{0,hom}+\mathbf{c}_{0,r},\mathbf{c}_{r}), \tag{63a}\] \[\partial_{\xi}\mathbf{c}_{r} =\Lambda_{r}(\xi)\mathbf{c}_{r}+\varepsilon^{2}\mathbf{F}( \mathbf{c}_{0,hom}+\mathbf{c}_{0,r},\mathbf{c}_{r})+\varepsilon^{2N+2} \mathbf{F}_{R}(\mathbf{c}_{0,hom}+\mathbf{c}_{0,r},\mathbf{c}_{r}), \tag{63b}\]
where the vector \(\mathbf{c}_{0,r}(\xi)\in\mathbb{C}^{4}\) is controlled in the norm \(\|\cdot\|_{\mathbb{C}^{4}}\), whereas the vector \(\mathbf{c}_{r}(\xi)\) is controlled in the phase space \(\mathcal{D}\) defined by (25) with the norm \(\|\cdot\|_{\mathcal{D}}\). The operator \(\varepsilon\Lambda_{0}\) is the linearization around the homoclinic orbit \(\mathbf{c}_{0,hom}\) and hence \(\mathbf{G}\) depends nonlinearly on \(\mathbf{c}_{0,r}\). Note that including the complex conjugated variables in \(\mathbf{c}_{0,r}\) is needed in order for the linearized system \(\partial_{\xi}\mathbf{c}_{0,r}=\varepsilon\Lambda_{0}(\xi)\mathbf{c}_{0,r}\) to be linear with respect to the complex vector field.
**Remark 4.4**.: _At leading order in \(\varepsilon\) the matrix \(\Lambda_{0}\) is derived from (61) in the form_
\[\Lambda_{0}=\begin{pmatrix}K&0\\ 0&K\end{pmatrix}-\gamma_{n_{0}}(l_{0})(\omega_{n_{0}}^{\prime\prime}(l_{0}))^ {-1}\begin{pmatrix}M_{1}(A)&M_{2}(A)\\ M_{2}(A)&M_{1}(A)\end{pmatrix}+\mathcal{O}(\varepsilon),\]
_where_
\[K=\begin{pmatrix}0&1\\ -2\widetilde{\omega}(\omega_{n_{0}}^{\prime\prime}(l_{0}))^{-1}&0\end{pmatrix},\quad M_{1}(A)=\begin{pmatrix}0&0\\ 2A^{2}&0\end{pmatrix},\quad M_{2}(A)=\begin{pmatrix}0&0\\ A^{2}&0\end{pmatrix},\]
_using the fact that \(A\) is real._
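A quick way to verify this expression is to linearize the leading-order part of (61) about the real solution \((\check{q}_{0},\check{q}_{1})=(A,A^{\prime})\). Writing \(\check{q}_{0}=A+\delta_{0}\), \(\check{q}_{1}=A^{\prime}+\delta_{1}\) and using \(\partial_{\check{q}_{0}}\big(|\check{q}_{0}|^{2}\check{q}_{0}\big)=2|\check{q}_{0}|^{2}\) and \(\partial_{\bar{\check{q}}_{0}}\big(|\check{q}_{0}|^{2}\check{q}_{0}\big)=\check{q}_{0}^{2}\) gives
\[\partial_{X}\delta_{0}=\delta_{1},\qquad\partial_{X}\delta_{1}=-\frac{2\widetilde{\omega}}{\omega_{n_{0}}^{\prime\prime}(l_{0})}\,\delta_{0}-\frac{\gamma_{n_{0}}(l_{0})}{\omega_{n_{0}}^{\prime\prime}(l_{0})}\left(2A^{2}\delta_{0}+A^{2}\bar{\delta}_{0}\right),\]
together with the complex-conjugate equations; this is precisely the block structure of \(\Lambda_{0}\) displayed above.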
**Remark 4.5**.: _Although not indicated by our notation, the operators \(\Lambda_{0}\), \(\Lambda_{r}\) and the functions \(\mathbf{G}\), \(\mathbf{G}_{R}\), \(\mathbf{F}\), and \(\mathbf{F}_{R}\) depend on \(\xi\) and \(\varepsilon\) continuously._
**Remark 4.6**.: _We lose one power of \(\varepsilon\) in front of \(\mathbf{G}\) and \(\mathbf{G}_{R}\) by working with \(Q_{1}\) instead of \(\varepsilon Q_{1}\)._
### Residual terms of system (63)
Residual terms are controlled as follows.
**Lemma 4.7**.: _There exists \(\varepsilon_{0}>0\) such that for every \(\varepsilon\in(0,\varepsilon_{0})\) the residual terms of system (63) satisfy the bounds for every \(\xi\in\mathbb{R}\):_
\[\|\mathbf{G}(\mathbf{c}_{0,r},\mathbf{c}_{r})(\xi)\|_{\mathbb{C}^ {4}} \leq C\left(\|\mathbf{c}_{0,r}(\xi)\|_{\mathbb{C}^{4}}^{2}+\|\mathbf{c}_{r}( \xi)\|_{\mathcal{D}}\right),\] \[\|\mathbf{F}(\mathbf{c}_{0,hom}+\mathbf{c}_{0,r},\mathbf{c}_{r})( \xi)\|_{\mathcal{R}} \leq C\left(\|\mathbf{c}_{0,hom}(\xi)+\mathbf{c}_{0,r}(\xi)\|_{ \mathbb{C}^{4}}+\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\right)\|\mathbf{c}_{r}\|_ {\mathcal{D}},\] \[\|\mathbf{G}_{R}(\mathbf{c}_{0,hom}+\mathbf{c}_{0,r},\mathbf{c}_{r })(\xi)\|_{\mathbb{C}^{4}} \leq C\left(\|\mathbf{c}_{0,hom}(\xi)+\mathbf{c}_{0,r}(\xi)\|_{ \mathbb{C}^{4}}+\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\right),\] \[\|\mathbf{F}_{R}(\mathbf{c}_{0,hom}+\mathbf{c}_{0,r},\mathbf{c}_{r })(\xi)\|_{\mathcal{R}} \leq C\left(\|\mathbf{c}_{0,hom}(\xi)+\mathbf{c}_{0,r}(\xi)\|_{ \mathbb{C}^{4}}+\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\right),\]
_as long as \(\|\mathbf{c}_{0,r}(\xi)\|_{\mathbb{C}^{4}}+\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\leq C\), where \(C>0\) is a generic \(\varepsilon\)-independent constant, which may change from line to line._
**Proof.** The residual terms are defined in Theorem 3.5. Functions \(\mathbf{G}\), \(\mathbf{G}_{R}\), \(\mathbf{F}\), and \(\mathbf{F}_{R}\) map \(\mathcal{D}\) into \(\mathcal{D}\) since \(\mathcal{D}\) forms a Banach algebra with respect to pointwise multiplication. Using \((q_{0},q_{1})=\mathbf{c}_{0,hom}+\mathbf{c}_{0,r}\) and the fact that \(\|\mathbf{c}_{0,hom}(\xi)\|_{\mathbb{C}^{4}}\) is bounded independently of \(\varepsilon\), the bounds on \(\mathbf{G}\), \(\mathbf{G}_{R}\), \(\mathbf{F}\), and \(\mathbf{F}_{R}\) follow.
### Linearized operator of system (63a)
The linear part of system (63a) is the linearization around the approximate homoclinic orbit \(q_{0,hom}\) from Lemma 4.1. Due to the translational and \(SO(2)\) symmetries, which generate a family of homoclinic orbits from the reversible homoclinic orbit, the solution space of the linearized equation includes a two-dimensional subspace spanned by exponentially decaying functions.
**Lemma 4.8**.: _Consider the linear inhomogeneous equation_
\[\partial_{\xi}\mathbf{c}_{0,r}=\varepsilon\Lambda_{0}\mathbf{c}_{0,r}+ \varepsilon\mathbf{F}_{h}, \tag{64}\]
_with a given \(\mathbf{F}_{h}\in C^{0}_{b}(\mathbb{R},\mathbb{C}^{4})\). The homogeneous equation has a two-dimensional stable subspace spanned by the two fundamental solutions_
\[\mathbf{s}_{1}(\xi)=\mathbf{c}^{\prime}_{0,hom}(\varepsilon\xi),\quad\mathbf{ s}_{2}(\xi)=\mathrm{i}J\mathbf{c}_{0,hom}(\varepsilon\xi), \tag{65}\]
_where \(J=\mathrm{diag}(1,1,-1,-1)\). If \(\mathbf{F}_{h}=(F_{0},F_{1},\overline{F_{0}},\overline{F_{1}})\) satisfies the constraints_
\[F_{0}(\xi)=\overline{F}_{0}(-\xi),\qquad F_{1}(\xi)=-\overline{F}_{1}(-\xi), \qquad\xi\in\mathbb{R}, \tag{66}\]
_then there exists a two-parameter family of solutions \(\mathbf{c}_{0,r}\in C^{0}_{b}(\mathbb{R})\) in the form_
\[\mathbf{c}_{0,r}=\alpha_{1}\mathbf{s}_{1}+\alpha_{2}\mathbf{s}_{2}+\tilde{ \mathbf{c}}_{0,r},\]
_where \((\alpha_{1},\alpha_{2})\in\mathbb{C}^{2}\) and \(\tilde{\mathbf{c}}_{0,r}\in C^{0}_{b}(\mathbb{R})\) is a particular solution satisfying the constraints (66) and the bound_
\[\|\tilde{\mathbf{c}}_{0,r}\|_{L^{\infty}(\mathbb{R})}\leq C\|\mathbf{F}_{h}\| _{L^{\infty}(\mathbb{R})} \tag{67}\]
_for an \(\varepsilon\)-independent constant \(C\)._
**Proof.** As noted above, the existence of the two-dimensional stable subspace spanned by (65) follows from the symmetries of the truncated system (53) under spatial translations and phase rotations. Since the truncated system is posed in \(\mathbb{C}^{4}\), the solution space is four dimensional and the other two fundamental solutions of the homogeneous equation are exponentially growing as \(\xi\to\infty\). This can be seen from the limit of \(\Lambda_{0}(\xi)\) as \(\xi\to\infty\). Indeed, we have
\[\lim_{\xi\to\infty}\Lambda_{0}(\xi)=\begin{pmatrix}K&0\\ 0&K\end{pmatrix},\]
the eigenvalues of which are \(\pm\sqrt{-2\widetilde{\omega}(\omega^{\prime\prime}_{n_{0}}(l_{0}))^{-1}}\), each being double, where \(\widetilde{\omega}(\omega^{\prime\prime}_{n_{0}}(l_{0}))^{-1}<0\) by assumption of Theorem 1.7.
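Indeed, \(\det(K-\lambda I)=\lambda^{2}+2\widetilde{\omega}(\omega^{\prime\prime}_{n_{0}}(l_{0}))^{-1}\), so each block \(K\) contributes one stable and one unstable direction under this sign condition, and the limiting matrix has a two-dimensional stable and a two-dimensional unstable subspace.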
As a result, system (64) possesses an exponential dichotomy, see Proposition 1 in Chapter 4 and the discussion starting on page 13 of [12]. The existence of a particular bounded solution \(\mathbf{c}_{0,r}\equiv\mathbf{c}^{(p)}_{0,r}\) satisfying the bound (67) now follows from Theorem 7.6.3 in [10]. Let us then define \(\tilde{\mathbf{c}}_{0,r}:=\mathbf{c}^{(p)}_{0,r}+\tilde{\alpha}_{1}\mathbf{s} _{1}+\tilde{\alpha}_{2}\mathbf{s}_{2}=(\tilde{\mathbf{q}}_{0},\tilde{\mathbf{ q}}_{1},\overline{\tilde{\mathbf{q}}_{0}},\overline{\tilde{\mathbf{q}}_{1}})\) and pick the unique values of \(\tilde{\alpha}_{1}\) and \(\tilde{\alpha}_{2}\) to satisfy the constraints
\[\mathrm{Im}(\tilde{\mathbf{q}}_{0})(0)=0\quad\text{and}\quad\ \mathrm{Re}(\tilde{\mathbf{q}}_{1})(0)=0. \tag{68}\]
This is always possible since
\[\mathbf{s}_{1}(0)=\left(\begin{array}{c}Q^{\prime}_{0}(0)\\ Q^{\prime}_{1}(0)\\ \bar{Q}^{\prime}_{0}(0)\\ \bar{Q}^{\prime}_{1}(0)\end{array}\right)\quad\text{and}\quad\mathbf{s}_{2}(0) =\left(\begin{array}{c}\mathrm{i}Q_{0}(0)\\ \mathrm{i}Q_{1}(0)\\ -\mathrm{i}\bar{Q}_{0}(0)\\ -\mathrm{i}\bar{Q}_{1}(0)\end{array}\right),\]
where \(\mathrm{Re}(Q_{0}(0))=A(0)+\mathcal{O}(\varepsilon)\), \(\mathrm{Im}(Q_{1}(0))=\mathcal{O}(\varepsilon)\), \(\mathrm{Im}(Q_{0}^{\prime}(0))=\mathcal{O}(\varepsilon)\), and \(\mathrm{Re}(Q_{1}^{\prime}(0))=A^{\prime\prime}(0)+\mathcal{O}(\varepsilon)\) by Lemma 4.1 with \(A(0)\neq 0\) and \(A^{\prime\prime}(0)\neq 0\). Hence, for every \(\mathbf{c}_{0,r}^{(p)}\in L^{\infty}(\mathbb{R})\), the linear system (68) for \(\tilde{\alpha}_{1}\) and \(\tilde{\alpha}_{2}\) admits a unique solution such that
\[|\tilde{\alpha}_{1}|+|\tilde{\alpha}_{2}|\leq C\|\mathbf{c}_{0,r}^{(p)}\|_{L^ {\infty}(\mathbb{R})}.\]
The solution \(\tilde{\mathbf{c}}_{0,r}=\mathbf{c}_{0,r}^{(p)}+\tilde{\alpha}_{1}\mathbf{s}_ {1}+\tilde{\alpha}_{2}\mathbf{s}_{2}\) is bounded and satisfies the bound (67).
The matrix \(\Lambda_{0}\) commutes with the symmetry operator defined by (66), i.e. if \(\mathbf{f}\in C_{b}^{0}(\mathbb{R},\mathbb{C}^{4})\) satisfies (66), then so does \(\Lambda_{0}\mathbf{f}\). In addition, the right-hand side \(\mathbf{F}_{h}=(F_{0},F_{1},\overline{F_{0}},\overline{F_{1}})\) satisfies (66). Hence the vector field is closed in the subspace satisfying (66). If a bounded solution \(\tilde{\mathbf{c}}_{0,r}=(\tilde{\mathfrak{q}}_{0},\tilde{\mathfrak{q}}_{1}, \overline{\tilde{\mathfrak{q}}_{0}},\overline{\tilde{\mathfrak{q}}_{1}})\) on \((0,\infty)\) satisfies the constraints (68), then its extension on \((-\infty,\infty)\) belongs to the subspace satisfying (66). Thus, the existence of the bounded solution \(\tilde{\mathbf{c}}_{0,r}\) of (64) satisfying (66) and (67) is proven. A general bounded solution of (64) has the form \(\mathbf{c}_{0,r}=\alpha_{1}\mathbf{s}_{1}+\alpha_{2}\mathbf{s}_{2}+\tilde{ \mathbf{c}}_{0,r}\), where \((\alpha_{1},\alpha_{2})\in\mathbb{C}^{2}\) are arbitrary.
**Remark 4.9**.: _If \(|\alpha_{1}|+|\alpha_{2}|\neq 0\), then the solution \(\mathbf{c}_{0,r}=\alpha_{1}\mathbf{s}_{1}+\alpha_{2}\mathbf{s}_{2}+\tilde{ \mathbf{c}}_{0,r}\) does not satisfy the reversibility constraint (66) because \(\mathbf{s}_{1}\) and \(\mathbf{s}_{2}\) violate the reversibility constraints._
### Estimates for the local center-stable manifold
We are now ready to construct a local center-stable manifold for system (63). Let us split the components of \(\mathbf{c}_{r}\) into three sets denoted by \(\mathbf{c}_{s}\), \(\mathbf{c}_{u}\), and \(\mathbf{c}_{c}\), which correspond to the spectral subspaces of \(\Lambda_{r}\) with eigenvalues \(\lambda\) satisfying \(\mathrm{Re}(\lambda)<0\), \(\mathrm{Re}(\lambda)>0\), and \(\mathrm{Re}(\lambda)=0\), respectively.
**Remark 4.10**.: _These coordinates correspond to the stable, unstable, and reduced center manifold of the linearized system in Lemma 2.3, where the reduced center manifold is obtained after the double zero eigenvalue is removed since the eigenspace of the double zero eigenvalue is represented by the coordinate \(\mathbf{c}_{0,r}\)._
We study the coordinates \(\mathbf{c}_{s}\), \(\mathbf{c}_{u}\), and \(\mathbf{c}_{c}\) in subsets of the phase space \(\mathcal{D}\) denoted by \(\mathcal{D}_{s}\), \(\mathcal{D}_{u}\), and \(\mathcal{D}_{c}\) respectively. Similarly, the restrictions of \(\Lambda_{r}\) to the three subsets of \(\mathcal{D}\) are denoted by \(\Lambda_{s}\), \(\Lambda_{u}\), and \(\Lambda_{c}\) respectively. Moreover, let \(P_{j}\) for \(j=s,u,c\) be the projection operator from \(\mathcal{D}\) to \(\mathcal{D}_{j}\) satisfying \(\|P_{s}\|_{\mathcal{D}\to\mathcal{D}}+\|P_{u}\|_{\mathcal{D}\to\mathcal{D}}+ \|P_{c}\|_{\mathcal{D}\to\mathcal{D}}\leq C\) for some \(C>0\).
We make the following assumption on the semi-groups generated by the linearized system, cf. [11].
**Assumption 4.11**.: _There exist \(K>0\) and \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\) we have_
\[\|e^{\Lambda_{s}\xi}\|_{\mathcal{D}\to\mathcal{D}} \leq K, \xi\geq 0,\] \[\|e^{\Lambda_{u}\xi}\|_{\mathcal{D}\to\mathcal{D}} \leq K, \xi\leq 0,\] \[\|e^{\Lambda_{c}\xi}\|_{\mathcal{D}\to\mathcal{D}} \leq K, \xi\in\mathbb{R}.\]
The following theorem gives the construction of the local center-stable manifold near the reversible homoclinic orbit of Lemma 4.1. It also provides a classification of all parameters of the local manifold which will be needed in Section 5 to satisfy the reversibility conditions. The center-stable manifold is constructed for \(\xi\in[0,\varepsilon^{-(2N+1)}]\) and not for all \(\xi\geq 0\). The bound of \(\mathcal{O}(\varepsilon^{2N})\) on the coordinates \(\mathbf{c}_{0,r}\) and \(\mathbf{c}_{r}\) is consistent with the bound (13) in Theorem 1.7.
**Theorem 4.12**.: _Under Assumption 4.11, there exist \(\varepsilon_{0}>0\), \(C>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\) the following holds. For every \(\mathbf{a}\in\mathcal{D}_{c}\), \(\mathbf{b}\in\mathcal{D}_{s}\), and \((\alpha_{1},\alpha_{2})\in\mathbb{C}^{2}\) satisfying_
\[\|\mathbf{a}\|_{\mathcal{D}_{c}}\leq C\varepsilon^{2N},\quad\|\mathbf{b}\|_{ \mathcal{D}_{s}}\leq C\varepsilon^{2N},\quad|\alpha_{1}|+|\alpha_{2}|\leq C \varepsilon^{2N}, \tag{69}\]
_there exists a family of local solutions of system (63) satisfying the bound_
\[\sup_{\xi\in[0,\varepsilon^{-(2N+1)}]}(\|{\bf c}_{0,r}(\xi)\|_{\mathbb{C}^{4}}+\|{ \bf c}_{c}(\xi)\|_{\mathcal{D}_{c}}+\|{\bf c}_{s}(\xi)\|_{\mathcal{D}_{s}}+\|{ \bf c}_{u}(\xi)\|_{\mathcal{D}_{u}})\leq C\varepsilon^{2N}, \tag{70}\]
_as well as the identities \({\bf c}_{c}(0)={\bf a}\), \(e^{-\xi_{0}\Lambda_{s}}{\bf c}_{s}(\xi_{0})=\mathbf{b}\) at \(\xi_{0}=\varepsilon^{-(2N+1)}\), and \({\bf c}_{0,r}=\alpha_{1}{\bf s}_{1}+\alpha_{2}{\bf s}_{2}+\tilde{\bf c}_{0,r}\) with uniquely defined \(\tilde{\bf c}_{0,r}:[0,\varepsilon^{-(2N+1)}]\to\mathbb{C}^{4}\)._
**Proof.** In order to construct solutions of system (63) on \([0,\xi_{0}]\) with some \(\varepsilon\)-dependent \(\xi_{0}>0\), we multiply the nonlinear vector field of system (63b) by a smooth cut-off function \(\chi_{[0,\xi_{0}]}\in C^{\infty}([0,\infty))\) such that
\[\partial_{\xi}{\bf c}_{r}=\Lambda_{r}{\bf c}_{r}+\varepsilon^{2}\chi_{[0,\xi_{ 0}]}{\bf F}({\bf c}_{0,hom}+{\bf c}_{0,r},{\bf c}_{r})+\varepsilon^{2N+2}\chi _{[0,\xi_{0}]}{\bf F}_{R}({\bf c}_{0,hom}+{\bf c}_{0,r},{\bf c}_{r}),\quad\xi \in[0,\xi_{0}], \tag{71}\]
where \(\chi_{[0,\xi_{0}]}(\xi)=1\) for \(\xi\in[0,\xi_{0}]\) and \(\chi_{[0,\xi_{0}]}(\xi)=0\) for \(\xi\in(\xi_{0},\infty)\). Similarly, we multiply the nonlinear vector field of system (63a) by the same cut-off function and add a symmetrically reflected vector field on \([-\xi_{0},0]\) to obtain
\[\partial_{\xi}{\bf c}_{0,r} =\varepsilon\Lambda_{0}{\bf c}_{0,r}+\varepsilon\chi_{[0,\xi_{0}]}{\bf G}({\bf c}_{0,r},{\bf c}_{r})+\varepsilon^{2N+1}\chi_{[0,\xi_{0}]}{\bf G}_{R}({\bf c}_{0,hom}+{\bf c}_{0,r},{\bf c}_{r})\] \[+\varepsilon\chi_{[-\xi_{0},0]}{\bf G}^{*}({\bf c}_{0,r},{\bf c}_{r})+\varepsilon^{2N+1}\chi_{[-\xi_{0},0]}{\bf G}_{R}^{*}({\bf c}_{0,hom}+{\bf c}_{0,r},{\bf c}_{r}),\quad\xi\in[-\xi_{0},\xi_{0}]\] \[=:\varepsilon\Lambda_{0}{\bf c}_{0,r}+\varepsilon\tilde{\bf G}({\bf c}_{0,r},{\bf c}_{r})+\varepsilon^{2N+1}\tilde{\bf G}_{R}, \tag{72}\]
where
\[\bar{\bf G}_{0}^{*}(-\xi):={\bf G}_{0}(\xi),\quad\bar{\bf G}_{1}^{*}(-\xi):=-{ \bf G}_{1}(\xi),\quad\bar{\bf G}_{R,0}^{*}(-\xi):={\bf G}_{R,0}(\xi),\quad \bar{\bf G}_{R,1}^{*}(-\xi):=-{\bf G}_{R,1}(\xi),\]
for all \(\xi\in[0,\xi_{0}]\), resulting in \(\tilde{\bf G}\) and \(\tilde{\bf G}_{R}\) satisfying the reversibility condition (66). This modification allows us to apply Lemma 4.8 on \(\mathbb{R}\).
We are looking for a global solution of system (71)-(72) for \(\xi\in[0,\infty)\) which may be unbounded as \(\xi\to\infty\). This global solution coincides with a local solution of system (63) on the interval \([0,\xi_{0}]\subset\mathbb{R}\).
We write \({\bf c}_{0,r}=\alpha_{1}{\bf s}_{1}+\alpha_{2}{\bf s}_{2}+\tilde{\bf c}_{0,r}\) and rewrite (72) as an equation for \(\tilde{\bf c}_{0,r}\). By the construction of the vector field in system (72), the vector field satisfies the reversibility constraints (66). By the bounds of Lemma 4.7 and the invertibility of the linear operator in Lemma 4.8, the implicit function theorem implies that there exists a unique map from \({\bf c}_{r}\in C^{0}_{b}([0,\xi_{0}],\mathcal{D})\) to \({\bf c}_{0,r}\in C^{0}_{b}([0,\xi_{0}],\mathbb{C}^{4})\) satisfying
\[\sup_{\xi\in[0,\xi_{0}]}\|{\bf c}_{0,r}(\xi)\|_{\mathbb{C}^{4}}\leq|\alpha_{1} |+|\alpha_{2}|+C\sup_{\xi\in[0,\xi_{0}]}\|{\bf c}_{r}(\xi)\|_{\mathcal{D}}+ \varepsilon^{2N}C\sup_{\xi\in[0,\xi_{0}]}\left(1+\|{\bf c}_{r}(\xi)\|_{ \mathcal{D}}\right), \tag{73}\]
as long as \(|\alpha_{1}|+|\alpha_{2}|+\sup_{\xi\in[0,\xi_{0}]}\|{\bf c}_{r}(\xi)\|_{ \mathcal{D}}\leq C\varepsilon^{\mu}\) for some \(C>0\) and \(\mu>0\).
Using the variation of constants formula, the solution of system (71) projected to \(\mathcal{D}_{c}\oplus\mathcal{D}_{s}\oplus\mathcal{D}_{u}\) can be rewritten in the integral form
\[{\bf c}_{c}(\xi) =e^{\xi\Lambda_{c}}{\bf a}+\varepsilon^{2}\int_{0}^{\xi}e^{(\xi- \xi^{\prime})\Lambda_{c}}P_{c}{\bf F}({\bf c}_{0,hom}(\varepsilon\xi^{\prime})+ {\bf c}_{0,r}(\xi^{\prime}),{\bf c}_{r}(\xi^{\prime}))d\xi^{\prime}\] \[+\varepsilon^{2N+2}\int_{0}^{\xi}e^{(\xi-\xi^{\prime})\Lambda_{c}}P _{c}{\bf F}_{R}({\bf c}_{0,hom}(\varepsilon\xi^{\prime})+{\bf c}_{0,r}(\xi^{ \prime}),{\bf c}_{r}(\xi^{\prime}))d\xi^{\prime}, \tag{74}\]
\[\mathbf{c}_{s}(\xi) =e^{\xi\Lambda_{s}}\mathbf{b}-\varepsilon^{2}\int_{\xi}^{\xi_{0}}e^{ (\xi-\xi^{\prime})\Lambda_{s}}P_{s}\mathbf{F}(\mathbf{c}_{0,hom}(\varepsilon\xi^ {\prime})+\mathbf{c}_{0,r}(\xi^{\prime}),\mathbf{c}_{r}(\xi^{\prime}))d\xi^{\prime}\] \[-\varepsilon^{2N+2}\int_{\xi}^{\xi_{0}}e^{(\xi-\xi^{\prime}) \Lambda_{s}}P_{s}\mathbf{F}_{R}(\mathbf{c}_{0,hom}(\varepsilon\xi^{\prime})+ \mathbf{c}_{0,r}(\xi^{\prime}),\mathbf{c}_{r}(\xi^{\prime}))d\xi^{\prime}, \tag{75}\]
and
\[\mathbf{c}_{u}(\xi) =-\varepsilon^{2}\int_{\xi}^{\xi_{0}}e^{(\xi-\xi^{\prime}) \Lambda_{u}}P_{u}\mathbf{F}(\mathbf{c}_{0,hom}(\varepsilon\xi^{\prime})+ \mathbf{c}_{0,r}(\xi^{\prime}),\mathbf{c}_{r}(\xi^{\prime}))d\xi^{\prime}\] \[-\varepsilon^{2N+2}\int_{\xi}^{\xi_{0}}e^{(\xi-\xi^{\prime}) \Lambda_{u}}P_{u}\mathbf{F}_{R}(\mathbf{c}_{0,hom}(\varepsilon\xi^{\prime})+ \mathbf{c}_{0,r}(\xi^{\prime}),\mathbf{c}_{r}(\xi^{\prime}))d\xi^{\prime}, \tag{76}\]
where \(\mathbf{c}_{c}(0)=\mathbf{a}\), \(\mathbf{c}_{s}(\xi_{0})=e^{\xi_{0}\Lambda_{s}}\mathbf{b}\), and \(\mathbf{c}_{u}(\xi_{0})=\mathbf{0}\). It is assumed in (74), (75), and (76) that \(\mathbf{c}_{0,r}\in C_{b}^{0}([0,\xi_{0}],\mathbb{C}^{4})\) is expressed in terms of \(\mathbf{c}_{r}\in C_{b}^{0}([0,\xi_{0}],\mathcal{D})\) by using the map satisfying (73). The existence of a unique local (small) solution \(\mathbf{c}_{c}\in C_{b}^{0}([0,\xi_{0}],\mathcal{D}_{c})\), \(\mathbf{c}_{s}\in C_{b}^{0}([0,\xi_{0}],\mathcal{D}_{s})\), and \(\mathbf{c}_{u}\in C_{b}^{0}([0,\xi_{0}],\mathcal{D}_{u})\) in the system of integral equations (74), (75), and (76) follows from the implicit function theorem for small \(\varepsilon>0\) and finite \(\xi_{0}>0\). To estimate this solution and to continue it for larger values of \(\xi_{0}\), we use the bounds of Lemma 4.7 and Assumption 4.11. It follows from (74) that
\[\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{c}(\xi)\|_{\mathcal{D}_{c}} \leq K\left[\|\mathbf{a}\|_{\mathcal{D}_{c}}+\varepsilon^{2}C\int_{0}^{\xi_{0}}\|\mathbf{c}_{0,hom}(\varepsilon\xi)\|_{\mathbb{C}^{4}}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}d\xi+\varepsilon^{2}C\xi_{0}\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}^{2}\right.\] \[+\left.\varepsilon^{2N+2}C\int_{0}^{\xi_{0}}\|\mathbf{c}_{0,hom}(\varepsilon\xi)\|_{\mathbb{C}^{4}}d\xi+\varepsilon^{2N+2}C\xi_{0}\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\right]\]
as long as \(|\alpha_{1}|+|\alpha_{2}|+\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\leq C\varepsilon^{\mu},\mu>0\). Similar estimates are obtained for \(\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{s}(\xi)\|_{\mathcal{D}_{s}}\) and \(\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{u}(\xi)\|_{\mathcal{D}_{u}}\).
We denote
\[S(\xi_{0}):=\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{c}(\xi)\|_{\mathcal{D}_{c}}+ \sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{s}(\xi)\|_{\mathcal{D}_{s}}+\sup_{\xi\in[ 0,\xi_{0}]}\|\mathbf{c}_{u}(\xi)\|_{\mathcal{D}_{u}}.\]
Since
\[\varepsilon\int_{0}^{\infty}\|\mathbf{c}_{0,hom}(\varepsilon\xi)\|_{\mathbb{C}^ {4}}d\xi\leq C\]
due to the bound (58), it follows from the previous estimates that there exists \(C>0\) such that
\[S(\xi_{0})\leq C\left(\|\mathbf{a}\|_{\mathcal{D}_{c}}+\|\mathbf{b}\|_{ \mathcal{D}_{s}}+\varepsilon^{2N+1}+\varepsilon S(\xi_{0})+\varepsilon^{2}\xi_ {0}S(\xi_{0})^{2}+\varepsilon^{2N+2}\xi_{0}S(\xi_{0})\right).\]
Using a bootstrapping argument, we show that \(S(\varepsilon^{-(2N+1)})\leq C\varepsilon^{2N}\) if \(\varepsilon>0\) is small enough. To do so, let us choose \(\mathbf{a}\), \(\mathbf{b}\) and \((\alpha_{1},\alpha_{2})\) to satisfy the bound (69) and let \(\delta\in(0,1)\). Then
\[S(\xi_{0})\leq C\left(\varepsilon^{2N}+(\varepsilon+\varepsilon^{2N+1+\delta} \xi_{0})S(\xi_{0})\right) \tag{77}\]
as long as \(S(\xi_{0})\leq\varepsilon^{2N-1+\delta}\).
For \(\xi_{0}=0\) we have \(S(\xi_{0})\leq C\varepsilon^{2N}\) because \(\|\mathbf{c}_{c}(0)\|_{\mathcal{D}_{c}}=\|\mathbf{a}\|_{\mathcal{D}_{c}}\), \(\|\mathbf{c}_{s}(0)\|_{\mathcal{D}_{s}}=\|\mathbf{b}\|_{\mathcal{D}_{s}}\), and \(\|\mathbf{c}_{u}(0)\|_{\mathcal{D}_{u}}=0\). Let us assume that there is \(\xi_{*}\in(0,\varepsilon^{-(2N+1)}]\) such that \(S(\xi_{*})=\varepsilon^{2N-1+\delta}\)
and \(S(\xi_{0})<\varepsilon^{2N-1+\delta}\) for all \(\xi_{0}\in(0,\xi_{*})\). Then (77) implies \(S(\xi_{0})\leq C(\varepsilon^{2N}+\varepsilon^{\delta}S(\xi_{0}))\) for all \(\xi_{0}\in(0,\xi_{*})\) and hence
\[S(\xi_{*})\leq C\varepsilon^{2N}<\varepsilon^{2N-1+\delta}\]
for \(\varepsilon>0\) small enough. This is a contradiction, and we conclude that \(S(\xi_{0})\leq\varepsilon^{2N-1+\delta}\) for all \(\xi_{0}\in[0,\varepsilon^{-(2N+1)}]\). Applying (77) again, we get
\[S(\varepsilon^{-(2N+1)})\leq C\varepsilon^{2N}. \tag{78}\]
In view of the bound (73), it follows that the local solution satisfies the bound (70).
**Remark 4.13**.: _The proximity bound (70) yields the estimate (13) in Theorem 1.7._
**Remark 4.14**.: _Assumption 4.11 can be satisfied for smooth small-contrast potentials, see Lemmas 2.6 and 3.3. For \(\rho\) with a small non-zero contrast, spectral gaps occur, see Figure 4. Smoothness of \(\rho\) allows one to control the size of the spectral gaps for large \(\lambda\), cf. [1]. Assumption 4.11 can be weakened and Jordan blocks can be allowed, see Remark 4.15._
**Remark 4.15**.: _In the generic case of eigenvalues, the Jordan blocks of which have length two, the bounds of Assumption 4.11 must be replaced by_
\[\|e^{\Lambda_{s}\xi}\|_{\mathcal{D}\to\mathcal{D}} \leq K|\xi|,\qquad\xi\geq 0,\] \[\|e^{\Lambda_{u}\xi}\|_{\mathcal{D}\to\mathcal{D}} \leq K|\xi|,\qquad\xi\leq 0,\] \[\|e^{\Lambda_{c}\xi}\|_{\mathcal{D}\to\mathcal{D}} \leq K|\xi|,\qquad\xi\in\mathbb{R}.\]
_The equivalent bound for the estimate of \(\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{c}(\xi)\|_{\mathcal{D}_{c}}\) is given by_
\[\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{c}(\xi)\|_{\mathcal{D}_{c}} \leq K\left[\xi_{0}\|\mathbf{a}\|_{\mathcal{D}_{c}}+\varepsilon^{2}C\int_{0}^{\xi_{0}}(\xi_{0}-\xi)\|\mathbf{c}_{0,hom}(\varepsilon\xi)\|_{\mathbb{C}^{4}}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}d\xi\right.\] \[+ \left.\varepsilon^{2}C\xi_{0}^{2}\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}^{2}+\varepsilon^{2N+2}C\int_{0}^{\xi_{0}}(\xi_{0}-\xi)\|\mathbf{c}_{0,hom}(\varepsilon\xi)\|_{\mathbb{C}^{4}}d\xi\right.\] \[+ \left.\varepsilon^{2N+2}C\xi_{0}^{2}\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{r}(\xi)\|_{\mathcal{D}}\right]\]
_and similarly for \(\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{s}(\xi)\|_{\mathcal{D}_{s}}\) and \(\sup_{\xi\in[0,\xi_{0}]}\|\mathbf{c}_{u}(\xi)\|_{\mathcal{D}_{u}}\). This yields, with the help of Gronwall's inequality and the bound_
\[\lim_{\xi_{0}\to\infty}\varepsilon^{2}\int_{0}^{\xi_{0}}(\xi_{0}-\xi)\| \mathbf{c}_{0,hom}(\varepsilon\xi)\|_{\mathbb{C}^{4}}d\xi=\lim_{y_{0}\to\infty }\int_{0}^{y_{0}}(y_{0}-y)\|\mathbf{c}_{0,hom}(y)\|_{\mathbb{C}^{4}}dy\leq C\]
_that_
\[S(\xi_{0})\leq C\left(\xi_{0}\|\mathbf{a}\|_{\mathcal{D}_{c}}+\xi_{0}\| \mathbf{b}\|_{\mathcal{D}_{s}}+\varepsilon^{2N}+\varepsilon^{2}\xi_{0}^{2}S( \xi_{0})^{2}+\varepsilon^{2N+2}\xi_{0}^{2}S(\xi_{0})\right).\]
_This bound still implies \(S(\xi_{0})\leq C\varepsilon^{2N}\) but for \(\xi_{0}=\varepsilon^{-N-1}\) and \(\|\mathbf{a}\|_{\mathcal{D}_{c}}+\|\mathbf{b}\|_{\mathcal{D}_{s}}\leq \varepsilon^{3N+1}\). Thus, the justification result of Theorem 1.7 can be extended, on the shorter scale \(\xi=\mathcal{O}(\varepsilon^{-N-1})\), to the generic case of eigenvalues with Jordan blocks of length two when Assumption 4.11 cannot be used, see Lemma 3.3._
## 5. End of the proof of Theorem 1.7
In Theorem 4.12 we constructed a family of local bounded solutions of system (63) on \([0,\varepsilon^{-(2N+1)}]\). These solutions are close to the reversible homoclinic orbit of Lemma 4.1 in the sense of the bound (13) for appropriately defined \(v\) and \(h\) but only on \([0,\varepsilon^{-(2N+1)}]\). It remains to extract those solutions of this family which satisfy (13) not only on \([0,\varepsilon^{-(2N+1)}]\), but also on \([-\varepsilon^{-(2N+1)},\varepsilon^{-(2N+1)}]\). We do so by extending the local solutions on \([0,\varepsilon^{-(2N+1)}]\) to the interval \([-\varepsilon^{-(2N+1)},\varepsilon^{-(2N+1)}]\) with the help of the reversibility constraints. Obviously this is only possible for the solutions which intersect the fixed space of reversibility. Hence, for the proof of Theorem 1.7 it remains to prove that the local invariant center-stable manifold of system (63) intersects the subspace given by the reversibility constraints (47).
* Since the initial data \(\mathbf{c}_{c}(0)=\mathbf{a}\in\mathcal{D}_{c}\) in the local center-stable manifold of Theorem 4.12 are arbitrary, the components of \(\mathbf{a}\) can be chosen to satisfy the reversibility constraints (47). For example, using \(\hat{a}_{m,k}=(\hat{a}_{m,k}^{(v)},\hat{a}_{m,k}^{(w)})\) for Fourier representation (45), we can specify the reversibility constraints as \[\mathrm{Im}\ \hat{a}_{m,k}^{(v)}=0,\quad\mathrm{Re}\ \hat{a}_{m,k}^{(w)}=0,\quad(m,k)\in\mathbb{N}_{\mathrm{odd}}\times \mathbb{Z}.\]
* We have \(\mathbf{c}_{0,r}=\alpha_{1}\mathbf{s}_{1}+\alpha_{2}\mathbf{s}_{2}+\tilde{ \mathbf{c}}_{0,r}\), where \(\tilde{\mathbf{c}}_{0,r}\) satisfies the reversibility constraints (47) by Lemma 4.8. Since \(\mathbf{s}_{1}\) and \(\mathbf{s}_{2}\) violate (47), setting \(\alpha_{1}=\alpha_{2}=0\) satisfies the reversibility constraints for \(\mathbf{c}_{0,r}=\tilde{\mathbf{c}}_{0,r}\). The choice of \(\alpha_{1}=\alpha_{2}=0\) is unique by the implicit function theorem in the proof of Theorem 4.12.
* The initial data \(\mathbf{c}_{s}(0)\) and \(\mathbf{c}_{u}(0)\) are not arbitrary since the stable and unstable manifold theorems are used for the construction of \(\mathbf{c}_{s}\) and \(\mathbf{c}_{u}\) in the proof of Theorem 4.12. Combining \(\mathbf{c}_{s/u}:=(\mathbf{c}_{s},\mathbf{c}_{u})\) for the complex eigenvalues outside \(\mathrm{i}\mathbb{R}\), we can write \(\mathbf{c}_{s/u}(0)=\mathbf{b}+\tilde{\mathbf{c}}_{s/u}(0)\), where \(\tilde{\mathbf{c}}_{s/u}(0)\) is uniquely defined, of order \(\mathcal{O}(\varepsilon^{2N})\), and depends on \(\mathbf{b}\in\mathcal{D}_{s}\) only at higher order. By the implicit function theorem, there exists a unique solution \(\mathbf{b}\in\mathcal{D}_{s}\) of \(\mathbf{b}=\mathbf{c}_{s/u}(0)-\tilde{\mathbf{c}}_{s/u}(0)\) satisfying the reversibility constraints (47), and this unique \(\mathbf{b}\in\mathcal{D}_{s}\) satisfies the bound (69).
**Remark 5.1**.: _There still exist infinitely many parameters after \(\mathbf{a}\in\mathcal{D}_{c}\) have been chosen to satisfy the constraint (47), namely, \(\mathrm{Re}\ \hat{a}_{m,k}^{(v)}\) and \(\mathrm{Im}\ \hat{a}_{m,k}^{(w)}\) for \((m,k)\in\mathbb{N}_{\mathrm{odd}}\times\mathbb{Z}\). These parameters of the solution of Theorem 1.7 must satisfy the bound (69) in Theorem 4.12._
**Remark 5.2**.: _Since the local center-stable manifold intersects the plane given by the reversibility constraints (47), we have thus constructed a family of reversible solutions on \([-\varepsilon^{-(2N+1)},\varepsilon^{-(2N+1)}]\) while preserving the bound (14). Tracing the coordinate transformations back to the original variables completes the proof of Theorem 1.7._
|
2310.15491 | Early Planet Formation in Embedded Disks (eDisk) X: Compact Disks,
Extended Infall, and a Fossil Outburst in the Class I Oph IRS43 Binary | We present the first results from the Early Planet Formation in Embedded
Disks (eDisk) ALMA Large Program toward Oph IRS43, a binary system of solar
mass protostars. The 1.3 mm dust continuum observations resolve a compact disk,
~6au radius, around the northern component and show that the disk around the
southern component is even smaller, <~3 au. CO, 13CO, and C18O maps reveal a
large cavity in a low mass envelope that shows kinematic signatures of rotation
and infall extending out to ~ 2000au. An expanding CO bubble centered on the
extrapolated location of the source ~130 years ago suggests a recent outburst.
Despite the small size of the disks, the overall picture is of a remarkably
large and dynamically active region. | Suchitra Narayanan, Jonathan P. Williams, John J. Tobin, Jes K. Jorgensen, Nagayoshi Ohashi, Zhe-Yu Daniel Lin, Merel L. R. van't Hoff, Zhi-Yun Li, Adele L. Plunkett, Leslie W. Looney, Shigehisa Takakuwa, Hsi-Wei Yen, Yusuke Aso, Christian Flores, Jeong-Eun Lee, Shih-Ping Lai, Woojin Kwon, Itziar de Gregorio-Monsalvo, Rajeeb Sharma, Chang Won Lee | 2023-10-24T03:39:02Z | http://arxiv.org/abs/2310.15491v1 | # Early Planet Formation in Embedded Disks (eDisk) X:
###### Abstract
We present the first results from the Early Planet Formation in Embedded Disks (eDisk) ALMA Large Program toward Oph IRS43, a binary system of solar mass protostars. The 1.3 mm dust continuum observations resolve a compact disk, \(\sim 6\) au radius, around the northern component and show that the disk around the southern component is even smaller, \(\lesssim 3\) au. CO, \({}^{13}\)CO, and C\({}^{18}\)O maps reveal a large cavity in a low mass envelope that shows kinematic signatures of rotation and infall extending out to \(\sim 2000\) au. An expanding CO bubble centered on the extrapolated location of the source \(\sim 130\) years ago suggests a recent outburst. Despite the small size of the disks, the overall picture is of a remarkably large and dynamically active region.
Prot
stars may significantly affect their surroundings and evolution (Offner et al., 2010). However, disks can persist around each individual star and/or the system and ultimately turn into stable planetary systems (Thebault and Haghighipour, 2015). The study of young binary systems is therefore important both for a more complete picture of star and planet formation in general and also to extend the range of physical conditions for testing models.
The focus of this paper is on the protostellar binary system, Oph IRS43 (also known as YLW 15 and GY 265), that was observed as part of the Atacama Large Millimeter Array (ALMA) Large Program, Early Planet Formation in Embedded Disks (hereafter eDisk). This program, described in Ohashi et al. (2023), observed 12 Class 0 and 7 Class I protostars at high spatial resolution (\(0\farcs 04\)) with the primary goal of studying the properties of their accompanying disks.
Oph IRS43, hereafter IRS43, is a Class I embedded protostellar binary located in the L1688 region of the Ophiuchus molecular cloud complex. We adopt a distance to the source of 137.3 pc based on Very Long Baseline Array (VLBA) parallax measurements of 12 other (single) young stellar objects in L1688 (Ortiz-Leon et al., 2017). The combined bolometric luminosity and temperature of the system are \(L_{\rm bol}=4.15\,L_{\odot},\ T_{\rm bol}=193\,\)K (see Ohashi et al., 2023).
Girart et al. (2000) first showed that IRS43 was a \(0\farcs 6\) binary from centimeter wavelength observations with the Very Large Array (VLA) carried out in 1989. We follow their naming convention and designate the northern source, VLA1, and the southern source, VLA2. The latter is much brighter in the near-infrared (Duchene et al., 2007) and spectroscopy shows that it is a heavily extincted, \(A_{\rm V}=40\,\)mag, cool star with a KIV/V spectral type corresponding to an effective temperature \(\sim 4300\,\)K. It has a strong continuum excess (veiling = 3.0 at 2.2 \(\mu\)m) that indicates a high accretion rate, \(\sim 10^{-6}\,M_{\odot}\,\)yr\({}^{-1}\), and is rapidly rotating at a rate, \(v\sin i\sim 50\,\)km s\({}^{-1}\), that is typical of embedded protostars (Greene and Lada, 2002).
Subsequent VLA imaging over 12 years revealed a common proper motion of 24 milli-arcsecond yr\({}^{-1}\) and relative orbital motion of the binary (Curiel et al., 2003). ALMA observations of the source were presented by Brinch et al. (2016) who used the extended time baseline and archival VLA data to extend the astrometry and determine an orbital solution. They found that the motions are consistent with a circular orbit in or near the plane of the sky with a period of 450 years and a total mass of \(2.01\pm 0.47\,M_{\odot}\) with an equal mass ratio, i.e. two solar mass stars. These astrometric constraints on the proper motion and total mass are essential to our interpretation of the eDisk data in §3.
The first millimeter interferometric observations of IRS43 were made using the Submillimeter Array (SMA) by Jorgensen et al. (2009) and Brinch and Jorgensen (2013), revealing that IRS43 has strong lines on top of a weak continuum with an extended, flattened structure in HCO\({}^{+}\) but the \(1.7^{\prime\prime}\times 1.4^{\prime\prime}\) resolution was too low to resolve the binary. The Brinch et al. (2016) ALMA observations were the highest resolution measurements of this system to date, \(0\farcs 2\), and clearly separated the disks around each protostar but they remained unresolved with an upper limit to their radius of \(\sim 20\,\)au. However, the HCO\({}^{+}\) and HCN line data suggested that the larger scale flattened structure was rotating around the binary and that the misalignment between the stellar orbits and circumbinary material testified to a turbulent origin.
IRS43 is also a well-known X-ray source that undergoes energetic flares every \(\sim 20\,\)hr quasi-periodically due most likely to a strong star-disk interface (Montmerle et al., 2000). It stands out for having undergone the brightest "superflare" ever witnessed in T Tauri stars, when in 1995 its X-ray luminosity peaked at \(L_{\rm X}\sim 10-100\,L_{\odot}\) and outshone the entire system at all other wavelengths for a couple of hours (Grosso et al., 1997). Although the positional accuracy of the data was unable to determine whether the source was VLA1 or VLA2, such a superflare must have been powered by a massive accretion event and would have been accompanied by full ionization of the surroundings within a few tenths of an au. Our eDisk observations suggest a different, though perhaps related, outburst event occurred \(\sim 130\,\)yr ago.
The rest of the paper is organized as follows: §2 describes the ALMA observations, data reduction and imaging procedure. §3 presents the results separated into subsections focused on the 1.3 mm dust continuum, the molecular line data showing the kinematics of the system, and an expanding bubble that signposts a significant recent outburst. §4 discusses the implications of our findings and §5 summarizes the paper.
## 2 Observations and Data Reduction
### Observations
The ALMA observations used in this work were taken in Cycle 7 as part of program 2019.1.00261.L (PI: N. Ohashi). Detailed information on the configurations, spectral setup and targeted lines, and the other observed targets are in Ohashi et al. (2023). In brief, the IRS43 observations were carried out in 5 execution blocks (EB) between May and October 2021 with between 41 and 46
antennas in a compact and extended configuration (C43-5 and C43-8, respectively) with baselines extending over 15-11500 m. The total on-source integration time was 170 minutes in a single Band 6 spectral setting. Here we present results from the 234 GHz continuum and the CO, \({}^{13}\)CO, C\({}^{18}\)O, SO, and H\({}_{2}\)CO lines at 219-230 GHz.
### Calibration
The data were calibrated using the Common Astronomy Software Applications (CASA) package (McMullin et al., 2007) version 6.2.1. To ensure uniformity in the eDisk data products and comparison of structures and kinematics across the sample, the project team created a specific eDisk data reduction routine\({}^{1}\) that builds on the calibration strategy developed for the Disk Substructures at High Angular Resolution Project (DSHARP) ALMA Large Program (Andrews et al., 2018). That general reduction procedure is described in Ohashi et al. (2023), but an additional step (the "two-pass" method) was required for 5 of the 19 sources, including IRS43, where some of the delivered data had high phase decorrelation, as we describe below.
Footnote 1: All of the data reduction (i.e., self-calibration and imaging) can be found at [http://github.com/jitobin/edisk](http://github.com/jitobin/edisk).
#### 2.2.1 Combining multiple datasets with high phase noise
Each execution block was first passed through the standard ALMA data reduction pipeline (version 2021.2.0.128) to remove atmospheric and instrumental effects using the quasar calibrators, followed by self-calibration on the source itself to increase the signal-to-noise ratio. The top panel of Figure 1 shows the real part of the source visibility amplitudes for each dataset across a range of overlapping baseline lengths. There is some scatter due to time variability in the calibrators and/or source. Notably, there is a significant loss of flux at long baselines in the first short-baseline track (SB1) due to decorrelation caused by high phase noise. The middle panel of Figure 1 shows what the visibility amplitudes would be had we followed the standard eDisk reduction procedure, where the fluxes are normalized to a common scale first and then self-calibrated (see Ohashi et al. 2023). In the "two-pass" method, however, we first ran the self-calibration to completion without scaling the fluxes. We then determined which EB had the highest quality data based on the smoothness of the visibility-amplitude profiles and the quality of the images made on a per-EB basis, and selected this to be the reference EB. The reference EB showed the least amount of decorrelation and the lowest level of imaging artifacts due to phase noise. For IRS43, this was the third long-baseline track (LB3). We subsequently rescaled the other EBs to the same average amplitude for \(uv\)-distances out to 800 k\(\lambda\), and re-ran the data reduction script from the start with the pre-determined scaling factors. The result of this second pass through the calibration process, shown in the bottom panel of Figure 1, has a much more uniform flux profile and reduced scatter across all datasets and baselines (consistent with a \(\sim 10\%\) absolute flux calibration uncertainty), which enables a higher quality image to be produced from their combination.
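To make the rescaling step concrete, the sketch below (a schematic in NumPy, not the actual eDisk reduction script; in practice the correction is applied within CASA, and the execution-block labels and toy visibility values here are invented for illustration) estimates a per-EB scale factor by matching the mean visibility amplitude over baselines shorter than 800 k\(\lambda\) to that of the reference EB.

```python
import numpy as np

def mean_amp(uvdist_klam, vis, uv_max=800.0):
    """Mean visibility amplitude over baselines shorter than uv_max (kilo-lambda)."""
    sel = uvdist_klam < uv_max
    return np.mean(np.abs(vis[sel]))

def flux_scale_factors(ebs, ref_key, uv_max=800.0):
    """Amplitude scale factor for each execution block relative to the reference EB.

    `ebs` maps an EB label to (uvdist_klam, vis): baseline lengths in kilo-lambda
    and complex visibilities.  Multiplying each EB's visibilities by its factor
    brings it onto the amplitude scale of the reference EB.
    """
    ref = mean_amp(*ebs[ref_key], uv_max=uv_max)
    return {key: ref / mean_amp(*data, uv_max=uv_max) for key, data in ebs.items()}

# Toy example with fabricated numbers (not the IRS43 measurements):
rng = np.random.default_rng(1)
n = 500
toy = {
    "SB1": (rng.uniform(15, 1500, n), 0.90 * (1 + 0.05 * rng.standard_normal(n)) + 0j),
    "LB3": (rng.uniform(15, 1500, n), 1.00 * (1 + 0.05 * rng.standard_normal(n)) + 0j),
}
print(flux_scale_factors(toy, ref_key="LB3"))  # {'SB1': ~1.11, 'LB3': 1.0}
```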
### Imaging
Figure 1: _Top:_ Visibility amplitudes as a function of _uv_-distance after the standard pipeline calibration. Note that short-baseline 1 (SB1) decreases anomalously due to loss of coherence on long baselines. _Middle:_ Same as top panel but after the standard self-calibration procedure (for a detailed description see Ohashi et al., 2023). _Bottom:_ Same as top panel but after joint self-calibration using the two-pass method. The procedure is described in Section 2.2.1. For these data, the long-baseline 3 (LB3) was chosen as the reference, or best, dataset to which all the other datasets were scaled.

Due to the range of spatial scales and surface brightness of the various features, we imaged the data using different baseline ranges and visibility weightings. To provide an overall view and show the faint, extended continuum emission, we used the short-baseline data only (hereafter SB) and natural weighting (Briggs \(\mathtt{robust}=2\)) with a correspondingly large beam size of \(0\farcs 31\times 0\farcs 24\) and a low continuum RMS noise level of \(0.015\,\mathrm{mJy\,beam^{-1}}\). For the highest resolution view of the disks, we combined the short- and long-baseline data (hereafter SB+LB) and weighted more toward the longer baselines (Briggs weighting \(\mathtt{robust}=-1\) or \(-0.5\)) to achieve much smaller beam sizes \(\sim\!0\farcs 04\) but at the expense of a higher continuum RMS \(\sim\!0.05\,\mathrm{mJy\,beam^{-1}}\). The line maps were all produced from the SB+LB data using a \(2000\,\mathrm{k\lambda}\) taper and intermediate weighting (Briggs \(\mathtt{robust}=0.5\)) with typical beam sizes \(\sim 0\farcs 15\) and RMS levels \(\sim 1.5\,\mathrm{mJy\,beam^{-1}\,km\,\,s^{-1}}\). For the weaker lines, we smoothed the data to a beam size of \(0\farcs 25\).
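As a rough illustration of these imaging choices, the schematic `tclean` calls below show how the quoted weighting and taper settings would be specified in CASA. These are not the eDisk imaging scripts: only the robust values and the \(2000\,\mathrm{k\lambda}\) taper follow the text, while the measurement-set names, image and cell sizes, spectral-line setup, and cleaning controls are placeholders.

```python
# Schematic tclean setups (CASA 6 modular interface); parameter values other
# than the weighting/taper choices quoted in the text are placeholders.
from casatasks import tclean

# (1) Continuum, short-baseline data only, natural-like weighting (robust = 2)
tclean(vis='IRS43_SB.ms', imagename='IRS43_SB_continuum',
       specmode='mfs', deconvolver='multiscale', scales=[0, 5, 15],
       weighting='briggs', robust=2.0,
       imsize=1600, cell='0.025arcsec', niter=50000, threshold='0.05mJy')

# (2) Continuum, SB+LB combined, weighted toward long baselines (robust = -1)
tclean(vis='IRS43_SBLB.ms', imagename='IRS43_SBLB_continuum_robust-1',
       specmode='mfs', deconvolver='multiscale', scales=[0, 5, 15],
       weighting='briggs', robust=-1.0,
       imsize=4096, cell='0.006arcsec', niter=50000, threshold='0.1mJy')

# (3) Line cube, SB+LB, intermediate weighting (robust = 0.5) with 2000 klambda taper
tclean(vis='IRS43_SBLB.ms', imagename='IRS43_SBLB_C18O_2-1',
       specmode='cube', restfreq='219.560GHz',
       start='-7km/s', width='0.2km/s', nchan=120,
       deconvolver='multiscale', scales=[0, 5, 15],
       weighting='briggs', robust=0.5, uvtaper=['2000klambda'],
       imsize=1600, cell='0.025arcsec', niter=50000, threshold='3mJy')
```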
## 3 Results
### Continuum data
#### 3.1.1 Overall view
A map of the large scale \(1.3\,\mathrm{mm}\) continuum emission toward the region, optimized to highlight low surface brightness features more than high resolution, is shown in Figure 2. There are 3 compact sources corresponding to disks around each member of the IRS43 binary and also the infrared source, GY 263. We present higher resolution images of these in the following subsection. There is also an elongated, slightly curved structure that extends \(\sim 7\arcsec\approx 10^{3}\,\mathrm{au}\) on either side of the binary. The line data, presented in §3.2, reveal this to be a rotating, infalling envelope. The flux density of this extended continuum emission (without the disks) is \(36\,\mathrm{mJy}\), which corresponds to a total dust mass of only \(10\,M_{\oplus}\) at a temperature of \(30\,\mathrm{K}\) derived from H\({}_{2}\)CO line ratios. Assuming an ISM gas-to-dust ratio of 100, the implied total mass is \(3\times 10^{-3}\,M_{\odot}\). Note, however, that the envelope size is greater than the maximum recoverable size of the observations, \(\sim 3\arcsec\), so this envelope mass measurement should be considered a lower limit.
#### 3.1.2 Circumstellar disks
Including the long-baseline data and using different weighting mechanisms, we can zoom into each of the three disks. Figure 3 shows the two disks in the IRS43 binary from the SB+LB data weighted heavily to long baselines (Briggs \(\mathtt{robust}=-1\)). The disks are extremely compact. Their positions and sizes were determined from Gaussian fits in the image plane using the CASA imfit routine and are listed in Table 3. VLA1 is resolved in both major and minor axes, with a deconvolved FWHM size of \(12\,\mathrm{au}\), and, assuming a flat disk, has an inclination of \(78\arcdeg\), inferred from the arc-cosine of the axis ratio. VLA2 is just barely resolved along the beam minor axis with a remarkably small deconvolved FWHM size of \(2.3\,\mathrm{au}\). The source is faint so we checked different weighting schemes and found that it was detected at slightly higher signal-to-noise but also resolved in a slightly larger beam with a similar deconvolved size (\(2.7\,\mathrm{au}\)) with Briggs \(\mathtt{robust}=-0.5\). To our knowledge, this is the smallest resolved protostellar disk in the ALMA literature to date.
The flux densities are \(10.6\,\mathrm{mJy}\) and \(1.1\,\mathrm{mJy}\) for VLA1 and VLA2, respectively. Using the simplest assumptions of optically thin, isothermal emission, these convert to dust masses \(3.6\,M_{\oplus}\) and \(0.38\,M_{\oplus}\) at a temperature \(T_{\mathrm{dust}}=30\,\mathrm{K}\) appropriate for Class I sources (Tobin et al., 2015) and a dust opacity \(\kappa_{\nu}=2.3\,\mathrm{cm^{2}\,g^{-1}}\)(Beckwith et al., 1990). These are likely lower limits, especially for VLA1, since the high brightness temperature indicates that the emission is in fact optically thick.
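For reference, this optically thin conversion is a short calculation given the Planck function; the sketch below reproduces the quoted masses, with the \(\sim\)140 pc distance to Ophiuchus taken as an assumption (it is not quoted in this section) and constants in cgs units.

```python
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10            # cgs constants
pc, mJy, M_earth = 3.086e18, 1.0e-26, 5.972e27         # cm, erg s^-1 cm^-2 Hz^-1, g

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def dust_mass(F_nu_mJy, d_pc=140.0, T_dust=30.0, kappa=2.3, nu=230e9):
    """Optically thin, isothermal dust mass M = F_nu d^2 / (kappa_nu B_nu(T)), in Earth masses."""
    return F_nu_mJy * mJy * (d_pc * pc) ** 2 / (kappa * planck(nu, T_dust)) / M_earth

print(dust_mass(10.6), dust_mass(1.1))                 # ~3.6 and ~0.4 M_earth for VLA1 and VLA2
```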
The third disk lies around the known Class II source, GY 263 (Allen et al., 2002), \(6\farcs 6\) (\(900\,\mathrm{au}\)) to the North-West of IRS43. Figure 4 shows a high resolution image from the SB+LB data with Briggs \(\mathtt{robust}=-0.5\). Although this disk was not the target of the eDisk program, it is nonetheless interesting in its own right as we serendipitously find a central hole. The disk flux of \(13.1\,\mathrm{mJy}\) implies a dust mass, \(4.5\,M_{\oplus}\), and the ring has a radius of \(0\farcs 17=23\,\mathrm{au}\) with an inclination of \(24\arcdeg\) at a position angle of \(130\arcdeg\). This tells us that GY 263 is a small transition disk (e.g., van der Marel et al., 2022) while also demonstrating both the ability of the eDisk data to image small scale disk features and the clear visual differences with the two embedded disks. Nevertheless, given the large projected distance between GY 263 and IRS43, and because we do not see any signs of interaction in the line data or in archival datasets, we consider the sources to be essentially physically independent and do not discuss GY 263 further here.
### Line data
Spectral line emission is detected predominantly from \(v_{\mathrm{LSR}}=-7\) to \(2\,\mathrm{km\,\,s^{-1}}\) and \(6\) to \(15\,\mathrm{km\,\,s^{-1}}\) in CO and its isotopologues, H\({}_{2}\)CO, and SO. There is very little signal around the central velocities of the system, \(2\) to \(6\,\mathrm{km\,\,s^{-1}}\), in CO and \({}^{13}\)CO. The most likely cause is spatial filtering of large scale emission, although there may also be some absorption by an intervening cold molecular layer. The actual central velocity of each source cannot be precisely determined as we do not detect any line emission that can be clearly associated with either of the two disks. However, from the more red- and blue-shifted parts of the spectra, we can study the kinematics of the envelope seen in the extended continuum.
The IRS43 protostellar binary lies at the center of a molecular filament or envelope that we detect in CO 2-1 and its isotopologues \({}^{13}\)CO and C\({}^{18}\)O, as well as H\({}_{2}\)CO \(3_{0,3}-2_{0,2}\) and SO \(6_{5}-5_{4}\). A 3-color CO, \({}^{13}\)CO, and C\({}^{18}\)O moment 0 map is shown in Figure 5. The overlay of the 3 lines gives a better representation of the molecular gas structure than any single line due to the range of column densities that they trace and the high level of absorption and interferometric filtering of large scale structure.
The faint extended dust emission in Figure 2 is best seen in the CO isotopologues because the emission from the optically thick CO line is relatively uniform, resulting in a weak interferometric response. The total integrated \({}^{13}\)CO emission over the entirety (760 square arcseconds) of the mapped structure is 39 Jy km s\({}^{-1}\), which corresponds to a mean column density of \(1.1\times 10^{15}\) cm\({}^{-2}\) at 30 K and a total gas mass \(\sim 4\times 10^{-4}\,M_{\odot}\) for an abundance \([^{13}\)CO]/[H\({}_{2}]=2\times 10^{-6}\) (Ripple et al., 2013), which is in good agreement with the dust-derived mass.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Source & ICRS R.A. & ICRS Dec. & Deconvolved FWHM, PA & Peak \(I_{\nu},T_{\rm b}\) & \(F_{\nu}\) & incl & Dust Mass \\ & [h m s] & [d m s] & [mas,\({}^{\circ}\)] & [mJy beam\({}^{-1}\), K] & [mJy] & [\({}^{\circ}\)] & [\(M_{\oplus}\)] \\ \hline VLA1 & 16:27:26.906 & \(-\)24:40:50.81 & 89\(\times\)18, 133.5 \(\pm\) 1.1 & 3.30, 62 & 10.62 & 78 & 3.4 \\ VLA2 & 16:27:26.911 & \(-\)24:40:51.40 & 19\(\times\)16, 62.5 \(\pm\) 6.3 & 0.88, 20 & 1.08 & 32 & 0.34 \\ \hline \end{tabular}
\end{table}
Table 3: Gaussian fits to the continuum sources
Figure 2: 1.3 mm continuum image using only the short baseline data with a log stretch to emphasize the faint, extended emission around the IRS43 binary. The coordinates are arcsecond offsets from the position of VLA1 and the synthesized beam on the bottom left corresponds to 0\(\farcs\)31 \(\times\) 0\(\farcs\)24.
The CO emission is more prominent along the bright rims of a cavity that extends north (most prominently) and south of the circumbinary envelope. There is a hole in the northern structure that is more apparent in the channel maps, which we discuss further in §3.3.
There is a slight enhancement of CO toward VLA1 and VLA2 in the moment map but, partially due to the strong obscuration or filtering around the central velocities, we were unable to identify line emission in any molecule or transition that could be clearly associated with either disk. Williams & Best (2014) calculated the \({}^{13}\)CO and C\({}^{18}\)O emission for a grid of disk models with varying masses and abundances, marginalizing over size, surface density, and temperature profiles. From the non-detections here, we estimate a \(3\sigma\) upper limit to the gas mass of \(0.3\,M_{\rm Jup}\) for each disk. This is about 30 and 300 times the dust mass for VLA1 and VLA2, respectively.
The observing passband contains two additional H\({}_{2}\)CO lines, \(3_{2,1}-2_{2,0}\) and \(3_{2,2}-2_{2,1}\), that were weakly detected in the envelope. The relative strength of these higher excitation transitions constrains the gas temperature. The spatially and velocity integrated ratio of line strengths for the \(3_{0,3}-2_{0,2}/3_{2,1}-2_{2,0}\) transitions is \(7\pm 1.5\), from which we derive an average rotational temperature of \(28\pm 4\) K assuming LTE and using the radiative transfer code RADEX (van der Tak et al., 2007).
#### 3.2.2 Kinematics
The envelope shows a clear velocity gradient, redshifted to blueshifted from east to west centered on \(\sim 4\) km s\({}^{-1}\). A collage of peak emission, first moment, and position-velocity diagrams for the five brightest lines is shown in Figure 6. The optically thick CO emission is the most spatially distributed and shows the temperature structure of the gas, highlighting the rims of the cavity as noted above. The average brightness temperature over the envelope is 25 K but the cavity rims are about a factor of two higher. The \({}^{13}\)CO emission is more optically thin and highlights the envelope, with the peak emission here showing a flared appearance. The less abundant and optically thinner C\({}^{18}\)O line is detected where the CO and \({}^{13}\)CO lines are filtered out which provides an important view of the structure and kinematics of the envelope near the source velocity. The peak emission map is more uniform and less tightly pinched than the \({}^{13}\)CO but also centered around VLA1, and the velocity gradient in the position-velocity map extends linearly through the center. The H\({}_{2}\)CO transition shown here was observed at relatively low velocity resolution, 1.34 km s\({}^{-1}\), but is also less affected by spatial filtering than CO and \({}^{13}\)CO near the central velocities and has a similar spatial and kinematic appearance as C\({}^{18}\)O.
Figure 4: Zooming in on the continuum image toward the infrared source GY 263 with a beam size of \(0\farcs 056\times 0\farcs 039\) and an Asinh intensity scale. There is a well resolved central cavity showing that this is a small transition disk.
Figure 3: High resolution 1.3 mm continuum image of the compact dusty disks around each member of the IRS43 binary. This was produced by strongly weighting the long baseline visibilities and is shown on an Asinh intensity scale. The beam size is \(0\farcs 046\times 0\farcs 030\) (\(\sim 6\) au \(\times\) 4 au).
Finally, the SO line was also weakly detected in the envelope but, unlike the other tracers, peaked strongly on VLA1.
The position-velocity diagrams shown in the third column of Figure 6 show that the velocity gradient across the envelope is not simply inherited shear or rotation of the molecular cloud or filament from which the stars formed, but increases around the stars due to their gravity. The binary orbit has been accurately determined from the two decades of high resolution VLA and ALMA astrometry and the total mass of the system is constrained to be \(M_{*}=2.01\pm 0.47\,M_{\odot}\)(Brinch et al., 2016). However, the gas motions are significantly faster than Keplerian even for the maximum projected velocity case of the rotation axis being in the plane of the sky (solid cyan lines in Figure 6) and a central mass equal to the upper limit of \(2.5\,M_{\odot}\). This therefore indicates that the envelope is not purely rotating.
Following Cesaroni et al. (2011) who considered the motions in an envelope around a young massive star, we include a free-fall component perpendicular to the rotation, which results in a radial velocity profile,
\[v(r)=\bigg{(}\frac{GM_{*}}{r}\bigg{)}^{1/2}\,\bigg{(}\frac{x}{r}+\sqrt{2}\frac{ z}{r}\bigg{)}, \tag{1}\]
where \(x\) is the physical distance in the East-West direction in the plane of the sky, \(z\) is the physical distance along the line-of-sight, such that the radius from the center of mass of the binary is \(r=(x^{2}+z^{2})^{1/2}\). The dotted white lines in Figure 6 show the allowed range (minimum to maximum) of projected velocities that we would expect from this rotating and infalling model for the highest stellar mass consistent with the astrometry, \(M_{*}=2.5\,M_{\odot}\). This matches the outer extent of the emission for each line in Figure 6 where the pure-Keplerian fit is too low. In addition, there is some flow toward the observer that lies within the expected bounds of the model for the optically thinner lines, C\({}^{18}\)O, H\({}_{2}\)CO, and SO, where we can see through to the back side.
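As a quick check, Eq. (1) can be evaluated numerically to reproduce the dotted envelope in the position-velocity diagrams; the sketch below brackets the projected velocity over the line of sight at each plane-of-sky offset, using \(M_{*}=2.5\,M_{\odot}\) as in the text (the grid extent is an assumption matching the mapped scales).

```python
import numpy as np

G, M_sun, au = 6.674e-8, 1.989e33, 1.496e13            # cgs
M_star = 2.5 * M_sun

def v_proj(x_au, z_au):
    """Projected rotation + free-fall velocity of Eq. (1), in km/s.
    x is the East-West offset in the sky plane, z the offset along the line of sight."""
    x, z = x_au * au, z_au * au
    r = np.hypot(x, z)
    return np.sqrt(G * M_star / r) * (x / r + np.sqrt(2.0) * z / r) / 1.0e5

x = np.linspace(-2500, 2500, 200)                       # offsets in au (grid chosen to avoid r = 0)
z = np.linspace(-2500, 2500, 200)
X, Z = np.meshgrid(x, z, indexing="ij")
v = v_proj(X, Z)
v_lo, v_hi = v.min(axis=1), v.max(axis=1)               # velocity range allowed at each offset x
```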
Figure 5: Velocity integrated maps of the CO, \({}^{13}\)CO, and C\({}^{18}\)O emission (from -3 to +11 km/s) in red, green, and blue, respectively. The maps are centered on VLA1 and each autoscaled from their minimum to maximum and shown on a linear scaling. The CO map has been smoothed to \(0\farcs 25\) and the isotopologues to \(0\farcs 5\) to enhance the extended structures. The positions of the three disks are shown by the white crosses and other prominent features are labeled.
Figure 6: Structure and kinematics of the IRS43 envelope as seen in the five brightest lines detected in our observations: the 2–1 transitions of CO, \({}^{13}\)CO, and C\({}^{18}\)O, H\({}_{2}\)CO \(3_{0,3}-2_{0,2}\), and SO \(6_{5}-5_{4}\). The resolution of the data is indicated in the lower left corner of each panel. The left column shows the peak emission map on an Asinh scale over the following ranges in mJy beam\({}^{-1}\); CO: 10–220; \({}^{13}\)CO: 15–90; C\({}^{18}\)O: 10–30; H\({}_{2}\)CO: 5–15; SO: 10–30. The large and small crosses indicate the positions of VLA1 and VLA2, respectively. The middle column is the first moment for each line on the same velocity scale from 0 (red) to 6 km s\({}^{-1}\) (blue) and shows the East-West velocity gradient in the envelope. The rectangular box shows the cut along the RA axis with a width of \(0\farcs 5\) in Dec used for the position-velocity diagrams displayed in the right column. All 5 tracers show emission at velocities greater than expected for pure Keplerian velocity for an edge-on geometry and a central mass of \(2.5\,M_{\odot}\) (the maximum mass allowed by the astrometry, shown by the solid cyan line). The dotted white line brackets the range of projected velocities for a combination of rotation and free-fall collapse with the same central mass and matches the data much better. The prominent SO emission centered on VLA1 has a high velocity gradient and may be due to an infall shock onto the disk.
Together, this demonstrates that the envelope is not only rotating but continuing to fall onto the system, and that the gravitational influence of the protostars is felt out to beyond a radius \(R>2500\,\mathrm{au}\), limited by the map size and sensitivity of the data.
Only the SO line actually peaks on the source, and even then only on VLA1, not VLA2. The position-velocity diagram shows a high velocity gradient across VLA1, though it does not exceed the velocity range of the rotating-infalling envelope model. Inspection of channel maps shows that the gradient follows the same East-West direction as the larger scale envelope. The weak, distributed SO emission in the envelope is similar to that seen in other embedded protostars (Tychoniec et al., 2021), but the strong enhancement on the source may signpost infall from the envelope onto the disk similar to observations by Sakai et al. (2014) and modeled by van Gelder et al. (2021). The velocity gradient of this feature is approximately perpendicular to the major axis of the dust disk so it is also possible that it may be a small outflow or jet.
### Signature of a recent outburst
Understanding protostellar feedback and how newborn stars clear their surroundings is a key question in star formation and bears on the origin of the stellar mass function. Neither the CO, SO, nor any other line has high-velocity line wings indicative of an unbound outflow. Moreover, we do not detect SiO 5-4 in the passband, although it is detected in the powerful jets from several other eDisk sources. It is therefore unclear what caused the \(\sim 10^{3}\,\mathrm{au}\) cavity blown out of the envelope. However, inspection of the CO channel maps reveals a possible clue to the recent accretion and outburst history of IRS43.
Figure 7 presents channel maps for the redshifted side of the envelope emission. We find a ring-like structure with a common spatial center but varying radius at different velocities (linearly proportional to the relative velocity difference), outlined by white dashed ellipses in each subplot. Remarkably, this ring center lies at the extrapolated position of the binary 130 years ago, as determined from its proper motion, \(\mu_{\alpha}=-7.6\), \(\mu_{\delta}=-25.3\,\mathrm{mas}\,\mathrm{year}^{-1}\) (Brinch et al., 2016), and illustrated by the solid blue line and open circle. We expect the molecular envelope to be co-moving with the stars but, if something disrupted the surrounding gas, there could be features that decouple from this common motion. Consequently, we interpret this CO feature as an expanding molecular ring that is the remnant of a protostellar outburst event that occurred at the end of the \(19^{\mathrm{th}}\) century.
We can make a rough estimate of the energy of the bubble from its size, \(\sim 500\,\mathrm{au}\) in radius, and expansion speed, \(\sim 500\,\mathrm{au}/130\,\mathrm{yr}=20\,\mathrm{km}\,\mathrm{s}^{-1}\). If the density of the pre-burst gas was \(n_{\mathrm{H_{2}}}=10^{3}\,\mathrm{cm}^{-3}\), then the mass of gas in the bubble is about \(1\,M_{\oplus}\) and its total kinetic energy is \(\sim 2\times 10^{40}\,\mathrm{erg}\). This is about an order of magnitude greater than the total energy of the X-ray super-flare observed by Grosso et al. (1997), \(L_{\mathrm{X}}\sim 10^{1.5}\,L_{\odot}\) over a couple of hours. However, for a typical mechanical efficiency of a few percent for converting from stellar outburst scales to the ISM, the total energy requirement is perhaps similar to an EXor event but less than a FUor (Hartmann et al., 2016).
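The arithmetic behind this estimate is reproduced below; the mean molecular mass of 2.8 \(m_{\rm H}\) per H\({}_{2}\) molecule (including helium) is an assumption, and the pre-burst density is the \(n_{\rm H_{2}}=10^{3}\,\mathrm{cm^{-3}}\) value adopted in the text.

```python
import numpy as np

au, yr, m_H, M_earth = 1.496e13, 3.156e7, 1.673e-24, 5.972e27   # cgs
mu = 2.8                                   # assumed mean mass per H2 molecule (incl. He), in m_H

R = 500 * au                               # bubble radius
v = R / (130 * yr)                         # expansion speed, ~18 km/s (rounded to 20 in the text)
n_H2 = 1.0e3                               # pre-burst gas density, cm^-3
mass = (4.0 / 3.0) * np.pi * R**3 * n_H2 * mu * m_H
E_kin = 0.5 * mass * v**2

print(v / 1e5, mass / M_earth, E_kin)      # ~18 km/s, ~1.4 M_earth, ~1.5e40 erg (~2e40 for 20 km/s)
```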
## 4 Discussion
The two disks around VLA1 and VLA2 are remarkably compact, just a few au in radius. They are the smallest in the eDisk sample (Ohashi et al., 2023) and may be the smallest yet measured with ALMA. It is well known that binary systems can severely truncate disks (Artymowicz and Lubow, 1994) but the \(74\,\mathrm{au}\) separation of the two sources appears to be too large for this to be the explanation. For the \(\sim 3\,\mathrm{au}\) disk around VLA2 to be dynamically affected, the semi-minor axis of the binary orbit would have to be no greater than \(9\,\mathrm{au}\), which implies a very high orbital eccentricity greater than \(0.88\). Interestingly, Manara et al. (2019) came to a similar conclusion for disks in multiple systems in Taurus. In this case, however, the positions of the two sources have been accurately measured over \(25\,\mathrm{yr}\) (5% of the orbital period) and are consistent with a circular orbit (Brinch et al., 2016).
ALMA surveys of Class II objects demonstrate that many dust disks are quite small, \(\lesssim 20\,\mathrm{au}\) in radius (Manara et al., 2022). A possible explanation is the radial drift of millimeter and larger sized grains with high Stokes numbers and a lack of pressure bumps to resist it (Facchini et al., 2019). Drift timescales are very short so this could indeed be important for protostellar disks (Tobin et al., 2020).
The high brightness temperature in VLA1 shows that the continuum emission is likely optically thick (Figure 3) and its mass may well be significantly underestimated. VLA2 is about 10 times fainter so the optical depth correction may be small and it may well have low mass. However, this is hard to reconcile with its high accretion rate (Greene and Lada, 2002). Infall from the envelope and rapid transport through the disk may explain this. Alternatively, if most of the mass were extremely centrally concentrated, into a \(\sim 1\,\mathrm{au}\) radial region, the emission would be both optically thick and beam diluted to a low brightness temperature in the \(\sim 0\farcs 05\) beam.
Infrared observations show that the high visual extinction to VLA2, \(A_{\rm V}=40\) mag, is much greater than the inferred value from the mean column density of the large scale circumbinary envelope, \(N_{\rm H_{2}}\sim 10^{20}\) cm\({}^{-2}\) (\(A_{\rm V}\simeq 0.2\) mag), which is additional evidence for localized dust around the protostar. Moreover, strong CO ice absorption at 4.67 \(\mu\)m reveals the presence of dense, cold molecular gas enveloping both stars (Herczeg et al., 2011).
A long standing problem in star formation is the low average luminosity of protostars despite the need to gain of order a solar mass in a million years (Kenyon & Hartmann, 1995). Punctuated bursts of accretion are a potential solution and a handful of protostars have been observed to dramatically brighten due to such an event (Hartmann et al., 2016). Although stars may indeed grow "when we are not looking", here we have found that associated outburst events may leave detectable signatures in the structure and kinematics of their surroundings. This was only possible due to the long-term astrometric monitoring of this binary system and consequent accurate measurements of its proper motion. Now, a decade into the ALMA era, there is potential to measure protostellar proper motions for other sources and search for fossil interactions with their surroundings.
It is unclear what caused the outburst 130 years ago. It does not appear to be a close passage by the other member of the binary since the orbital period is much longer, \(\sim 450\) yr, although it is worth considering that the system is part of a moderately clustered region where star formation is influenced by interactions and much more dynamic than the classical picture of isolated core collapse (Bate, 2012). Rather, it is likely related to the highly active nature of one of the individual stars, as witnessed by the recent X-ray superflare. The analysis here was limited by the small disk sizes and lack of detectable lines in the ALMA data, but the system has strong CO ro-vibrational line emission and absorption in the M-band at 4 \(\mu\)m (Herczeg et al., 2011) which may provide an alternative means to study the disk properties.
The eDisk survey of 12 Class 0 and 7 Class I objects revealed a diverse set of disk properties (Ohashi et al., 2023). Ranked by bolometric temperature, IRS43 is one of the more evolved sources but has the smallest disks in the sample. It is one of four binary systems and has the smallest projected separation though not by a significant margin. The circumstellar envelope surrounding the source is punctured by a large cavity but there are no signs of molecular outflows from either of the two protostars, in contrast to the rest of the sources in the survey. IRS43 mostly stands out due to its compact disks and high X-ray activity, properties that might possibly be related.
Figure 7: Channel maps of CO for the red-shifted side of its emission. The velocity of each channel is indicated at the top right of each panel in units of km s\({}^{-1}\). The maps are centered on VLA1 (red cross) and the cyan line shows the proper motion of the source projected back in time. The white dashed ellipses, which have the same inclination and center (cyan open circle) and radii that scale linearly with velocity, outline a ring-like feature in each channel. We interpret this as an expanding bubble centered on the location of the binary 130 years ago.
## 5 Summary
These observations of Oph IRS43, a Class I protobinary system, are part of a homogeneous dataset of 19 embedded protostellar disks in the eDisk ALMA Large Program. Our main results are as follows:
* We detect disks around each member of the IRS43 binary in millimeter continuum emission. The northern source, VLA1, is about 10 times brighter than the southern source, VLA2, opposite to how they appear in the infrared. Both disks are extremely small, just a few au in radius, and their masses are low but, especially for VLA1, likely underestimated due to high optical depth. A third disk is detected around the Class II source, GY 263, that lies within the ALMA field-of-view and is found to be a transition disk with a \(\sim 20\) au radius cleared central cavity.
* We map a flattened circumbinary envelope over \(\sim 2000\) au in the East-West direction in the dust continuum and multiple molecular species, \({}^{13}\)CO, C\({}^{18}\)O, H\({}_{2}\)CO, and SO. The CO emission extends in the North-South direction and delineates the rim of a wide cavity. The envelope is moving faster than expected for Keplerian rotation and we deduce that the motions must include a component of infall onto the system. The shock as it hits the disk may explain the enhanced SO emission toward VLA1.
* We discovered an expanding ring of CO emission in the channel maps with a center that sits at the projected position of the system 130 years ago. We interpret this as the signature of a fossil outburst with an energy estimated to be greater than the X-ray superflare observed in 1995 but lower than FU Ori type events.
The eDisk survey has revealed a wide range of disk properties in embedded protostellar systems. Here, we have presented the results from a first look at the data for one individual system. Future work will include looking for trends across the sample. However, it is already clear that disks form and evolve in quite heterogeneous ways. This has ramifications for studies of subsequent stages from the T Tauri Class II phase all the way to exoplanets. Understanding the nature of survey outliers such as the compact disks in IRS43 will be an important part of a complete picture of star and planet formation.
This paper makes use of the following ALMA data: ADS/JAO.ALMA#2019.1.00261.L. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSTC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Funding support is acknowledged from NSF GRFP grant No. 2236415 and NSF AST-2107841 (SN), NSF AST-2107841 (JPW), NASA RP 80NSSC22K1159 (JJT), NSF AST-2108794 (LWL), NSTC 109-2112-M-001-051, 110- 2112-M-001-031 (NO, CF), NASA 80NSSC20K0533 and NSF AST-1910106 (ZYL), Independent Research Fund Denmark grant No. 0135-00123B (JKJ, RS), PID2020-114461GB-I00 funded by MCIN/AEI/10.13039/501100011033 (IdG), NSTC 110-2628-M-001-003-MY3 from the Taiwan National Science and Technology Council and AS-CDA-111-M03 from the Academia Sinica Career Development Award (H-WY). CASA (McMullin et al., 2007), astropy (The Astropy Collaboration, 2013, 2018).
|
2309.00699 | Geometric Deep Learning: a Temperature Based Analysis of Graph Neural
Networks | We examine a Geometric Deep Learning model as a thermodynamic system treating
the weights as non-quantum and non-relativistic particles. We employ the notion
of temperature previously defined in [7] and study it in the various layers for
GCN and GAT models. Potential future applications of our findings are
discussed. | M. Lapenna, F. Faglioni, F. Zanchetta, R. Fioresi | 2023-09-01T18:42:53Z | http://arxiv.org/abs/2309.00699v1 | # Geometric Deep Learning: a Temperature Based Analysis of Graph Neural Networks
###### Abstract
We examine a Geometric Deep Learning model as a thermodynamic system treating the weights as non-quantum and non-relativistic particles. We employ the notion of temperature previously defined in [7] and study it in the various layers for GCN and GAT models. Potential future applications of our findings are discussed.
Keywords: Geometric Deep Learning, Statistical Mechanics, Machine Learning.
## 1 Introduction
Machine learning and statistical mechanics share a common root; starting from the pioneering works by Jaynes ([9]) and Hopfield ([8]), up to the visionary theory of (deep) Boltzmann machines ([1], [14]), it is clear there is a common ground and the understanding of statistically inspired machine learning models can bring a new impulse to the field. The powerful language of statistical mechanics, connecting the elusive microscopic and measurable macroscopic physical quantities seems the perfect framework to tackle the difficult interpretation questions that the successful deep neural networks present. Indeed many researchers (see [4], [5] and refs. therein) have conducted a thermodynamic study, through analogies, of various actors in the most popular algorithms, Deep Learning above all, and yet such analogies were not able to fully elucidate the mechanisms of generalization and representability that still elude our understanding. Along the same vein, new mathematical modeling, inspired by thermodynamics, brought along new interesting mathematics, see [3], [13], and in particular [15], that seems especially suitable to model the dissipative phenomenon we observe in the SGD experiments.
The purpose of our present paper is to initiate this thermodynamic analysis for Geometric Deep Learning algorithms along the same lines as our previous works [7] and [11] on more traditional Convolutional Neural Networks (CNN). We shall treat the parameters of the model as a thermodynamic system of particles, and we exploit the sound notion of temperature we have previously given in
[7] and [11]. Then, we study the temperature of the system across layers in Graph Convolutional Networks (GCN) [23] and Graph Attention Networks (GAT) [25].
Our paper is organized as follows. In Sec. 2 we briefly recall the correspondence between thermodynamics concepts and neural networks ones ([7]). In Sec. 3 we present a Geometric Deep Learning model on the MNIST Superpixels dataset ([18]) and we study the temperature of layers comparing with the behaviour found for the CNN architecture in [7] and [11]. In particular, we study the dependence of temperature from the two hyperparameters learning rate and batch size at the end of the training, when loss and accuracy have reached their equilibrium values. We also analyse the dynamics of the weights inside a single Graph Convolutional layer. We consider both a GCN and a GAT models and compare the results. In Sec. 4 we draw our conclusions and we lay foundations for future work.
## 2 Thermodynamics and Stochastic Gradient Descent
We briefly summarize the thermodynamic analysis and modelling appearing in [4], [7], and [11].
Stochastic Gradient Descent (SGD) and its variations (e.g. Adam) are common choices, when performing optimization in Deep Learning algorithms.
Let \(\Sigma=\{z_{i}\,|1\leq i\leq N\}\subset\mathbb{R}^{D}\) denote a dataset of size \(N\), i.e., \(|\Sigma|=N\), \(L=(1/N)\sum L_{i}\) the loss function, with \(L_{i}\) the loss of the \(i\)-th datum \(z_{i}\) and \(\mathcal{B}\) the minibatch. The update of the weights \(w=(w_{k})\in\mathbb{R}^{d}\) of the chosen model (e.g., Geometric Deep Learning model), with SGD occurs as follows:
\[w(t+1)=w(t)-\eta\nabla_{\mathcal{B}}L(w),\quad\text{with}\quad\nabla_{ \mathcal{B}}L:=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\nabla L_{i} \tag{1}\]
where \(\eta\) denotes the learning rate. Equation (1) is modelled in [4] by the stochastic ODE (Ito formalism [24]) expressed in its continuous version as:
\[dw(t)=-\eta\nabla L(w)dt+\sqrt{2\zeta^{-1}D(w)}dW(t) \tag{2}\]
where \(W(t)\) is the Brownian motion term modelling the stochasticity of the descent, while \(D(w)\) is the diffusion matrix, controlling the anisotropy of the diffusivity in the process. The quantity \(\zeta=\eta/(2|\mathcal{B}|)\) is called the _temperature_ in [4]. It accounts for the "noise" due to SGD: small minibatch sizes or a high learning rate will increase the noise in the trajectories of the weights during training.
In [7], the time evolution of the parameters is written in continuous and discrete version as:
\[dw(t)=-\eta\nabla_{\mathcal{B}}L(w)dt,\qquad w(t+1)=w(t)-\eta\nabla_{ \mathcal{B}}L(w) \tag{3}\]
The stochastic behaviour modelled by (3) is then accounted for introducing a microscopic definition of temperature mimicking Boltzmann statistical mechanics.
We first define the _instantaneous temperature_\(\mathcal{T}(t)\) of the system as its kinetic energy \(\mathcal{K}(t)\) divided by the number of degrees of freedom \(d\) (in our case the dimension of the weight space) and a constant \(k_{B}>0\) to obtain the desired units:
\[\mathcal{T}(t)=\frac{\mathcal{K}(t)}{k_{B}\,d}=\frac{1}{k_{B}\,d}\sum_{k=1}^{d }\frac{1}{2}m_{k}\,v_{k}(t)^{2} \tag{4}\]
where \(v_{k}(t)\) is the instantaneous velocity of one parameter, computed as the difference between the value of the \(k^{\rm th}\) parameter at one step of training and its value at the previous step (the shift in time \(\Delta t\) is unitary since we are computing the instantaneous velocity between consecutive steps or epochs):
\[v_{k}(t)=\frac{w_{k}(t)-w_{k}(t-1)}{\Delta t} \tag{5}\]
In the formula for the kinetic energy, \(m_{k}\) is the mass of parameter \(w_{k}\) and we set it to 1. This is because we do not know whether the different roles of the parameters can be modelled through a parallelism with the concept of mass.
The _thermodynamic temperature_ is then the time average of \(\mathcal{T}(t)\):
\[T=\frac{1}{\tau}\int_{0}^{\tau}\mathcal{T}(t)\,dt=\frac{1}{\tau k_{B}\,d}\int _{0}^{\tau}\mathcal{K}(t)\,dt=\frac{K}{k_{B}\,d} \tag{6}\]
where \(K\) is the average kinetic energy and \(\tau\) is an interval of time long enough to account for small variation in temperature. In [7] we interpret this system as evolving at constant temperature: at each step the temperature is reset (analogy with system in contact with heat reservoir). Hence we do not have a _constant energy_ dynamics, as it is commonly referred to in atomic simulations, but we are faced with a dissipative effect occurring at each step.
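As a minimal sketch (assuming the flattened layer weights are snapshotted once per step or epoch, with unit masses and \(k_{B}=1\) as above), the quantities in Eqs. (4)-(6) can be computed as follows:

```python
import numpy as np

def layer_temperature(snapshots, k_B=1.0):
    """snapshots: array of shape (steps, d) holding the flattened weights of one
    layer recorded at consecutive steps. Returns the thermodynamic temperature T
    of Eq. (6) and the instantaneous temperatures of Eq. (4), with m_k = 1."""
    v = np.diff(snapshots, axis=0)            # v_k(t) = w_k(t) - w_k(t-1), Eq. (5)
    d = snapshots.shape[1]
    kinetic = 0.5 * np.sum(v ** 2, axis=1)    # K(t) = sum_k (1/2) v_k(t)^2
    T_inst = kinetic / (k_B * d)              # Eq. (4)
    return T_inst.mean(), T_inst              # Eq. (6): time average over the window
```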
The thermodynamic analysis performed in [7] and summarized here implies that with SGD we have a residual velocity for each particle even after equilibrium is reached. Our system does not evolve according to Newtonian dynamics and, in particular, the mechanical energy is not constant. The fact that we maintain a residual temperature at equilibrium under a constant temperature evolution means that we achieve a minimum of the free energy, not of the potential energy, i.e., our loss function. This fact is stated in [4] and is well known among the machine learning and information geometry communities (see also [3]).
Let us summarize the key points of the system dynamics. We have:
- No constant mechanical energy \(K+V\), where \(K=\sum_{k=1}^{d}(1/2)m_{k}v_{k}^{2}\) is the kinetic energy and \(V=L\) (\(L\) the loss function) is the potential energy;
- No maximization of entropy.
All of this is due to the stochasticity of SGD, which is enhanced by small minibatch sizes and high learning rates, as we shall elucidate in our experimental section together with an analysis of the temperature in the layers.
## 3 Experiments
In this section we perform experiments with Geometric Deep Learning models on the MNIST Superpixels PyTorch dataset ([18]), in order to test the dependence of the thermodynamic temperature on the hyperparameters learning rate and batch size. We examine the temperature of different layers and we look at the mean squared velocities of the weights of a single layer. We also analyze some key differences with the findings in [7] and in [11], where we proposed pruning techniques based on the notion of temperature.
We choose to investigate the behaviour of two separate and important Graph Neural Network (GNN) architectures. First, we implement a model using Graph Convolutional Network (GCN) layers from [23]. Then, we make a comparison with a Graph Attention Network (GAT) model employing an attention mechanism during the convolution [25]. We stress that, in the literature, GAT models have outperformed GCNs in classification problems on superpixel images. We consider both models in order to compare the weights' dynamics and reason about the thermodynamic modelling we propose.
### GCN Architecture.
The architecture we use consists of four GCNConv layers, followed by the concatenation of a mean and a max pooling layer, and finally a dense layer (Fig. 1). We use tanh as the activation function.
We do not apply batch normalization ([17]) to the convolutional layers, since normalizing the weights could bias our experiments (compare with [7]). We optimize the network with a Cross Entropy loss and the Adam optimizer (a variant of SGD), without any form of weight regularization.
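A minimal PyTorch Geometric sketch of this architecture is shown below; the hidden width of 64 matches the \(64\times 64\) GCNConv weight matrix discussed later, while the input feature dimension and the number of classes are assumptions for MNIST Superpixels.

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool, global_max_pool

class GCNClassifier(torch.nn.Module):
    def __init__(self, in_dim=1, hidden=64, num_classes=10):
        super().__init__()
        dims = [in_dim] + [hidden] * 4
        self.convs = torch.nn.ModuleList(
            [GCNConv(dims[i], dims[i + 1]) for i in range(4)]
        )
        # mean- and max-pooled graph embeddings are concatenated, hence 2 * hidden
        self.head = torch.nn.Linear(2 * hidden, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = torch.tanh(conv(x, edge_index))      # tanh activations, no batch normalization
        pooled = torch.cat(
            [global_mean_pool(x, batch), global_max_pool(x, batch)], dim=1
        )
        return self.head(pooled)
```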
We train this model on the MNIST Superpixels dataset obtained in [18], where the 70,000 images from the original MNIST dataset were transformed into graphs with 75 nodes, each node corresponding to a superpixel. We train the model for 600 epochs, starting with a learning rate of \(10^{-3}\) and decaying it
Figure 1: Architecture of our GCN model in PyTorch framework. In the parenthesis next to each layer, the first and second number indicate the embedding dimension of the input and output respectively. The final linear layer takes as input an embedding which is twice the size of the output from the previous layer, due to concatenation of the two pooling operations.
by a factor of 10 every 200 epochs. After training, the model reaches an accuracy of 64%, which is worse than the performance obtained on the same dataset in [18]. We think this is due to the fact that our architecture is much simpler than MoNET ([19]) used in [18]. Once equilibrium of accuracy and loss is reached, we further train the model for 100 epochs and we focus our thermodynamic analysis on these last epochs at equilibrium. In particular, to investigate the dependence of temperature on learning rate and batch size, we further train the same equilibrium model by changing either the learning rate or the batch size.
In Fig. 2 and 3, we show the behaviour of the temperature \(T\), as defined in our previous section, depending on the learning rate \(\eta\) and the inverse of the batch size \(1/\beta\). We try values of learning rate in the range from \(7\cdot 10^{-4}\) to \(3\cdot 10^{-3}\) (batch size fixed to 32) and values of batch size in the range from 8 to 128 (learning rate fixed). We stress that, for each layer of the architecture, the temperature was computed as the mean kinetic energy of the weights averaged over the 100 equilibrium epochs.
Although in the literature ([4] and refs. therein) the temperature \(\zeta\) (Equation 2) is commonly believed to be proportional to such parameters, we observe quite a different behaviour. As in [7], the dependence of temperature on the learning rate is parabolic for the linear layer, whereas it is almost parabolic for
Figure 2: Temperature dependence from the learning rate for selected layers of the GCN architecture (the other behaving similarly). The equation resulting from the fit is shown in the top left.
the GCNConv layers, where the exponent of \(x\) in the fit is greater than 2 (see Fig. 2). As for the dependence on the batch size, we notice that all layers of the architecture exhibit a linear dependence of temperature on \(1/\beta\). This is in line with the results obtained by [4], but differs from the ones in [7]. Indeed, in [7], the linear dependence of the temperature on \(1/\beta\) appears only for the output linear layer, while the other layers exhibit an essential non-linearity. However, overall, as for the CNN architecture previously studied ([7], [11]), we find that, given our thermodynamic definition of temperature, \(T\) is not proportional to \(\eta/\beta\).
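The exponents quoted here can be extracted with a simple power-law fit of \(T=a\,\eta^{b}\) in log-log space; the sketch below is generic, and the \((\eta,T)\) pairs are placeholders for the measured per-layer temperatures.

```python
import numpy as np

def fit_power_law(eta, T):
    """Least-squares fit of T = a * eta**b in log-log space; returns (a, b).
    An exponent b close to 2 corresponds to the parabolic dependence of the linear layer."""
    b, log_a = np.polyfit(np.log(eta), np.log(T), deg=1)
    return np.exp(log_a), b

# Illustrative usage with placeholder measurements for one layer:
# a, b = fit_power_law(np.array([7e-4, 1e-3, 2e-3, 3e-3]), T_measured)
```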
Furthermore, if we look at the mean squared velocities of the weights over the epochs, without averaging on the number of weights, we discover quite an interesting behaviour. Inside the same layer, the weights do not show all the same mean squared velocity at equilibrium, but different rows of the weight matrix show completely different thermal agitation (Fig. 4). This is similar to what happens in [11], where we use the concept of temperature to distinguish between "hot" and "cold" filters in a CNN layer and we discover that high temperature filters can be removed from the model without affecting the overall performance. We believe the same reasoning can be applied here for GNN layers and it could imply that some rows of the weight matrix are redundant to the learning. We expect to use this analysis to eliminate useless features or to reduce the dimensions of the feature embedding and speed up the optimization.
Figure 3: Temperature dependence from the inverse of the batch size for selected layers of the GCN architecture.
### GAT Architecture.
The architecture we use is inspired by [20] and consists of three GATConv layers followed by a final mean pooling layer and three dense layers (Fig. 5). We take ReLU as activation function ([21]).
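A matching PyTorch Geometric sketch for this attention-based variant is given below; with 2 heads and concatenation, each layer following a GATConv takes an input twice the GAT output width, as noted in the caption of Fig. 5. The hidden widths and the number of classes are again assumptions.

```python
import torch
from torch_geometric.nn import GATConv, global_mean_pool

class GATClassifier(torch.nn.Module):
    def __init__(self, in_dim=1, hidden=32, num_classes=10, heads=2):
        super().__init__()
        self.gats = torch.nn.ModuleList([
            GATConv(in_dim, hidden, heads=heads),
            GATConv(hidden * heads, hidden, heads=heads),
            GATConv(hidden * heads, hidden, heads=heads),
        ])
        self.mlp = torch.nn.Sequential(            # three dense layers after mean pooling
            torch.nn.Linear(hidden * heads, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, num_classes),
        )

    def forward(self, x, edge_index, batch):
        for gat in self.gats:
            x = torch.relu(gat(x, edge_index))      # ReLU activations
        return self.mlp(global_mean_pool(x, batch))
```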
As for the previous GCN model, we do not apply batch normalization ([17]), dropout ([22]), or other forms of regularization to the weights. We optimize the network as described for the GCN and we obtain a test set accuracy of \(74\%\). To inspect the dependence of temperature on the two hyperparameters \(\eta\) and \(\beta\), we again train the equilibrium model for another \(100\) epochs and we restrict the analysis to these final epochs. In Fig. 6 and 7, we report the behaviour of the temperature dependence on the learning rate \(\eta\) and the inverse of the batch size \(1/\beta\) (range of values of the hyperparameters as for the GCN model). Similarly
Figure 4: Mean squared velocity for the weights of the second GCNConv layer. The weight matrix has dimensions \(64\times 64\) and different rows of the matrix show different temperature. The plot on the left is obtained by flattening the matrix.
Figure 5: Architecture of our GAT model in PyTorch framework. In the parenthesis next to each layer, the first number and second number indicate the embedding dimension of the input and output respectively. The third parameter indicates the number of heads. Since each GAT in the architecture has \(2\) heads, each layer following a GAT has input dimension twice the size of the output from the previous GAT layer.
to the model without attention, the dependence of \(T\) on \(\eta\) is parabolic for the final linear layer and almost parabolic for the other GATConv and linear layers, since for these layers the exponent of \(x\) in the fit is greater than 2 (Fig. 6). Furthermore, the dependence of \(T\) on \(1/\beta\) is again almost linear for every layer (only the final linear layer shows a more parabolic behaviour).
## 4 Conclusions
We investigate the parallelism between SGD dynamics and thermodynamic systems, extending the study in [7] and [11] to Geometric Deep Learning algorithms. Experiments show that, as in the Deep Learning setting of CNN architectures ([7], [11]), the temperatures of the layers in Geometric Deep Learning models behave independently. The temperature of the linear layers in the models considered behaves similarly to that of the linear layers in the CNN models studied in [7] and does not exhibit any simple dependence on \(\eta\) and \(\beta\) as originally assumed in [4]. On the contrary, the temperature of the GCN and GAT "convolutional" layers behaves differently from that of the CNN layers considered in [7]: while the dependence of temperature on \(\eta\) is again almost parabolic, the dependence on \(\beta\) becomes linear. Furthermore, we find that different areas of GCNConv and GATConv layers have different temperatures (see
Figure 6: Dependence of temperature from the learning rate for some layers of the GAT architecture.
Fig. 4) as observed for the filters of a CNN. This suggests a future technique of parameter pruning, based on temperature, as in [11], that may help speed up the optimization and make it more effective. More mathematical and physical modelling is needed to further advance in this direction.
|
2307.00335 | Single Sequence Prediction over Reasoning Graphs for Multi-hop QA | Recent generative approaches for multi-hop question answering (QA) utilize
the fusion-in-decoder method~\cite{izacard-grave-2021-leveraging} to generate a
single sequence output which includes both a final answer and a reasoning path
taken to arrive at that answer, such as passage titles and key facts from those
passages. While such models can lead to better interpretability and high
quantitative scores, they often have difficulty accurately identifying the
passages corresponding to key entities in the context, resulting in incorrect
passage hops and a lack of faithfulness in the reasoning path. To address this,
we propose a single-sequence prediction method over a local reasoning graph
(\model)\footnote{Code/Models will be released at
\url{https://github.com/gowtham1997/SeqGraph}} that integrates a graph
structure connecting key entities in each context passage to relevant
subsequent passages for each question. We use a graph neural network to encode
this graph structure and fuse the resulting representations into the entity
representations of the model. Our experiments show significant improvements in
answer exact-match/F1 scores and faithfulness of grounding in the reasoning
path on the HotpotQA dataset and achieve state-of-the-art numbers on the
Musique dataset with only up to a 4\% increase in model parameters. | Gowtham Ramesh, Makesh Sreedhar, Junjie Hu | 2023-07-01T13:15:09Z | http://arxiv.org/abs/2307.00335v1 | # Single Sequence Prediction over Reasoning Graphs for Multi-hop QA
###### Abstract
Recent generative approaches for multi-hop question answering (QA) utilize the fusion-in-decoder method Izacard and Grave (2021) to generate a single sequence output which includes both a final answer and a reasoning path taken to arrive at that answer, such as passage titles and key facts from those passages. While such models can lead to better interpretability and high quantitative scores, they often have difficulty accurately identifying the passages corresponding to key entities in the context, resulting in incorrect passage hops and a lack of faithfulness in the reasoning path. To address this, we propose a single-sequence prediction method over a local reasoning graph (SeqGraph)1 that integrates a graph structure connecting key entities in each context passage to relevant subsequent passages for each question. We use a graph neural network to encode this graph structure and fuse the resulting representations into the entity representations of the model. Our experiments show significant improvements in answer exact-match/F1 scores and faithfulness of grounding in the reasoning path on the HotpotQA dataset and achieve state-of-the-art numbers on the Musique dataset with only up to a 4% increase in model parameters.
Footnote 1: Code/Models will be released at [https://github.com/gowtham1997/SeqGraph](https://github.com/gowtham1997/SeqGraph)
## 1 Introduction
Multi-hop Question Answering (QA) involves reasoning over multiple passages and understanding the relationships between those pieces of information to answer a question. Compared with single-hop QA, which often extracts answers from a single passage, multi-hop QA is more challenging as it requires a model to determine the relevant facts from multiple passages and connect those facts for reasoning to infer the final answer.
To tackle multi-hop QA, recent works have investigated large pretrained _generative_ models Lewis et al. (2020); Roberts et al. (2020); Brown et al. (2020) and demonstrated their effectiveness over traditional _extractive_ models Chen et al. (2017). Compared with extractive models, the ability of generative models to effectively aggregate and combine evidence from multiple passages proves advantageous for multi-hop QA. In particular, Izacard and Grave (2021) propose a method called FiD (Fusion-in-Decoder), which leverages passage retrieval with a generative model, such as T5 Raffel et al. (2020) or BART Lewis et al. (2020), to achieve state-of-the-art performance on various single-hop QA tasks. However, this approach does not extend well to multi-hop QA tasks Yavuz et al. (2022), as it solely relies on a black-box generative model to generate answers directly without explicitly modeling the multi-hop reasoning process. Additionally, FiD encodes multiple context passages independently for multi-hop QA, ignoring the structural and semantic relationship between these passages Yu et al. (2022). Building on FiD, Path-FiD Yavuz et al. (2022) addresses the interpretability issue by training a model to generate a reasoning path that contains supporting passage
Figure 1: Localized graph construction connecting entity spans to corresponding passages in the context. If there are multiple passages with the same title, we connect the entity span to all such passages.
titles, facts, and the final answer. However, our analysis of Path-FiD outputs shows _disconnected reasoning_ with incorrect passage hops in the model's reasoning path, which affects final answer generation. Recently, there have been multiple techniques Jiang and Bansal (2019); Lee et al. (2021); Ye et al. (2021) to counter disconnected reasoning which operate at the dataset level, using adversarial training, adding extra annotations, or using dataset rebalancing for training. While these approaches optimize models to mitigate disconnected reasoning Trivedi et al. (2020), the performance on the original test set often suffers from a significant decrease.
In this paper, we propose a single-**seq**uence prediction method over a local reasoning **graph** (SeqGraph) that integrates a graph structure connecting key entities in each context passage to relevant subsequent passages for each question. Different from the prior works, our method not only mitigates the disconnected reasoning issue but also maintains robust performance on the original dataset. Intuitively, for each multi-hop question, our method leverages the structural relationship between different passages to learn structured representations through a graph neural network (GNN) Hamilton et al. (2017); Kipf and Welling (2017). The structured representations are fused to bias the generative model toward predicting a faithful, connected reasoning path which improves answer predictions. Our experiments on the Hotpot-QA dataset Yang et al. (2018) show clear improvements in exact-match(EM)/F1 scores compared to generative baselines in the _distractor_ setting while minimizing disconnected reasoning quantified by the DiRe score Trivedi et al. (2020). We also achieve the state-of-the-art performance on the Musique-Answerable test dataset Trivedi et al. (2022) with a 17-point improvement in answer F1 over the current best-performing model in the end-to-end (E2E) category.
To summarize, our contributions are as follows:
* We propose an interpretable single-**seq**uence prediction approach over local reasoning **graph**s, SeqGraph, to bias the model representations toward predicting faithful, connected reasoning paths.
* SeqGraph achieves notable performance improvements on two multi-hop QA benchmarks, Hotpot-QA and Musique (SOTA), with only a minimal increase in the model size.
* SeqGraph reduces disconnected reasoning as measured by DiRe score while maintaining strong performance gains on the original dataset.
## 2 Preliminaries
Problem Setup: In a multi-hop QA task, each QA pair in a labeled dataset \(\mathcal{D}\) is given along with a set of \(N\) passages, \(\mathcal{P}_{q}=\{p_{1},p_{2},...,p_{N}\}\), _i.e._, \((q,a,\mathcal{P}_{q})\in\mathcal{D}\), where a passage has its title and content \(p_{i}=(t_{i},c_{i})\). The task is to learn a model parameterized by \(\theta\) to generate an answer string \(a\) for the given question \(q\) and \(\mathcal{P}_{q}\).
In this paper, we focus on the _distractor_ setting, where \(\mathcal{P}_{q}\) is given for each question and contains \(m\) distractors that are not useful to the answer prediction. Thus, this task requires a model to reason over multiple hops of the remaining \(N-m\) relevant passages. In addition to predicting the final answer \(a\), we also aim to train a model to predict a _reasoning path_\(R\) of important elements (_e.g._, relevant passage titles, supporting facts in a passage) that lead to the final answer.
Multi-hop QA as Single Sequence Generation: Recent generative question answering (QA) approaches (e.g., FiD Izacard and Grave (2021), Path-FiD Yavuz et al. (2022)) utilize an encoder-decoder model as the backbone to generate answers in a single text sequence. In particular, FiD is one of the popular formulations.
Specifically, for each passage \(p_{i}=(t_{i},c_{i})\in\mathcal{P}_{q}\) of a question \(q\), FiD encodes a combined sequence of the question, the passage title and contents into an embedding. These embeddings for all passages are concatenated as inputs to the decoder for generating the final answer.
Path-FiD builds upon this by explicitly modeling a reasoning path as part of the generation output in addition to the answer. Specifically, special index tokens \([f_{i}]\) are added to demarcate all sentences in each passage context. The sentences supporting the prediction of a final answer are considered facts. The decoder is then trained to generate the reasoning path \(R\) as a linearized sequence consisting of the passage titles and the index tokens of facts used within those passages to obtain the final answer. Figure 1 shows an example of a reasoning path.
Disconnected Reasoning in Path-FiD: Since the model predictions now include the reasoning path, we can analyze which facts in the passage are utilized by the model to determine the next passage to hop to and arrive at the final answer. For a perfectly faithful model, all predictions with
correct answers should have correctly identified passages and facts. However, due to the presence of shortcuts in the datasets as well as the model's predicted reasoning path not being faithful, we observe model predictions containing correct final answers but incorrect identification of passage titles or facts. This unfaithful prediction issue is referred to as _disconnected reasoning_[17]. Different from Path-FiD, we use the presence of a local graph structure between different passages in the context to bias the representations of the model and help alleviate this problem.
## 3 Method
In this section, we describe our proposed method for solving disconnected reasoning for multi-hop QA in the _distractor_ setting.
Overview: Our method first constructs a local graph over passage contexts for each question (§3.1), and integrates the graph information with the key entities to improve the generation of reasoning paths (§3.2). Different from the prior works that encode all the passages independently, we connect the passages through the key pivot entities into a local graph for a question, which allows us to encode structural representations across passages by a graph neural network. These graph structured representations are then fused with the contextualized text representations from a text encoder, guiding the model to leverage structural information to alleviate disconnected reasoning over passages.
### Graph Construction
In contrast to the _full-wiki_ setting where a model must retrieve relevant passages from Wikipedia or a large corpus, the distractor setting provides the model with a list of \(N\) passages \(\mathcal{P}_{q}\) consisting of \(N-m\) relevant passages and \(m\) distractors for each question \(q\). Conventionally, these passages are collected from Wikipedia, as Wikipedia remains one of the largest faithful knowledge sources available for public usage. Even for text passages out of Wikipedia, there are existing out-of-box entity linkers (e.g., SLING [14], BLINK [20]) that can identify key entities from texts and link them to their Wikipedia pages. As a result, each provided passage may contain pivot entities with hyperlinks connecting to their corresponding Wikipedia pages. We exploit such entity hyperlinks to construct a local directed graph \(\mathcal{G}=(\mathcal{N},\mathcal{L})\) containing two types of nodes (_i.e_., entities and passage titles) and links between these nodes. Specifically, for each pivot entity \(e\) in a passage \(p_{i}\), we create a link from \(e\) to the title \(t_{j}\) of another passage \(p_{j}\) (denoted as \(l_{e\to t_{j}}\)) whenever the entity span \(e\) points to a Wikipedia article that contains the passage \(p_{j}\).
For example, an entity span _"David Noughton"_ appears in the passage context: _"An American Werewolf in London is a 1981 horror comedy film starring David Noughton, Jenny Agutter...."_
This entity would be connected to a passage with the title of _"David Walsh Noughton"_, forming the link (David Noughton[Entity] \(\rightarrow\) David Walsh Noughton[Passage]). If there are multiple passages with the title _"David Walsh Noughton"_ among the \(N\) passages, the entity span would be connected to all of them with distinct links. Figure 1 shows an example of an entity-passage graph.
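A sketch of this construction is shown below, assuming each passage comes with its title and a list of (entity span, linked Wikipedia title) pairs obtained from the hyperlinks or an off-the-shelf entity linker; the field names and node labels are illustrative.

```python
import networkx as nx

def build_local_graph(passages):
    """passages: list of dicts {"title": str, "entities": [(span_text, linked_title), ...]}.
    Adds a directed edge from every pivot-entity node to every passage node whose
    title matches the entity's link target (duplicate titles each get an edge)."""
    graph = nx.DiGraph()
    for j, p in enumerate(passages):
        graph.add_node(("title", j), text=p["title"])
    for i, p in enumerate(passages):
        for span, target in p["entities"]:
            graph.add_node(("entity", i, span))
            for j, q in enumerate(passages):
                if q["title"] == target:
                    graph.add_edge(("entity", i, span), ("title", j))
    return graph
```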
### Entity-to-Passage Fusion
Next, we describe how we encode such a local directed graph into vector representations for all nodes and fuse these node representations with the contextualized text representations of the corresponding entities from the language model.
We utilize the same model as Path-FiD with a pre-trained T5 model as our backbone architecture. The input for this method consists of the \(N\) sequences, where each sequence is a concatenation of the question \(q\), the title and contents of a passage \(p_{i}\) from the collection \(p_{i}\in\mathcal{P}_{q}\) together with their indicator tokens, denoted as \(S_{i}\) below:
\[S_{i}:=[\texttt{Question}]\;q\;[\texttt{Title}]\;t_{i}\;[\texttt{Content}]\;c_{i} \tag{1}\]
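In code, building the \(N\) inputs of Eq. (1) amounts to simple string formatting; the helper below is only illustrative and assumes the passages are given as (title, content) pairs.

```python
def build_inputs(question, passages):
    """Returns the N sequences S_i of Eq. (1), one per context passage."""
    return [
        f"[Question] {question} [Title] {title} [Content] {content}"
        for title, content in passages
    ]
```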
Given the T5's encoder of \(M\) transformer layers, we first encode \(S_{i}\) through the first \(L\) layers to obtain the intermediate hidden representations \(\mathbf{Z}_{i}^{L}\) in Eq. (2), which capture the shallow contextualized information of the input sequence.
\[\mathbf{Z}_{i}^{L}=\text{TextEncoder}(S_{i},L) \tag{2}\]
We utilize these shallow representations to initialize the node embeddings for a graph neural network. Specifically, we extract the representations of the entity spans or passage title spans (_i.e_., nodes in the graph \(\mathcal{G}\)) from \(\mathbf{Z}_{i}^{L}\) according to their span positions \([a,b]\) in \(S_{i}\). Next, for a text span \(S_{i,a:b}\) representing either an entity or a title in \(S_{i}\), we average the extracted representations of the text span to obtain an initial node embedding, _i.e_., \(\mathbf{n}=\text{avg}(\mathbf{Z}_{i,a:b}^{L})\). Finally, we stack the initial
embeddings for all nodes denoted as \(\mathbf{N}\) and apply a graph neural network (GNN) to further encode the structural embeddings on the graph \(\mathcal{G}\):
\[\mathbf{Z}^{G}=\text{GraphEncoder}(\mathbf{N},\mathcal{G}) \tag{3}\]
As we record the text span position \([a,b]\) for each node in \(\mathcal{G}\), we can leverage the node embeddings \(\mathbf{Z}^{G}\) to construct a new structured representation \(\mathbf{Z}^{G}_{i}\) (with the same size as \(\mathbf{Z}^{L}_{i}\)) for each sequence \(S_{i}\) where we fill in the node embeddings from \(\mathbf{Z}^{G}\) to their corresponding text span positions \([a,b]\) in \(S_{i}\) and fill in \(0\) to the other non-span positions.
Finally, we fuse the contextualized text representations \(\mathbf{Z}^{L}_{i}\) from the text encoder and the structured node representations \(\mathbf{Z}^{G}_{i}\) by an aggregation operator \(\oplus\), and pass them to the remaining layers of the text encoder to obtained the fused representations \(\mathbf{S}_{i}\) for each input sequence \(S_{i}\):
\[\mathbf{S}_{i}=\text{TextEncoder}(\mathbf{Z}^{G}_{i}\oplus\mathbf{Z}^{L}_{i },M-L) \tag{4}\]
In this work, the aggregation operator used is a simple addition. Complex aggregation mechanisms such as learning a weighted combination of the representations can be explored in future work.
We concatenate the fused representations \(\mathbf{S}_{i}\) from all of the \(N\) context sequences to form \(\mathbf{S}=[\mathbf{S}_{1};\mathbf{S}_{2}\cdots;\mathbf{S}_{N}]\).
Subsequently, \(\mathbf{S}\) is passed as inputs to the T5 decoder that estimates the conditional probability \(P_{\theta}(R|\mathbf{S})\) of predicting a reasoning path \(R\). Depending on the annotations in different datasets, a reasoning path \(R\) can take various formats. For example, the reasoning path takes the form "\(R:=[\texttt{title}]\;t_{i}\;[\texttt{facts}]\;f_{i}\;[\texttt{answer}]\;a\)" for Hotpot-QA and "\(R:=[\texttt{title}]\;t_{i}\;[\texttt{intermediate\_answer}]\) ans\({}_{i}\;[\texttt{answer}]\;a\)" for Musique. We also investigate variants of reasoning paths for Musique in our experiments. As we can construct ground-truth reasoning paths \(R^{*}\) during training, the model is optimized using a cross-entropy loss between the conditional probability \(P_{\theta}(R|\mathbf{S})\) and \(R^{*}\).
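The encoding and fusion steps of Eqs. (2)-(4) can be summarized with the following PyTorch-style sketch. The encoder-layer list, the GNN call, and the span bookkeeping are simplified stand-ins for the actual implementation (real T5 layers take and return additional arguments), so this should be read as pseudocode for the data flow rather than as the model code itself.

```python
import torch

def encode_with_graph_fusion(encoder_layers, embed, gnn, sequences, spans, graph, L):
    """sequences: token ids [N, seq_len]; spans: list of (i, a, b) positions, one per graph node."""
    # (1) shallow contextualized representations Z_i^L from the first L encoder layers
    Z = embed(sequences)                                 # [N, seq_len, d]
    for layer in encoder_layers[:L]:
        Z = layer(Z)

    # (2) initial node embeddings: average the hidden states over each entity/title span
    node_init = torch.stack([Z[i, a:b].mean(dim=0) for (i, a, b) in spans])

    # (3) structural node embeddings Z^G from the graph neural network (e.g. stacked GAT layers)
    node_emb = gnn(node_init, graph)

    # (4) scatter node embeddings back to their span positions (zeros elsewhere), fuse by addition,
    #     and run the remaining M - L encoder layers
    Z_graph = torch.zeros_like(Z)
    for n, (i, a, b) in enumerate(spans):
        Z_graph[i, a:b] = node_emb[n]
    S = Z + Z_graph
    for layer in encoder_layers[L:]:
        S = layer(S)

    # (5) concatenate the N fused sequences for the FiD-style decoder
    return S.reshape(1, -1, S.size(-1))
```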
Figure 2: Given a question and the supporting passages, we construct a localized entity-passage graph. The representations from the \(L^{th}\) layer of the language model are used to initialize the node embeddings of a graph neural network (GNN), which performs message passing on the constructed local graph. The representations for the entity spans and titles from the GNN are added to the LM representations and passed through the remaining \(M-L\) layers of the encoder. The T5 decoder performs cross-attention on the final hidden states from the encoder and generates the reasoning path with the final answer.
## 4 Experimental Setting
In this section, we elaborate on the datasets, the baseline models and the variants of SeqGraph we consider for our experiment settings. We consider two multi-hop QA datasets, Hotpot-QA and Musique. Since SeqGraph is primarily focused only on improving the efficacy of encoding, we consider only the _distractor_ setting for both datasets. Table 4 shows the standard train/dev/test statistics.
**Hotpot-QA:** The final answer to each question in the distractor setting is extracted from 10 passages. The dataset includes two main types of questions: bridge (80%) and comparison (20%). Bridge questions often require identifying a bridge entity in the first passage to correctly hop to the second passage that contains the answer, while comparison questions do not have this requirement. Each question is also provided with annotations of 2 supporting passages (2-hop) and up to 5 corresponding relevant sentences as its supporting facts.
**Musique:** Musique has questions that range in difficulty from 2 to 4 hops and six types of reasoning chains. Musique uses a stringent filtering process as well as a bottom-up technique to iteratively combine single-hop questions from several datasets into a \(k\)-hop benchmark that is more difficult than each individual dataset and significantly less susceptible to the disconnected-reasoning problem. Unlike Hotpot-QA, Musique does not provide annotations of relevant sentences, but it provides supporting passage titles, question decompositions (a decomposition of a multi-hop question into simpler 1-hop sub-questions), and intermediate answers to the decomposed questions. Given this variety, we train the model to generate the following reasoning path variants:
* DA: Question decomposition and final answer
* SA: Supporting titles and final answer
* SIA: Supporting titles, intermediate answers and final answer
* DSIA: Question decomposition, supporting titles, intermediate answers and final answer
Table 6 shows an example of different reasoning paths. While the last variant (predicting every decomposition/intermediate answer or support title) is more interpretable, it encounters the challenge of producing a long sequence. SIA is our best-performing reasoning path variant which is used for all of our results and analysis.
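As an illustration, a SIA-style target string for Musique could be serialized with a small helper of the following form (a hypothetical function; the bracketed indicator tokens follow the template above).

```python
def build_sia_target(support_titles, intermediate_answers, final_answer):
    """Serialize a Musique reasoning path in the SIA format:
    [title] t_i [intermediate_answer] ans_i ... [answer] a
    """
    parts = []
    for title, ans in zip(support_titles, intermediate_answers):
        parts += ["[title]", title, "[intermediate_answer]", ans]
    parts += ["[answer]", final_answer]
    return " ".join(parts)
```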
### Models in Comparison
Our main baselines are generative approaches to multi-hop QA that include and build upon the FiD approach. For all of the models, we use the pretrained T5 encoder-decoder as the backbone and consider two sizes--base and large variants.
* FiD: Model generation includes only the final answer.
* Path-FiD: Model generation includes the reasoning path as well as the final answer.
* SeqGraph: Model that utilizes a fusion of representations from the language model and the Graph Neural Network. Similar to Path-FiD, we train the model to generate the reasoning path in addition to the final answer.
### Evaluation Metrics
For both Hotpot-QA and Musique, we use the standard quantitative metrics of exact-match and F1 scores to evaluate the quality of predicted answers. For models that predict the reasoning path in addition to the final answer, we can quantify how accurately they can identify the supporting facts (or supporting titles for Musique) using the Support-EM and Support-F1 scores Yang et al. (2018).
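For reference, the answer-level exact-match and F1 scores are the standard token-overlap metrics; a minimal sketch with simple lowercasing, punctuation stripping, and article removal is shown below, which may differ in small details from the official evaluation scripts.

```python
import re
import string
from collections import Counter

def normalize(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```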
To quantify the level of disconnected reasoning, we compute DiRe F1 scores on the answer spans (**Answer**), supporting paragraphs (**Supp\({}_{\text{p}}\)**), supporting sentences (**Supp\({}_{\text{s}}\)**), and the joint metrics (**Ans+Supp\({}_{\text{p}}\)**, **Ans+Supp\({}_{\text{s}}\)**) of the DiRe Hotpot-QA probe subset.
### Implementation details
We train all models using an effective batch size of 64. We use an initial learning rate of 1e-4, a linear rate scheduler, a warmup of 2,000 steps (1,000 steps for Musique), and finetune the models for 10 epochs. For SeqGraph, we use GAT Velickovic et al. (2017) for our GNN layers. A maximum sequence length of 256 tokens is used for constructing the input. All experiments have been conducted on a machine with either 4\(\times\)40G A100 GPUs or 4\(\times\)80G A100 GPUs. A detailed list of hyperparameters can be found in Appendix E.
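These settings can be collected into a single configuration along the following lines (illustrative field names, not the actual training script).

```python
train_config = {
    "effective_batch_size": 64,
    "learning_rate": 1e-4,
    "lr_scheduler": "linear",
    "warmup_steps": 2000,        # 1,000 for Musique
    "epochs": 10,
    "max_input_length": 256,     # tokens per passage sequence S_i
    "gnn_type": "GAT",
    "backbone": "t5-base",       # or "t5-large"
}
```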
## 5 Results and Analysis
In this section, we present the main results of the baselines and our proposed approach on Hotpot-QA and Musique (SS5.1), and then perform fine-grained analysis thereafter.
### Multi-hop Performance
The quantitative performance of the models in terms of exact-match and F1 scores for both the final answer and the predicted supports are shown in Table 1. We find that across both model sizes (Base and Large), explicitly predicting the reasoning path helps Path-FiD in improving the answer EM and F1 scores over the vanilla FiD approach. By biasing the model with graph representations, SeqGraph outperforms the baselines on both the Hotpot-QA and the Musique datasets.
SeqGraph achieves a 2-point improvement in both answer and support EM when considering the base variant and 1.5 point improvement for the large variant on the dev set of Hotpot-QA.
On the more challenging Musique dataset, we observe stronger results from SeqGraph, where it records up to a 4-point improvement in both answer and support scores across both model sizes on the dev set. On the test set (in Table 8 of the appendix), the current best performing approach is a two-stage RoBERTa/Longformer-Large model, Select-Answer, where the passage selection/ranking and answer generation stages are optimized separately using different models. SeqGraph-Large achieves state-of-the-art numbers on Answer-F1 with a 5-point improvement over the Select-Answer model\({}^{2}\) even though it is a single-stage approach. When comparing with the top score in the end-to-end (E2E) category, which all of our models belong to, SeqGraph gets a massive 17-point improvement in answer F1 and a 9-point improvement in support F1, establishing the efficacy of our approach. It should also be noted that all of the current models on the leaderboard are discriminative approaches with an encoder-only model (Longformer-Large) encoding a very long context length of 4,096, while all of our models are generative in nature with a much smaller context length of 256. Musique is also designed to be more challenging than Hotpot-QA and explicitly tackles the issue of disconnected reasoning during dataset curation, making it harder for the model to take shortcuts and cheat. The larger performance improvements of SeqGraph on Musique compared to Hotpot-QA showcase the advantage of our proposed approach, providing promising results for further research in this direction to mitigate disconnected reasoning.
Footnote 2: [https://leaderboard.allenai.org/musique_ans/](https://leaderboard.allenai.org/musique_ans/)
### Faithfulness of Reasoning Paths
We follow Yavuz et al. (2022) to perform analysis at the passage and individual fact level to determine how faithful the generated reasoning paths are across different models.
**Predicted Answer in Predicted Titles/Support:** _how often are the predicted answers found in one of the predicted passages or in the predicted supporting facts_\({}^{3}\).
Footnote 3: We do this analysis only on Bridge type questions where the final answer span can be found in context passages, unlike comparison questions where the final answer is usually _yes/no_
**Gold Answer in Predicted Titles/Support:** _how often are the gold answers found in one of the predicted passages or in the predicted supporting facts_.
**Predicted Answer in Gold Titles/Support:** _how often are the predicted answers found in one of the gold passages or in the gold supporting facts_.
Figure 3 shows the described faithfulness metric scores on Hotpot-QA. We find that SeqGraph
| Model | Hotpot-QA Answer EM | Hotpot-QA Answer F1 | Hotpot-QA Support EM | Hotpot-QA Support F1 | Musique Answer EM | Musique Answer F1 | Musique Support EM | Musique Support F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FiD-Base | 61.84 | 75.20 | - | - | 29.38 | 39.97 | - | - |
| Path-FiD-Base | 62.03 | 75.69 | 60.45 | 86.00 | 34.71 | 44.93 | 57.30 | 80.18 |
| SeqGraph-Base | **64.19** | **77.60** | **62.44** | **87.72** | **37.36** | **47.11** | **58.05** | **80.39** |
| FiD-Large | 65.59 | 79.39 | - | - | 36.04 | 46.66 | - | - |
| Path-FiD-Large\({}^{*}\) | 65.80 | 78.90 | 59.30 | 85.70 | - | - | - | - |
| Path-FiD-Large | 65.33 | 79.00 | 61.52 | 86.88 | 42.28 | 53.86 | 62.14 | 82.45 |
| SeqGraph-Large | **66.51** | **81.62** | **63.24** | **88.28** | **46.01** | **56.88** | **65.12** | **83.65** |

Table 1: Performance on the dev set of Hotpot-QA and Musique (answer and support exact-match and F1). Since FiD does not predict a reasoning path, we do not compute its Support EM and F1 scores. Path-FiD-Large\({}^{*}\) indicates the numbers reported by Yavuz et al. (2022), while the other numbers are from our reimplementation.
is more faithful with a 0.5-1.5% improvement over Path-FiD across all considered categories.
### Performance vs Number of hops
We break down the final answer exact-match and F1 scores based on how many supporting facts (or titles for Musique) are required to answer the question. Figure 5 shows this performance breakdown for Hotpot-QA and Figure 6 shows it for Musique. We observe that SeqGraph improves over Path-FiD in the cases where the support includes two or three supporting facts (or titles), but the answer EM takes a hit when the number of supporting facts (titles) is \(\geq 4\). We notice that SeqGraph has a higher support EM than Path-FiD in such cases, where shortcuts may exist in the dataset and Path-FiD relies on those shortcuts to get a higher answer EM but a lower support EM. Section 5.4 quantifies the extent to which Path-FiD suffers from disconnected reasoning as compared to SeqGraph.
### Probing Disconnected Reasoning
Hotpot-QA suffers from information leakage in the form of reasoning shortcuts leading to _disconnected reasoning_. This affects the generalization capability of such models and inflates the performance on the evaluation sets. Figure 4 shows some qualitative examples of disconnected reasoning in Path-FiD that are avoided by SeqGraph.
Trivedi et al. (2020) construct a probe of Hotpot-QA by splitting the two supporting paragraphs for the original question across two questions. If the model can answer the modified questions correctly without the complete context, it suggests that the model uses disconnected reasoning for the original question. By measuring the performance of a model on such a dataset, we arrive at the DiRe score, with a higher value implying more disconnected reasoning. Table 2 shows the DiRe scores for the various models. We see that SeqGraph exhibits lower disconnected reasoning compared to Path-FiD while maintaining strong performance gains on the original evaluation set.
### Comparison with PathFiD+
Yavuz et al. (2022) extend Path-FiD and introduce Path-FiD+ to improve the cross-passage interactions before feeding them to the FiD decoder; they show an improvement of 7 EM points and achieve state-of-the-art results on the Hotpot-QA distractor
| Model | Answer \(\downarrow\) | Supp\({}_{\text{p}}\) \(\downarrow\) | Supp\({}_{\text{s}}\) \(\downarrow\) | Ans + Supp\({}_{\text{p}}\) \(\downarrow\) | Ans + Supp\({}_{\text{s}}\) \(\downarrow\) |
| --- | --- | --- | --- | --- | --- |
| FiD-Base | 51.1 | - | - | - | - |
| Path-FiD-Base | 45.5 | 48 | 49.1 | 22.6 | 24.3 |
| SeqGraph-Base | 44.7 | 46.2 | 45.4 | 21.8 | 22.8 |
| FiD-Large | 53.5 | - | - | - | - |
| Path-FiD-Large | 48.8 | 48.3 | 49.7 | 24.3 | 26.4 |
| SeqGraph-Large | 45.7 | 45.9 | 45.3 | 22.3 | 23.4 |

Table 2: DiRe scores (F1) for various models on the probe dataset of Hotpot-QA, indicating the extent of disconnected reasoning. The lower the score, the better the model.
Figure 3: Comparison of model faithfulness on Hotpot-QA. We find that SeqGraph improves over Path-FiD consistently across all categories.
dataset. However, we find the following limitations of the approach:
**Hop-assumption**: Path-FiD+ adds pairs of contexts as input to the FiD encoder, which assumes a fixed number of hops (in the case of Hotpot-QA, two) and doubles the input sequence length, leading to increased training time.
**Multi-step**: To efficiently encode pairs of passages (instead of an inefficient \(\binom{N}{2}\) passage pairs, where N is the total number of passages), Path-FiD+ also needs to run the vanilla Path-FiD or train another model to choose the first relevant context \(P*\) to jump to and then construct pairs (\(P*\), \(P_{n}\)). This makes it inefficient and not scalable to questions with more hops or to complex datasets like Musique.
In contrast, our approach does not make any assumptions about the number of hops and is scalable. It produces output in a single shot without requiring multiple steps or increased sequence length. While Path-FiD+ may achieve stronger performance on 2-hop Hotpot-QA, our proposed method is more general, efficient and scalable, making it a more practical solution for real-world applications and also easily extendable to the open-domain setting.
## 6 Related Works
Multihop question answering requires a model to perform reasoning over multiple pieces of information, utilizing multiple sources and inferring relationships between them to provide a correct answer to a given question. There have been various approaches and datasets proposed for training QA systems, such as HotpotQA (Yang et al., 2018), IIRC(Ferguson et al., 2020) and Musique (Trivedi et al., 2022).
In the Hotpot-QA full-wiki setting, the task is to find relevant facts from all Wikipedia articles and then use them to complete the multi-hop QA task. Retrieval models play an important role in this setting, such as DPR (Karpukhin et al., 2020), which focuses on retrieving relevant information in the semantic space. Other methods, such as Entities-centric (Das et al., 2019), and Golden Retriever (Qi et al., 2019), use entities mentioned or reformulated in query keywords to retrieve the next hop
Figure 4: Qualitative Analysis of Disconnected Reasoning in Hotpot-QA. Correct/Incorrect hops from entity spans to Passage titles for different cases are shown. In the first two cases, disconnected reasoning by Path-FiD leads to incorrect final answer while SeqGraph gets the path and answer correct. The third case shows Path-FiD getting the final answer right despite the reasoning path being disconnected while SeqGraph gets the connected reasoning path right.
document. Additionally, PathRetriever Asai et al. (2020) and HopRetriever Li et al. (2020) use RNN to select documents to form a paragraph-level reasoning path iteratively. The above methods mainly focus on the open-domain setting (full-wiki) and improve the retriever's performance and do not address the disconnected reasoning problem.
Multiple techniques Jiang and Bansal (2019); Lee et al. (2021); Ye et al. (2021) to counter disconnected reasoning operate at the dataset level, using adversarial training, adding extra annotations or using dataset augmentations to get a balanced train set and prevent the model from cheating.
Below, we highlight differences between our approach and other related works on Hotpot-QA-distractor, as well as other works that combine language models with graphs:
**Generative approaches**: Our generative-FiD approach differs from others using KG/GNN Ju et al. (2022); Yu et al. (2022) as we use an entity-passage graph with Wikipedia hyperlinks. Also, our focus is primarily on the distractor setting of multi-hop QA, while other baselines Ju et al. (2022); Yu et al. (2022) are either single-hop or aim at improving retrieval in the open-domain setting.
**Pipeline vs single-stage**: Other baselines Tu et al. (2019); Chen et al. (2019); Qiu et al. (2019); Wang et al. (2021); Li et al. (2023) use a pipeline approach with distinct encoder models in the reasoning process, while we use a single-stage, one-shot prediction process without assumptions on the number of hops.
**Graph construction**: Other methods Tu et al. (2019); Qiu et al. (2019) select relevant passages heuristically from among distractors to construct graphs. However, we construct our entity-passage graph on all passages (including distractors) and fuse the representations in the encoder.
While a direct comparison with pipeline-based approaches is not possible or fair, we provide comparisons in Table 3 for completeness.
## 7 Conclusion
In this paper, we propose SeqGraph, an approach that utilizes the structured relationship between passages in the context of multi-hop questions to reduce disconnected reasoning. We construct a localized entity-passage graph using Wikipedia hyperlinks, encode it using a GNN, and fuse the structured representations with the text encoder for predicting a reasoning path. Our approach results in strong performance gains in terms of both answer and support EM/F1 on Hotpot-QA and reduces disconnected reasoning as measured by the DiRe score. We also obtain state-of-the-art performance on the more challenging Musique benchmark with a 17-point improvement in answer F1 over the current best end-to-end (E2E) model. More sophisticated methods of encoding the graph structure and of fusing the text and graph representations can be explored in future work.
## Limitations
We identify the following limitations of our work:
**Longer Output Sequences:** While outputting the reasoning path as a single short sequence makes the model more interpretable, it increases the challenge of producing a long, coherent sequence when the question is complex (more than 3 hops). Producing a longer sequence also increases the inference time. Simplifying this output while not sacrificing interpretability is a good future direction.
**Entity Identification:** Our method needs Wikipedia outlinks or an entity linker to construct a localized graph for every question. Generalizing this step by pretraining the model to do entity linking Fevry et al. (2020); Sun et al. (2021); Verga et al. (2020) might eliminate the need to use an external module.
|
2308.01876 | Subsurface pulse, crater and ejecta asymmetry from oblique impacts into
granular media | We carry out experiments of 104 m/s velocity oblique impacts into a granular
medium (sand). Impact craters have nearly round rims even at a grazing angle of
about $10^\circ$, however, the strength of seismic pulses excited by the impact
is dependent upon impact angle, and the ratio between uprange and downrange
velocity peaks can be as large as 5, particularly at shallow depths. Crater
slope, an offset between crater center and impact site, crater volume,
azimuthal variation in ejection angle, seismic pulse shapes and subsurface flow
direction are also sensitive to impact angle, but to a much lower degree than
subsurface pulse strength. Uprange and downrange pulse peak amplitudes can be
estimated from the horizontal and vertical components of the momentum imparted
to the medium from the projectile | Bingcheng Suo, A. C. Quillen, Max Neiderbach, Luke O'Brient, Abobakar Sediq Miakhel, Nathan Skerrett, Jérémy Couturier, Victor Lherm, Jiaxin Wang, Hesam Askari, Esteban Wright, Paul Sánchez | 2023-08-03T17:06:18Z | http://arxiv.org/abs/2308.01876v2 | # Subsurface pulse, crater and ejecta asymmetry from oblique impacts into granular media
###### Abstract
We carry out experiments of 104 m/s velocity oblique impacts into a granular medium (sand). Impact craters have nearly round rims even at a grazing angle of about \(10^{\circ}\), however, the strength of seismic pulses excited by the impact is dependent upon impact angle, and the ratio between uprange and downrange velocity peaks can be as large as 5, particularly at shallow depths. Crater slope, an offset between crater center and impact site, crater volume, azimuthal variation in ejection angle, seismic pulse shapes and subsurface flow direction are also sensitive to impact angle, but to a much lower degree than subsurface pulse strength. Uprange and downrange pulse peak amplitudes can be estimated from the horizontal and vertical components of the momentum imparted to the medium from the projectile.
## 1 Introduction
Impacts on astronomical bodies often have projectile velocity vector that is not normal to the surface (Gilbert, 1893; Shoemaker, 1961); with more than 50% occurring at impact angles between 30 and \(60^{\circ}\)(Melosh and Pierazzo, 2000). Nevertheless, due to their cylindrical symmetry, laboratory impact experiments often focus on normal impacts (Tsimring and Volfson, 2005; Goldman and Umbanhowar, 2008; Matsue et al., 2020; Quillen et al., 2022; Cline and Cintala, 2022; Neiderbach et al., 2023). However, crater properties depend upon the impact angle. For example, the ratio of crater volume times substrate density to projectile mass, sometimes called 'crater efficiency', is sensitive to the impact angle (Gault and Wedekind, 1978; Chapman and McKinnon, 1986; Elbeshausen et al., 2013; Michikami et al., 2017; Takizawa and Katsuragi, 2020). Except near grazing angles, oblique impact experiments and simulations show nearly round crater rims with subtle asymmetric deviations in crater shape (Gault and Wedekind, 1978; Anderson et al., 2003, 2004; Wallis et al., 2005; Raducan et al., 2022). Ejecta mass, angle, and velocity distributions are sensitive to the azimuthal angle for oblique impacts (Anderson et al., 2003, 2004; Anderson and Schultz, 2006; Raducan et al., 2022; Luo et al., 2022). Based on experiments into granular media, crater scaling laws for volume, diameter, and axis ratio in the gravity regime have recently been generalized or extended to take into account substrate slope and impact angle (Takizawa and Katsuragi, 2020).
Experimental studies of oblique impacts into a 2-dimensional photoelastic granular medium find that force-chain propagation in the horizontal (lateral) direction is relatively weak compared to that in the vertical direction (Bester et al., 2019). Simulations in two dimensions of low-velocity oblique impacts find that there is a characteristic depth for disturbance within a granular substrate, which can be described with a skin depth (Miklavcic et al., 2022). Numerical simulations of low-velocity oblique impacts into a granular system showed that an empirical drag force model on the projectile is sensitive to impact angle (Wang et al., 2012). Wright et al. (2022, 2020) find that spherical projectiles, hitting sand at low impact velocity, ricochet depending upon the dimensionless Froude number that is related to the \(\pi_{2}\) dimensionless parameter that is used to characterize impact crater dimensions (Housen and Holspapple, 2011; Celic et al., 2022). Numerical simulations of low-g and low-velocity impacts into granular systems support Froude number scaling for oblique impacts (Miklavcic
et al., 2023).
In a hypervelocity regime (\(\gtrsim\) km/s) for impacts into solid rock (Elbeshausen et al., 2013; Michikami et al., 2017; Takizawa and Katsuragi, 2020), and for lower velocity impacts into sand (Takizawa and Katsuragi, 2020), the ellipticity of the crater depends upon both impact angle and impact velocity. Crater size, volume, and shape are also sensitive to impact angle (Gault and Wedekind, 1978; Elbeshausen et al., 2009; Elbeshausen et al., 2013; Takizawa and Katsuragi, 2020). Experiments and simulations measuring the angle of ejecta imply that the subsurface disturbance varies with azimuthal angle and is sensitive to impact angle (Anderson et al., 2003, 2004; Anderson and Schultz, 2006; Raducan et al., 2022). Studies of hypervelocity impacts into solids have shown that asymmetries present in the shock wave produced during an oblique impact persist to late times, as measured in the far field (Dahl and Schultz, 2001). These studies imply that the momentum direction of the projectile affects the subsurface flow and the nature of a seismic disturbance excited by the impact.
High-velocity impact craters into a solid can be divided into three sequential stages (Melosh, 1985; Melosh and Ivanov, 1999). In the first stage, the impact generates a shock that propagates radially outward from the site of impact and compresses material. A rarefaction wave propagates from the free surface, releasing the pressure in the compressed material. This wave reduces the substrate velocity and changes the direction of the velocity vector (Melosh, 1985; Kurosawa, 2019). The deflected particle trajectories are curved so that they point upward toward the surface, generating the excavation flow of the second stage, and forming a transient crater. An empirical analytical model that is frequently used to characterize flow during the excavation phase (e.g., Croft 1981; Anderson and Schultz 2006) is Maxwell's Z-model (Maxwell, 1977). In contrast with the longer time scale involved in crater excavation, energy, and momentum are rapidly transferred via shockwaves to the target material (Melosh and Ivanov, 1999). Lastly in the third stage, the crater is modified through even longer relaxation processes such as slumping and erosion.
With embedded accelerometers, laboratory experiments of normal impacts into granular media have characterized the duration, strength, and decay rate of impact-excited seismic pulses (Yasui et al., 2015; Matsue et al., 2020; Quillen et al., 2022; Neiderbach et al., 2023). At the time of peak acceleration, the seismic pulse vector orientation appears longitudinal (oriented along the radial vector from the impact site) (Yasui et al., 2015; Quillen et al., 2022). However, subsequently when the pulse velocity peaks, the subsurface velocity flow field resembles Maxwell's Z-model, leading to crater excavation (Neiderbach et al., 2023). Seismic pulse duration is significantly shorter than the crater excavation time-scale (Quillen et al., 2022; Neiderbach et al., 2023). Thus, laboratory experiments of normal impacts into granular media suggest that impacts into granular media exhibit three phases analogous to those of hypervelocity impacts into solids.
Subsurface motions excited by laboratory impacts into solids differ from those excited by impacts into granular media in some ways. The duration of excited pulses is only a few microseconds in a solid (e.g, Dahl and Schultz 2001; Guldermeister and Wunnemann 2017) whereas the duration is longer, of order a ms, in a granular medium (e.g., McGarr et al. 1969; Yasui et al. 2015; Matsue et al. 2020; Quillen et al. 2022). Whereas impact-excited seismic pulses traveling in a solid can exhibit a long coda and contain power at high frequencies (McGarr et al., 1969; Dahl and Schultz, 2001), those in granular systems tend to be comprised of a single, rapidly decaying pulse (e.g., McGarr et al. 1969; Yasui et al. 2015; Matsue et al. 2020; Quillen et al. 2022; Neiderbach et al. 2023). The difference in pulse duration in the two settings suggests that pulse duration is sensitive to the speed of sound or P-waves propagating through the medium. An additional dependence of pulse duration on crater size (suggested by Quillen et al. 2022 with a seismic time-scale) would account for the similarity of the pulse durations in granular media in the lab for different experiments that have similar-sized craters.
The detailed study of ejecta in oblique impacts into granular media (Anderson et al., 2003, 2004; Anderson and Schultz, 2006) inferred that there are asymmetries in the subsurface flow. Our goal in this manuscript is to measure and characterize the subsurface motions and their sensitivity to impact angle. Via tracking of ejecta, Anderson et al. (2003, 2004); Anderson and Schultz (2006) proposed variations of the Maxwell Z-model (Maxwell, 1977) for the subsurface flow field excited by oblique impacts. Through direct measurements of subsurface accelerations with accelerometers, we can test this type of flow model.
Even though most impacts occur at moderate impact angles, the paucity of elliptical craters on bodies such as the Moon and Mars implies that quite low impact angles, \(\theta_{I}\lesssim 10^{\circ}\) from horizontal, are required to form elliptical craters (Bottke et al., 2000; Collins et al., 2011). As the angle required for an impact crater to be significantly asymmetric is sensitive to impact energy (Collins et al., 2011; Elbeshausen et al., 2013; Takizawa and Katsuragi, 2020), we are choosing to do experiments at a higher velocity, about 100 m/s, rather than the few m/s of our previous experiments (Quillen et al., 2022). Secondary craters formed by ejecta are typically at a lower velocity than the originating impact (Housen and Holsapple, 2011). Due to the different dynamical source populations, many of the crater-forming impacts on Transneptunian objects, such as (486958) Arrokoth, could have occurred at a few hundred m/s (Mao et al., 2021).
The goal of our experimental study is to better characterize the dynamics of oblique impacts into granular systems by measuring associated subsurface motions. Our study will aid in predicting the behavior of impacts on granular surfaces by landers (Maurel et al., 2018; Celik et al., 2019; Ballouz et al., 2021; Thuillet et al., 2021), and improve predictions for how impacts disturb the surfaces of astronomical bodies covered in and composed of granular materials. Understanding how seismicity is excited by oblique impacts is important in the development of asteroid deflection strategies (e.g., Rivkin et al., 2021). By studying the subsurface motions, we hope to better understand the processes that cause asymmetry in crater shape and ejecta distributions.
Following Elbeshausen et al. (2013); Raducan et al. (2022), we denote the direction toward the projectile launcher as _uprange_ and the opposite direction as _downrange_.
## 2 Experimental Methods
Figure 1 illustrates our experiments. We carry out two types of experiments. In a) we show impact experiments where we record acceleration in 3 axes from 4 embedded accelerometers. In b) we show how we measure crater profiles after impact by scanning a line laser across the crater. The surface is flat prior to impact in both types of experiments, and accelerometers are not embedded in the substrate in the experiments used to measure crater shape. The impact angle \(\theta_{I}\) is shown in Figure 1 a) and is measured from the horizontal direction, so a low \(\theta_{I}\) corresponds to a grazing impact.
### Airsoft projectiles
The impact experiments use \(M_{p}=0.20\) g spherical projectiles (referred to as pellets or BBs) that are launched with an airsoft gun. The projectile propellant is compressed carbon dioxide and comes in disposable cartridges. The BB diameter is \(5.95\pm 0.01\) mm. The BBs are biodegradable and comprised of white polylactic acid (PLA). We have measured the velocity of the projectiles with high-speed video and find that the impact velocity lies within a narrow range: \(v_{imp}=103\) to \(105\) m/s. The velocity of the projectile is fixed and set by the propellant. Consequently, in this study, we do not vary the impact velocity. The properties of the projectiles are summarized in Table 1.
We built a box to house the airsoft gun for two reasons. In an educational setting, it is important that something that looks like a firearm is not visible. Secondly, to map out the subsurface pulse properties with a few accelerometers, we require the experiments to be repeatable. By clamping the airsoft gun within the box, we can ensure that the BBs repeatably hit the same target position at the same angle.
### Videos
During each impact, we filmed with a Krontech Chronos 2.1 high-speed camera. An Arduino controls an actuator that is used to push the airsoft gun trigger so that it fires the BB. The same Arduino is programmed to simultaneously trigger the recording of accelerometer signals by two oscilloscopes and the high-speed video camera.
When doing experiments with the accelerometer array, videos from the high-speed video camera were taken at 6265 frames per second (fps), with image frames 1920 \(\times\) 120 pixels. Since both high-speed video and accelerometers were triggered simultaneously, the time of impact identified in the 6265 fps high-speed camera videos identifies the impact time in the accelerometer data. We estimate that the time of impact is accurate to about 0.2 ms.
Experiments used to measure crater profiles were filmed with high-speed videos at 1000 fps (with image frames 1920 \(\times\) 1080 pixels), giving a wider field of view of the ejecta curtain compared to those taken in experiments with accelerometer arrays. These videos are concatenated and available as supplemental video Obliqueimpacts.mp4. The impact was viewed with a near horizontal camera angle (about \(7^{\circ}\) from horizontal), chosen to show the ejecta curtain. These videos were used to verify the impact angle, determine whether the projectile ricocheted and, if so, measure the speed and angle of the projectile after rebound. These videos were also used to identify the site of impact with respect to the green laser target that illuminates the sand from above and which was also viewable during the laser scans.
Experiments used to measure crater profiles were filmed (after impact) with a regular video camera (Blackmagic Pocket Cinema Camera 4K) at 60 fps while scanning a line laser across the crater. The camera viewing angle was \(45^{\circ}\) from horizontal. Video frames from this camera are 3840 \(\times\) 2160 pixels. The same camera was also used to photograph the craters from above.
High-speed videos for both sets of experiments were filmed with a bright (100,000 lumen) LED light at about \(45^{\circ}\) from vertical. Crater photographs viewed from above the crater were lit with the same light. To best show the crater rim we lit the crater with nearly horizontal light coming from the downrange side. Laser scan videos were done in ambient room light so that the laser line is clearly visible in each frame.
The pixel scales in the 60 fps and 1000 fps videos and in the photographs are measured from a machined aluminum block placed on the surface that is 25.4 mm long and wide.
Experiments measuring crater profiles were carried out early June 2023, and those using accelerometers were carried out during June and July 2023.
### Granular substrate target
We use a galvanized 41.6 liter (11 gallons) washtub with a rim diameter of 50.2 cm and depth of 25 cm to hold
\begin{table}
\begin{tabular}{l l l} \hline Quantity & Symbol & Value \\ \hline Mass & \(M_{p}\) & \(0.20\pm 0.002\) g \\ Radius & \(R_{p}\) & \(2.98\pm 0.005\) mm \\ Density & \(\rho_{p}\) & 1.80 g cm\({}^{-3}\) \\ Speed & \(v_{imp}\) & 104 \(\pm\)1 m/s \\ Composition & Polylactic acid (PLA plastic) \\ \hline \end{tabular}
\end{table}
Table 1: Properties of the Projectiles
our granular material. The tub is filled with sand with a bulk density of \(\rho_{s}=1.5\) g cm\({}^{-3}\). We use sand so that the ratio between projectile to grain radius is large (\(\sim 10\)), but the grains are not as small as in a powder which could be affected by aerodynamics and electrostatic phenomena. The grain semi-major axis mean value is \(a_{s}\approx 0.3\) mm as measured in previous experiments (Wright et al., 2020).
The sand in the tub is raked and leveled prior to every impact experiment to reduce local compaction caused by previous impacts. Rake tines are 10 cm long and 4 cm apart.
Two green lasers are used to set the impact target point. These two laser lines are mounted above the tub and mark the center of the tub. Two additional red lasers are mounted on the airsoft gun to help us aim it. The impact angle is measured using the red lasers and a large protractor prior to each impact.
### Dimensionless parameters
Experiment parameters are summarized in Table 2 where we include the dimensionless parameters \(\pi_{2}\equiv gR_{p}/v_{imp}^{2}\) and \(\pi_{4}\equiv\rho_{s}/\rho_{p}\). We also compute dimensionless parameter \(\pi_{3}=\frac{Y}{\rho_{s}v_{imp}^{2}}\) using a bulk cohesion material strength of \(Y=500\) Pa based on measurements of regolith by Brisset et al. (2022). The Froude number is \(Fr=\pi_{2}^{-\frac{1}{2}}\). These dimensionless parameters are commonly used in crater scaling relationships (Housen and Holsapple, 2011; Celic et al., 2022). Here \(R_{p},\rho_{p},M_{p}\) refer to projectile radius, density, and mass, \(v_{imp}\) is projectile impact velocity, and \(\rho_{s}\) is mean substrate density. The gravitational acceleration is \(g\). For a normal impact, we measure the distance from rim peak to rim peak giving diameter \(D_{cr}=7\) cm and a radius \(R_{cr}=3.5\) cm. This is used to estimate the time for transient crater formation \(\tau_{ex}=\sqrt{R_{cr}/g}\)(Housen et al., 1983; Melosh, 1985). A crater is in the strength regime if \(\pi_{3}^{1+\mu/2}\pi_{4}^{\nu}/\pi_{2}>1\) with exponents \(\mu\sim 0.4\) typical of granular systems and \(\nu\sim 0.4\)(Housen and Holsapple, 2011). For our experiments \(\pi_{3}^{1.2}\pi_{4}^{0.4}/\pi_{2}\sim 1\) is similar to unity, so our experiments lie near the division between strength and gravity regimes for crater formation (Holsapple, 1993; Scheeres et al., 2010).
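The entries of Table 2 follow directly from the projectile and substrate values; a short script that reproduces them, assuming \(g=9.8\) m s\({}^{-2}\) and the crater radius quoted above, is sketched below.

```python
import numpy as np

g = 9.8          # gravitational acceleration (m/s^2)
R_p = 2.98e-3    # projectile radius (m)
v_imp = 104.0    # impact speed (m/s)
rho_p = 1800.0   # projectile density (kg/m^3)
rho_s = 1500.0   # substrate bulk density (kg/m^3)
Y = 500.0        # assumed bulk cohesion (Pa)
R_cr = 0.035     # crater radius for a normal impact (m)

pi2 = g * R_p / v_imp**2           # ~2.7e-6
Fr = 1.0 / np.sqrt(pi2)            # ~608
pi3 = Y / (rho_s * v_imp**2)       # ~3e-5
pi4 = rho_s / rho_p                # ~0.83
tau_ex = np.sqrt(R_cr / g)         # ~0.06 s, the transient crater formation time
# ~1, placing the experiments near the strength/gravity division
strength_vs_gravity = pi3**1.2 * pi4**0.4 / pi2
```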
### Accelerometers and accelerometer placement
The accelerometers are 5V-ready analog break-out boards and house a \(\pm 16\)g (ADXL326) or \(\pm 3\)g (ADXL335) triple-axis Analog Devices accelerometer integrated circuit, as described previously (Neiderbach et al., 2023; Quillen et al., 2022). The dimensions of the accelerometer printed circuit boards (PCBs) are 19 mm \(\times\) 19 mm \(\times\) 3 mm. The accelerations we measured are integrated over this area, not those experienced by individual sand grains. The accelerometer's 1600 Hz bandwidth upper limit for its x and y axes corresponds to a half period of 0.3 ms, which is shorter than the width of the acceleration pulses seen in
\begin{table}
\begin{tabular}{l l l} \hline Substrate & sand & \\ Sand grain size & \(a_{s}\) & 0.3 mm \\ Substrate bulk density & \(\rho_{s}\) & 1.5 g cm\({}^{-3}\) \\ Washtub rim radius & \(R_{\rm tub}\) & 25.1 cm \\ Washtub depth & \(H_{\rm tub}\) & 25 cm \\ \(\pi_{2}\) & \(gR_{p}/v_{\rm imp}^{2}\) & \(2.7\times 10^{-6}\) \\ \(Fr\) & \(v_{\rm imp}/\sqrt{gR_{p}}\) & 608 \\ \(\pi_{3}\) (for \(Y=500\) Pa) & \(Y/(\rho_{s}v_{\rm imp}^{2})\) & \(3\times 10^{-5}\) \\ \(\pi_{4}\) & \(\rho_{s}/\rho_{p}\) & 0.83 \\ \(\tau_{ex}\) & \(\sqrt{R_{cr}/g}\) & 60 ms \\ \hline \end{tabular}
\end{table}
Table 2: Physical values and dimensionless numbers for experiments
Figure 1: Illustrations of the experiments. a) We show the accelerometer array and the airsoft gun. There are three oscilloscope channels per accelerometer because we record all three acceleration axes. Accelerometer boards are shown larger than their actual scale. b) We show the laser line that is used to measure crater profiles. A stepper motor is used to slowly scan the laser line across the crater while the video camera records images. During the impact experiments, we also record high-speed video.
our experiments. The bandwidth upper limit on the z-axis is lower at 550 Hz. The bandpass upper limits are the frequencies at which the signal amplitude is reduced by 3 dB (the signal power drops by a factor of 0.5) and are approximately equal to the cutoff frequency of a low pass filter. We estimate the rms noise in the signal to range from 0.07 to 0.14 \(m/s^{2}\). The outputs of all three axes of the accelerometers were recorded with two 8-channel digital oscilloscopes (Picoscope model 4824A) with a 200 kHz sampling rate.
A long straight metal bar is used to level the surface after raking the sand. The accelerometers are then embedded into the substrate. We orient the accelerometers so that their \(+x\) axes point away from the impact site and their \(+y\) axes point vertically up. This leaves the lower bandwidth \(z\) axis oriented in a direction that should give a weaker signal. Accelerometer calibration values were determined by measuring the voltage along each axis at six different cardinal orientations giving accelerations of \(\pm\)1g due to gravity.
To ensure that the accelerometers are correctly spaced, are at the desired depth, and are correctly oriented, we individually placed each accelerometer in the sand. Tweezers were used to embed the accelerometers. The tweezer prongs are marked at centimeter intervals along their length so we can set the accelerometer depth. The DC voltage levels of each accelerometer were monitored in all three axes during placement to monitor their orientation. We compared the DC voltage levels of the accelerometer signals prior to impact to the calibration values and find that the accelerometers, once embedded, are typically within \(10^{\circ}\) of the desired orientation. We compute velocity as a function of time by numerically integrating the acceleration signal in all three components. The drift rate of velocity pulses, due to the rms noise in acceleration signals and integration errors, is around \(10^{-4}\)\(m/s\) per millisecond. Because of the uncertainties in accelerometer orientation, it is more robust to use acceleration or velocity magnitudes than individual radial or vertical components. Because of the rapid decay of impact-excited pulses as a function of distance from the impact site (Quillen et al., 2022; Neiderbach et al., 2023), uncertainties in acceleration or velocity are dominated by a few mm errors in accelerometer placement with respect to the actual site of impact, rather than errors in orientation. Sometimes the projectile hits a wire during a ricochet, causing a spurious spike in the accelerometer signals.
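A schematic of the conversion from recorded voltages to acceleration and velocity for a single accelerometer axis is sketched below; the calibration uses the \(\pm\)1g cardinal orientations described above and the integration is a cumulative trapezoid rule. Variable names are illustrative, and the sketch omits the per-channel bookkeeping of the actual analysis.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def calibrate_axis(v_plus_1g, v_minus_1g, g=9.8):
    """Zero-g offset (V) and scale (V per m/s^2) from the two cardinal orientations of one axis."""
    offset = 0.5 * (v_plus_1g + v_minus_1g)
    scale = (v_plus_1g - v_minus_1g) / (2.0 * g)
    return offset, scale

def voltage_to_velocity(voltage, t, offset, scale):
    """Convert an oscilloscope trace (sampled at 200 kHz) to acceleration and velocity."""
    accel = (voltage - offset) / scale                    # m/s^2
    # velocity in m/s; integration noise gives a drift of order 1e-4 m/s per ms
    vel = cumulative_trapezoid(accel, t, initial=0.0)
    return accel, vel
```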
Coordinate directions for the experiments differ from those used to describe the accelerometer. We show cylindrical radius, \(R\) from the site of impact and \(z\), giving height above the surface in Figure 1a. Embedded accelerometers have \(z<0\) and the distance in spherical coordinates from the site of impact \(r=\sqrt{R^{2}+z^{2}}\). In both spherical coordinates and cylindrical coordinates, we use \(\phi\) to represent the azimuthal angle. Accelerometer location with respect to the site of impact is measured from the position of the accelerometer integrated circuit which is located at the center of the PCB and is oriented vertically during the experiments.
For each individual impact experiment with accelerometer arrays, 4 accelerometers are embedded in the sand, as shown in Figure 1a. Taking into account 3 axes per accelerometer and 2 trigger channels, we use 14 channels of the total 16 possible with our two 8-channel oscilloscopes.
Each individual impact experiment is done with accelerometers at the same depth, with \(|z|=3,5\) or 7 cm. During each impact experiment we record 4 accelerometers, with 2 uprange and 2 downrange of the impact, as shown in Figure 1a. Each uprange or downrange pair is at two radii, either at \(R=6\) and 10 cm, \(R=7\) and 11 cm, or \(R=8\) and 12 cm. The two positions nearest the impact site are recorded with \(\pm\)16g accelerometers to prevent signals from over-saturation and the further two positions are recorded with \(\pm\)3g accelerometers. Accelerometer positions are summarized in Table 3 in the form of templates. Each row in this table corresponds to a separate impact experiment as 4 accelerometers were used in each experiment. Impact experiments with accelerometers were carried out at 9 different impact angles, with \(\theta_{I}=10^{\circ}\) to and including \(90^{\circ}\) in intervals of \(10^{\circ}\). The list of experiments at each impact angle is summarized in Table 4. At \(\theta_{I}=20,40\) and \(60^{\circ}\) we did experiments at three depths and 6 radii (as summarized with Template 1 in Table 3). At other impact angles, we carried out fewer experiments (following Templates 2 and 3). Template 1 shows a relatively full sampling of subsurface positions at 3 depths and 6 radial distances at both uprange and downrange positions. Sets of experiments using Template 1 are used to examine pulse velocity direction and peak values as a function of position. Accelerometer data from the different impact experiments at the same impact angle are combined to study the sensitivity of pulse amplitude with distance from the impact site, azimuthal angle, and depth. Templates 2 and 3, in addition to Template 1, are used to study the sensitivity of pulse strength and shape at specific positions as a function of impact angle.
### Measuring crater profiles
Experiments used for measuring crater profiles were done without embedding the accelerometers to reduce disturbance of the substrate prior to impact. After the impact we scanned the crater with a moving red line laser, as shown in Figure 1b. The red line laser is mounted to a stepper motor-controlled worm drive and is oriented vertically so that the crater is illuminated with a vertical sheet of red light. We drove the worm drive to move the laser line slowly and smoothly across the crater at 2 mm per second while filming at 60 fps.
To measure crater profiles we use a Cartesian coordinate system with \(x,y\) coordinates in the horizontal plane and \(+z\) upward. The laser line lies in the \(x,z\) plane. As the laser scans, it moves in the \(y\) direction. The \(+x\) direction is uprange, toward the airsoft gun. In each video frame, the horizontal direction is along the x-axis. The
vertical direction in the camera images is converted to the \(z\) coordinate by dividing by the cosine of the camera viewing angle (\(\cos(45^{\circ})\)). The \(z\) coordinate was then shifted so that the undisturbed surface outside the crater has \(z=0\). The \(x\) and \(y\) coordinates are shifted so that they are zero at the site of impact. We began each 60 fps video with a view of the green laser target which is also visible in the high-speed video. We measure the site of impact from the 1000 fps high-speed video and use the distance between the impact site and the green target to estimate the site of impact in the 60 fps video. This comparison allowed us to estimate the site of impact, which is not necessarily the same as the deepest point or center of the crater.
From the 60 fps video of the scanned red laser, we extracted video frames every 1/30 of a second. In each vertical column of each video frame, the location of the maximum red pixel gave us the position of the laser line in that pixel column. We fit a line to the laser line on the left and right sides, outside the crater, and subtracted it to make sure that crater depth was measured from the level of the undisturbed surface. In each frame, we measured crater depth along a particular vertical \(y\) coordinate. The values of the \(y\) coordinate in each frame were determined by measuring the extent of laser line translation from one side of the crater to the other and assuming that the scan rate (set by the worm gear motor) is constant. By combining profiles from each video frame showing the laser line at a different position, we measure the crater depth as a function of the \(x\),\(y\) position.
The local surface slope in degrees was computed from the \(x\) and \(y\) gradients of the \(d(x,y)\) function giving crater depth. With \(\nabla d(x,y)=(\frac{\partial d}{\partial x},\frac{\partial d}{\partial y})\) the slope is \(s=\arctan|\nabla d(x,y)|\).
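The reduction from a laser-scan video frame to a depth profile, and from the assembled depth map \(d(x,y)\) to a slope map, can be sketched as follows. Frame handling and the fit to the undisturbed surface are simplified, and the width of the edge windows used for that fit is an assumed value.

```python
import numpy as np

def laser_profile(frame_rgb, camera_angle_deg=45.0, edge=50):
    """Depth profile along x from one video frame of the scanning line laser."""
    red = frame_rgb[:, :, 0].astype(float)
    rows = red.argmax(axis=0)                 # brightest red pixel in each column
    z_image = rows.astype(float)              # vertical image coordinate of the laser line
    # convert the image vertical coordinate to height, accounting for the 45 deg viewing angle
    z = z_image / np.cos(np.radians(camera_angle_deg))
    # remove the undisturbed-surface level using columns outside the crater on both sides
    x = np.arange(z.size)
    edges = np.r_[0:edge, z.size - edge:z.size]
    p = np.polyfit(x[edges], z[edges], 1)
    return z - np.polyval(p, x)

def slope_map(depth, pixel_scale):
    """Local slope in degrees from the depth map d(x, y) (rows = y, columns = x)."""
    dd_dy, dd_dx = np.gradient(depth, pixel_scale)
    return np.degrees(np.arctan(np.hypot(dd_dx, dd_dy)))
```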
### Ejecta angles
While filming impacts the target was brightly illuminated; nevertheless, ejecta travels across more than a few pixels during each exposure in the 1000 fps high-speed videos. Because particles move more than a few pixels during a ms, ejecta particles appear as streaks that are aligned in the direction of motion. To measure the orientation of ejecta particle motions we compute local histograms of oriented gradients (HOG). This type of histogram is commonly used in object recognition software (Dalal and Triggs, 2005). In each 22\(\times\)22 pixel square cell (about 3.5\(\times\)3.5 mm) in a single video frame, we compute histograms of oriented gradients with the hog routine that is part of the image processing Python package scikit-image. We use unsigned gradients so orientation angles lie between \([-90^{\circ},90^{\circ}]\).
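A sketch of how a dominant streak orientation per cell could be extracted with the scikit-image hog routine is given below; the mapping from histogram bins to angles in \([-90^{\circ},90^{\circ}]\) is one reasonable convention and not necessarily the exact one used to produce the figures.

```python
import numpy as np
from skimage.feature import hog

def dominant_orientations(gray_frame, cell=22, n_bins=18):
    """Dominant local streak orientation (degrees in [-90, 90]) for each cell x cell block."""
    h = hog(gray_frame, orientations=n_bins, pixels_per_cell=(cell, cell),
            cells_per_block=(1, 1), feature_vector=False)
    # h has shape (n_cells_y, n_cells_x, 1, 1, n_bins); take the strongest unsigned-gradient bin
    hist = h[:, :, 0, 0, :]
    best_bin = hist.argmax(axis=-1)
    # unsigned orientations cover 0-180 deg; map bin centers into [-90, 90]
    angles = (best_bin + 0.5) * (180.0 / n_bins)
    return np.where(angles > 90.0, angles - 180.0, angles)
```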
## 3 Experiments measuring crater shape and showing ejecta
We carried out a series of impact experiments, at 8 different impact angles separated by \(10^{\circ}\), to measure crater
\begin{table}
\begin{tabular}{l c c} \hline Impact angles \(\theta_{I}\) & Templates & Figures \\ \hline
10,30,50,70\({}^{\circ}\) & 2,3 & 13, 16 \\
20,40,60\({}^{\circ}\) & 1 & 13 – 17, 19 \\
80\({}^{\circ}\) & 2,3 & 13, 16, 19 \\
90\({}^{\circ}\) & 3 & 13, 16 \\ \hline \end{tabular} At each impact angle, the number of experiments is based on the template, with templates listed in Table 3. An impact experiment was done for each row in Table 3 for the given template.
\end{table}
Table 4: Impact experiments with accelerometers
\begin{table}
\begin{tabular}{l c c c c} \hline Template & \(\pm\)3g & \(\pm\)16g & \(\pm\)16g & \(\pm\)3g \\ \hline Template 1 & (10,-3,0) & (6,-3,0) & (-6,-3,180) & (-10,-3,180) \\ & (11,-3,0) & (7,-3,0) & (-7,-3,180) & (-11,-3,180) \\ & (12,-3,0) & (8,-3,0) & (-8,-3,180) & (-12,-3,180) \\ & (10,-5,0) & (6,-5,0) & (-6,-5,180) & (-10,-5,180) \\ & (11,-5,0) & (7,-5,0) & (-7,-5,180) & (-11,-5,180) \\ & (12,-5,0) & (8,-5,0) & (-8,-5,180) & (-12,-5,180) \\ & (10,-7,0) & (6,-7,0) & (-6,-7,180) & (-10,-7,180) \\ & (11,-7,0) & (7,-7,0) & (-7,-7,180) & (-11,-7,180) \\ & (12,-7,0) & (8,-7,0) & (-8,-7,180) & (-12,-7,180) \\ Template 2 & (10,-3,0) & (6,-3,0) & (-6,-3,180) & (-10,-3,180) \\ Template 3 & (10,-5,0) & (6,-5,0) & (-6,-5,180) & (-10,-5,180) \\ \hline \end{tabular} Notes: In the top row we show which type of accelerometer is used at each position. Each row shows the position for placing the accelerometer \((R,z,\phi)\) in cylindrical coordinates and in units of cm and degrees. The azimuthal angle \(\phi=0\) is a position directly down-range of the impact site and \(\phi=180\) is uprange of the impact site. The site of impact is at \((R,z)=(0,0)\).
\end{table}
Table 3: Accelerometer placement templates \((R,z,\phi)\)
Figure 2: Frames from 1000 fps videos showing the ejecta curtains for different impact angles. In some cases, the projectile is seen ricocheting off the surface. At the top of each frame, we label time from impact. A 4 cm scale bar is shown in the left panels. The projectile came from the right. The impact angle is labeled in the left panels. The angle of the ejecta curtain increases as a function of the impact angle. The ejecta curtains are asymmetric. In each experiment, the site of impact lies on the dotted line.
profiles. These same experiments were used to look at the ejecta curtains. This set of experiments is shown in Figures 2 - 12. At each impact angle, a row of panels in Figure 2, 3, 4, and 6 shows the same impact experiment.
### Ejecta curtains
Frames from 1000 fps videos are shown in Figure 2. Each row in this Figure shows an experiment at a different impact angle. The estimated time of each frame from the moment of impact (with an accuracy of about 1 ms) is shown on the top of each frame. The dashed white lines show the estimated horizontal coordinate of the estimated site of impact. The projectile came from the right, so the uprange is to the right.
As described in section 2.7, in image frames 15 ms after impact, the average direction for ejecta motion was computed using histograms of oriented gradients. These are plotted as tan segments on top of the original video frame in the left column of Figure 3. In the right column of Figure 3, the orientation angle of each segment is shown in color with color-bar on the top. White corresponds to a horizontal orientation.
Figure 2 and 3 illustrate that the ejecta leaves the surface at a higher angle (from horizontal) for near-normal impacts than for grazing impacts. The edge of the ejecta curtains appears more diffuse at grazing impact angles. The ejecta angles are asymmetric with steeper ejecta angle uprange (to the right in Figure 2 and toward the projectile launcher) than downrange (to the left and away from the projectile launcher). The ejecta curtains are more massive on the downrange side. Using particle tracking methods on particles at all azimuthal angles, Anderson et al. (2004) found that ejecta angle was \(\sim 15^{\circ}\) lower downrange for a \(\theta_{I}=30^{\circ}\) impact into the sand than for a normal impact. As our Figures 2 and 3 primarily show ejected material downrange of the impact site, the trend we see of ejecta angle increasing with increasing impact angle is consistent with their study.
### Crater shapes
For the same 8 experiments described in section 3.1, we measured crater profiles. The crater depth profiles are shown in the left column of Figure 4. Local slopes are shown in the middle column of Figure 4. After scanning the crater with the laser line, we took a single photograph of the crater from above. These photographs are shown in the right column in Figure 4.
Features in the crater profiles and slopes are in some cases due to the projectile when it ricochets. For example, a narrow low slope region along the major axis evident in the slope in Figure 4b at \(\theta_{I}=40^{\circ}\) is probably due to the projectile which passed through the ejecta curtain as it ricocheted (see Figure 2). Features in crater morphology associated with ricochet were also seen in simulations of hypervelocity impacts (Elbeshausen et al., 2013).
Figure 5 shows major and minor axis profiles and Figure 6 shows histograms of slope values along the major and
Figure 3: Histograms of oriented gradients are shown in right panels on ejecta curtain images 15 ms after impact. The impact angle is shown on the left panels. Ejecta angle is shallower at grazing impact angles than at near-normal impact angles.
minor axes. In Figure 5 the minor axis profiles for different impact angles are shown in the bottom panel and the major axis profiles are shown in the top panel. Uprange is to the right. Profiles are offset vertically so that they can be compared. In the top panel, and with dotted lines, we show a horizontally reflected version of the major axis profile to make it clear that the oblique impact craters are not symmetric about the site of impact, which is approximately at \(x=0\). Shallow slopes on the downrange side of a crater caused by an oblique impact were previously seen in the high velocity impacts into pumice dust by Gault and Wedekind (1978).
To characterize the distribution of slopes within the craters, in Figure 6 we show histograms of slopes. In this figure, each row shows a different impact angle. Dotted and solid lines in the left column show histograms of the slope along the crater major axis uprange and downrange, respectively. The right column shows histograms of slope values taken along the crater minor axis. The histograms show that the surface slopes tend to be higher at higher impact angles. For low impact angles \(\theta_{I}<50^{\circ}\), the uprange side of the crater is steeper than the downrange side.
Figure 5 illustrates that the crater profiles have regions with nearly constant slopes. A cone with a point directed down is a surface with a constant slope, whereas a bowl-shaped surface has a shallow slope at the bottom and steep slopes along its rim. When are impact craters more nearly conical rather than bowl-shaped? We find that for \(\theta_{I}\gtrsim 40^{\circ}\), the craters are more sharp-bottomed while for more grazing impact angles, the bottoms are more round.
Based on shock models for simple (non-complex) craters into a solid, we expect that craters should be bowl-shaped (Melosh, 1989). This is confirmed via experiments of impacts into solids (Turtle et al., 2005). However, a bowl-shaped crater has a high surface slope just inside its rim. In a granular material, if the transient crater is initially bowl-shaped, the surface slope would be above the static angle of repose of the medium, near but within the crater rim. For our sand, the static angle of repose (the angle of the steepest sand pile that is stable) is approximately \(35^{\circ}\). After excavation of a transient crater in a granular system, material near the rim slides downward toward the crater center (as seen in videos from above, E. Wright private communication). The collapse near the rim would give a more conical-shaped crater with a slope near the static angle of repose (Yamamoto et al., 2006). However, we see an impact angle dependent crater slope distribution and uprange/downrange asymmetry in crater slope (as seen in the histograms of Figure 6), so while crater slopes are near the static angle of repose, we also see variations in slope. Perhaps a late phase of deformation (e.g., Neiderbach et al. 2023) further disturbs the substrate material, causing the material to slide further and reducing the slope to below the static angle of repose (e.g., Carrigy 1970). The final slope angle would be more similar to a lower angle, called the dynamic angle of repose, which is measured on a granular system following a landslide (Kleinhans et al., 2011). If this occurs preferentially on the downrange side and for grazing impacts, then we might also account for the shallower slopes seen in these two settings. Alternatively, crater excavation in a granular system may differ from that in a solid, giving a transient crater that is not bowl-shaped, particularly on the downrange side of grazing impacts where we see the shallowest slopes.
In oblique impacts into sand, Anderson et al. (2004) found that the ejecta angle on the uprange side was nearly equal to that of a normal impact. In Figure 6 we see no strong relation between the uprange crater slope and impact angle. Assuming that a higher ejection angle gives a higher crater slope, this is consistent with uprange ejection angle being insensitive to impact angle (and supporting the results by Anderson et al. 2004).
In Figure 7 the red bars extend between the uprange and downrange median slopes, measured along the crater's major axis. The widths of the bars illustrate that the downrange crater sides tend to be shallower than the uprange sides at low impact angles, \(\theta_{I}<40^{\circ}\). The figure also shows the weaker sensitivity of the uprange slope (the top of the red bars) to the impact angle. In this plot we have also plotted the crater axis ratio, using the major and minor axes measured from rim peak to rim peak. While normal craters are rounder, there was significant scatter in the crater axis ratios. No strong variation in crater ellipticity is evident near \(\theta_{I}=50^{\circ}\), the angle that divides impacts that ricocheted from those that did not.
### Crater and ricochet measurements
Crater properties measured from the crater profiles (shown in Figure 4) are listed in Table 5. The crater's major axis is the distance between uprange and downrange rim peaks. The crater axis ratio is the ratio of the crater length to its width, measured from rim to rim. The distance \(d_{ai}\) is the distance between the midpoint of the uprange and downrange rim peaks and the site of impact (which we estimated using high-speed videos). We list the maximum depth, with zero corresponding to the undisturbed surface level well outside the crater. We measured crater volume \(V_{cr}\) by integrating the depth profile within the zero-level contour just inside the crater rim. Crater efficiency, denoted \(\pi_{V}\), for each impact is computed from the crater volume \(V_{cr}\), substrate density \(\rho_{s}\), and projectile mass \(M_{p}\) (following Housen and Holsapple 2011; Elbeshausen et al. 2009)
\[\pi_{V}\equiv\frac{\rho_{s}V_{cr}}{M_{p}}. \tag{1}\]
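Equation 1 is straightforward to evaluate; a minimal sketch is given below. The substrate density and projectile mass in the example call are placeholder values for illustration only, not the values from our experiments.

```python
# Minimal sketch of equation 1 (crater efficiency).
def crater_efficiency(V_cr_cm3, rho_s_g_cm3, M_p_g):
    """Dimensionless crater efficiency pi_V = rho_s * V_cr / M_p (equation 1)."""
    return rho_s_g_cm3 * V_cr_cm3 / M_p_g

# Example with the theta_I = 40 deg crater volume from Table 5 and placeholder
# (non-experimental) values for the substrate density and projectile mass.
print(crater_efficiency(V_cr_cm3=13.2, rho_s_g_cm3=1.5, M_p_g=0.2))
```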
High-speed videos from the same impact experiments used to measure crater profiles are used to measure the projectile velocity \(V_{Ric}\) and angle \(\theta_{Ric}\) (from horizontal) after ricochet. Uncertainty in the crater and ricochet measurements is reflected in the specified decimal precision of the measurements listed in Table 5. In Figures 5 - 12 we plot measurements from Table 5.
Figure 4: Crater shapes as a function of impact angle. In a) (the left panels) we show depth with a color bar at the top. In b) (the middle panels) we show the slope with a color bar at the top. In c) (the right panels) we show photographs of the crater, taken from above. Each row corresponds to a single impact at an impact angle that is shown on the upper left of each panel. The projectile came from the right, so uprange is the +x direction. The site of impact is at the origin and marked with a plus sign in a) and c).
Figure 5: Major and minor axis crater depth profiles. a) Major axis crater profiles, with the projectile originating from the \(+x\) direction. Solid lines show the profile, and dotted ones show a mirror image of the profile, reflected about \(x=0\), the site of impact. Asymmetry in the profiles can be seen by comparing dotted and solid lines in the top panel. Profiles have depth in mm but they are consecutively offset vertically by 5 mm and shown in order of impact angle with higher impact angles on the bottom. Crater symmetry depends upon impact angle with more asymmetric craters at lower impact angle. After impact in the experiment with \(\theta_{I}=60^{\circ}\), the projectile itself lay inside the crater which is why the major axis profile has a bump at \(x\approx-20\) mm. b) Similar to a) except showing minor axis profiles. The key shows the impact angle, \(\theta_{I}\), for profiles in both a) and b).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Impact angle \(\theta_{I}\) (deg) & 10 & 20 & 30 & 40 & 50 & 60 & 70 & 80 \\ \hline Crater major axis \(2a_{cr}\) (mm) & 56.6 & 64.2 & 62.0 & 74.1 & 77.9 & 72.8 & 72.7 & 71.4 \\ Crater axis ratio \(a_{cr}/b_{cr}\) & 1.35 & 1.18 & 1.09 & 1.22 & 1.19 & 1.04 & 1.09 & 1.08 \\ Distance \(d_{ai}\) (mm) & 6.3 & 8.2 & 11.1 & 14.9 & 10.5 & 6.8 & 3.8 & 2.6 \\ Maximum crater depth (mm) & 6.8 & 9.5 & 10.5 & 14.3 & 15.3 & 14.7 & 15.5 & 16.7 \\ Crater volume \(V_{cr}\) (cm\({}^{3}\)) & 3.3 & 7.0 & 8.5 & 13.2 & 15.3 & 14.6 & 13.3 & 14.7 \\ Crater efficiency \(\pi_{V}\) & 25 & 52 & 64 & 99 & 115 & 109 & 100 & 110 \\ Median slope uprange (deg) & 24 & 27 & 28 & 27 & 30 & 29 & 30 & 29 \\ Median slope downrange (deg) & 18 & 22 & 24 & 24 & 30 & 29 & 31 & 30 \\ Ricochet speed \(v_{Ric}\) (m/s) & 83 & 46 & 31 & 7.8 & 0.8 & - & - & - \\ Ricochet angle \(\theta_{Ric}\) (deg) & 10 & 16 & 17.5 & 28 & 12 & - & - & - \\ Horizontal momentum lost \(\Delta p_{x}/(M_{p}v_{imp})\) & 0.12 & 0.51 & 0.58 & 0.70 & 0.64 & 0.50 & 0.34 & 0.17 \\ Vertical momentum change \(\Delta p_{z}/(M_{p}v_{imp})\) & 0.31 & 0.46 & 0.59 & 0.68 & 0.77 & 0.87 & 0.94 & 0.98 \\ Fraction of kinetic energy lost \(\Delta E/(0.5M_{p}v_{imp}^{2})\) & 0.36 & 0.80 & 0.91 & 0.99 & 1 & 1 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 5: Crater and ricochet properties
Figure 6: Histograms of slopes are computed along the major and minor axes of craters made at different impact angles. The left panel shows histograms along the major axis with solid lines showing histograms for the downrange side and dotted lines showing histograms for the uprange side. The right panels show histograms of slopes along the minor axis. Impact angles are labeled on the upper right in each panel of the right column. The surface slopes within the crater are steeper at higher impact angles. At low-impact angles (\(\theta_{I}\) below \(40^{\circ}\)), the slope is lower on the downrange side.
Figure 8 shows crater major and minor axis lengths as a function of impact angle. The same figure shows the maximum crater depth. Figure 9 shows crater volume as a function of impact angle. Crater major and minor axes, depth and volume increase with impact angle, confirming the simulation studies by Elbeshausen et al. (2009) and consistent with experimental work by Gault and Wedekind (1978) and Michikami et al. (2017). Gault and Wedekind (1978) found that crater volume is proportional to the normal component of the projectile momentum, \(V_{cr}\propto\sin\theta_{I}\). Elbeshausen et al. (2009) fit crater volume to a function
\[V_{cr}(\theta_{I})=V_{cr}(90^{\circ})\sin(\theta_{I})^{\alpha_{V}} \tag{2}\]
and found exponent \(\alpha_{V}\approx 0.89\) for their frictionless simulations. In Figure 9 we have plotted dotted black and dashed blue lines giving a sine dependence of crater volume on impact angle in the form used by Elbeshausen et al. (2009) and given in equation 2. The best fit (via least-squares minimization), shown with the dashed line, has \(V_{cr}(90^{\circ})=15.8\pm 0.9\) cm\({}^{3}\) and \(\alpha_{V}=0.73\pm 0.03\), with uncertainties based on the diagonals of the covariance matrix scaled by the variance of the residuals. We concur with Elbeshausen et al. (2009) that an exponent \(\alpha_{V}<1\) gives a better approximation for the dependence of crater volume on impact angle.
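A fit of this form is easy to reproduce from the values in Table 5. The sketch below applies a least-squares fit of equation 2 to the tabulated crater volumes; the weighting and error estimate may differ from those used for Figure 9, so the recovered parameters need not match the quoted values exactly.

```python
import numpy as np
from scipy.optimize import curve_fit

# Crater volumes (cm^3) versus impact angle (deg) from Table 5.
theta_I = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
V_cr = np.array([3.3, 7.0, 8.5, 13.2, 15.3, 14.6, 13.3, 14.7])

def volume_model(theta_deg, V90, alpha_V):
    """Equation 2: V_cr(theta) = V_cr(90 deg) * sin(theta)^alpha_V."""
    return V90 * np.sin(np.radians(theta_deg)) ** alpha_V

popt, pcov = curve_fit(volume_model, theta_I, V_cr, p0=[15.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print(f"V_cr(90 deg) = {popt[0]:.1f} +/- {perr[0]:.1f} cm^3, "
      f"alpha_V = {popt[1]:.2f} +/- {perr[1]:.2f}")
```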
We compute a volume for a conical shape \(V_{cone}=\frac{\pi}{3}a_{cr}b_{cr}d_{max}\) using the crater semi-major and semi-minor axes \(a_{cr}\), \(b_{cr}\) and maximum depth \(d_{max}\). The conical shape has sides with a constant slope, and a top surface with the area of an ellipse with major and minor axes the same as our craters. The volumes of the associated conical shape computed from our crater measurements are plotted with violet triangles as a function of impact angle in Figure 9. The volume of half an ellipsoid \(\frac{2\pi}{3}a_{cr}b_{cr}d_{max}\) with semi-axes \(a_{cr},b_{cr}\) and \(d_{max}\) is twice as large as \(V_{cone}\). Figure 9 shows that an estimate for the crater volume based on a conical shape is similar to the measured volumes. Such an ellipsoid gives a worse overestimate for the crater volume than that based on a conical shape. This is not surprising as the craters have regions with nearly constant surface slope (see Figure 5).
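The conical-shape comparison can be checked directly from Table 5; the sketch below computes \(V_{cone}\) for each impact angle from the tabulated major axis, axis ratio, and maximum depth (converted to cm), which should correspond to the violet triangles in Figure 9.

```python
import numpy as np

# From Table 5: major axis 2a_cr (mm), axis ratio a_cr/b_cr, maximum depth (mm).
theta_I  = [10, 20, 30, 40, 50, 60, 70, 80]
major_mm = np.array([56.6, 64.2, 62.0, 74.1, 77.9, 72.8, 72.7, 71.4])
ratio    = np.array([1.35, 1.18, 1.09, 1.22, 1.19, 1.04, 1.09, 1.08])
depth_mm = np.array([6.8, 9.5, 10.5, 14.3, 15.3, 14.7, 15.5, 16.7])

a_cr  = 0.5 * major_mm / 10.0   # crater semi-major axis (cm)
b_cr  = a_cr / ratio            # crater semi-minor axis (cm)
d_max = depth_mm / 10.0         # maximum crater depth (cm)

V_cone = np.pi / 3.0 * a_cr * b_cr * d_max   # volume of the conical shape (cm^3)
for th, v in zip(theta_I, V_cone):
    print(f"theta_I = {th:2d} deg: V_cone = {v:5.1f} cm^3")
```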
Ricochet velocity and angle are shown for impact angle \(\theta_{I}\leq 50^{\circ}\) in Figure 10. The velocity is near the impact
Figure 8: Major and minor axis crater lengths as a function of impact angle. These are measured from rim to rim and are plotted as red circles and green squares with an axis scale on the left in cm. Blue triangles show maximum crater depth, with respect to the undisturbed level surface prior to impact. Crater depth is in mm with axis scale on the right. These quantities all increase with impact angle.
Figure 7: Median slopes and crater axis ratio as a function of impact angle. The red bars show the uprange and downrange median values (with the axis scale on the left) for the crater slope computed along the crater major axis. A slope of 0 is a flat surface. Crater slope increases with impact angle and an asymmetry in slope is seen primarily at lower impact angles. Green pentagons show crater axis ratios with axis scale on the right. As expected, near-normal impacts have rounder craters.
Figure 9: Crater volume \(V_{cr}\) in cm\({}^{3}\) as a function of impact angle is shown with cyan crosses. The crater volume is measured from the crater profiles shown in Figure 5. With dotted and dashed lines, we show the function in equation 2 for crater volume with exponent \(\alpha_{V}\) values shown in the key. Violet triangles show the volume of a conical shape \(V_{cone}\) computed from the crater’s semi-major and minor axes and its maximum depth.
velocity at grazing impact angle and drops to near zero at \(\theta_{I}=50^{\circ}\). The Froude number for our impacts is high, so we expect ricochet to persist to larger impact angles than the \(\sim 30^{\circ}\) limit found at \(Fr\sim 15\) (Wright et al., 2020). Only for \(\theta_{I}\gtrsim 50^{\circ}\) does all the projectile energy go into crater formation, as at these higher impact angles the projectile does not ricochet. This is reflected in the major and minor axis lengths, crater depth, crater volume and efficiency, \(\pi_{V}\), which level off past \(\theta_{I}\sim 40^{\circ}\), as seen in Figures 8 and 9, above, and Figure 12, below. Crater volume and efficiency may be poorly fit by a curve proportional to \(\sin\theta_{I}\) in part because the projectile does not carry away momentum or energy at \(\theta_{I}>50^{\circ}\).
Using the ricochet velocity and angle we compute the change in the horizontal and vertical components of projectile momentum:
\[\Delta p_{x} =M_{p}v_{imp}\left[\cos(\theta_{I})-\frac{V_{Ric}}{v_{imp}}\cos( \theta_{Ric})\right]\] \[\Delta p_{z} =M_{p}v_{imp}\left[\sin(\theta_{I})+\frac{V_{Ric}}{v_{imp}}\sin( \theta_{Ric})\right]. \tag{3}\]
Here \(\Delta p_{x},\Delta p_{z}\) are the horizontal and vertical components of the momentum from the projectile imparted to the granular substrate upon impact. We compute \(\Delta p_{x},\Delta p_{z}\) using the ricochet velocity and angle measured for experiments where the projectile ricocheted (\(\theta_{I}\leq 50^{\circ}\)), with values listed in Table 5. These are plotted in Figure 11 in units of \(M_{p}v_{imp}\), the initial projectile momentum, and as a function of impact angle. We also plot the fraction of kinetic energy lost by the projectile to the substrate
\[\frac{\Delta E}{0.5M_{p}v_{imp}^{2}}=1-\frac{v_{Ric}^{2}}{v_{imp}^{2}}. \tag{4}\]
The momentum components and fraction of kinetic energy lost to the substrate are also listed in Table 5.
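Equations 3 and 4 can be evaluated directly from the ricochet measurements; a minimal sketch follows. The impact speed \(v_{imp}\) is a parameter of the experimental setup that is not restated in this section, so the value in the example call is only a placeholder.

```python
import numpy as np

def momentum_energy_partition(theta_I_deg, v_Ric, theta_Ric_deg, v_imp):
    """Evaluate equations 3 and 4.

    Returns Delta_p_x and Delta_p_z in units of M_p * v_imp, and the fraction
    of projectile kinetic energy lost to the substrate. Set v_Ric = 0 for
    impacts in which the projectile did not ricochet (theta_I > 50 deg).
    """
    th_I = np.radians(theta_I_deg)
    th_R = np.radians(theta_Ric_deg)
    dpx = np.cos(th_I) - (v_Ric / v_imp) * np.cos(th_R)  # horizontal, equation 3
    dpz = np.sin(th_I) + (v_Ric / v_imp) * np.sin(th_R)  # vertical, equation 3
    dE = 1.0 - (v_Ric / v_imp) ** 2                       # equation 4
    return dpx, dpz, dE

# Ricochet values for theta_I = 40 deg from Table 5; v_imp = 100 m/s is a placeholder.
print(momentum_energy_partition(40.0, v_Ric=7.8, theta_Ric_deg=28.0, v_imp=100.0))
```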
Figure 11 shows that for grazing impact angles, the vertical component of momentum imparted to the medium is similar to that estimated from the initial projectile's vertical velocity component (the red squares are near the red dotted line for \(\theta_{I}\leq 50^{\circ}\)). However, when the projectile ricochets, the horizontal component of momentum imparted to the medium is well below that estimated using \(\cos\theta_{I}\) (the blue dots are well below the blue solid line at a low impact angle). The projectile lost all of its kinetic energy to the substrate for \(\theta_{I}>50^{\circ}\), where it did not ricochet, and in that respect the energy loss resembles the crater volume as a function of impact angle plotted in Figure 9. However, the fraction of energy lost by the projectile does not drop as rapidly for \(\theta_{I}<50^{\circ}\) as the crater volume plotted in Figure 9, so energy scaling alone does not account for the dependence of crater volume upon impact angle. Our experiments are not in a hypervelocity regime, so at least some of the decrease in crater efficiency (\(\pi_{V}\); equation 1) at grazing angle compared to normal impacts is due to energy and momentum carried away by the projectile as it ricochets. Nevertheless, we do see a deviation from a sine function (equation 2 with \(\alpha_{V}=1\)) corresponding to a dependence on the z component of momentum, similar to that found by Elbeshausen et al. (2009).
In low velocity impacts the projectile's spin varies while it interacts with a granular medium (Wright et al., 2020). Some of the sensitivity of crater efficiency in our experiments to impact angle could be due to the angular momentum and rotational energy carried by a projectile that ricochets.
Figure 11: Change in horizontal and vertical momentum components of the projectile before and after ricochet. For impact angle \(\theta_{I}>50^{\circ}\) the projectile remained within the impact crater. The solid blue line and dotted red line show the initial horizontal and vertical components, respectively, of projectile momentum divided by the total projectile momentum \(M_{p}v_{imp}\). These are equal to \(\cos\theta_{I}\) and \(\sin\theta_{I}\). The solid blue circles and solid red squares show the horizontal and vertical components of the momentum change of the projectile, taking into account measurements of the projectile momentum after ricochet for \(\theta_{I}\leq 50^{\circ}\). As solid grey crosses, we plot the fraction of kinetic energy lost by the projectile.
Figure 10: Ricochet angle and velocity as a function of impact angle. The ricochet angle is shown with green squares and with an axis scale on the right. Velocity is shown with violet dots and with an axis scale on the left. Past an impact angle of \(50^{\circ}\), the projectile did not ricochet. Instead, the projectile remained in the crater or was embedded after impact. Ricochet velocity is high at grazing impact angles.
### The difference between the site of impact and the crater center
Figure 12 shows the distance between the impact site and crater center (denoted \(d_{ai}\)) with blue squares. This distance reaches a peak at \(\theta_{I}\sim 40^{\circ}\) and is lower at both higher and lower impact angles. The change in behavior might be related to the variation in the fraction of energy that is imparted to the medium due to ricochet. With red dots Figure 12 also shows crater efficiency \(\pi_{V}\) (defined in equation 1) which plateaus past \(\theta_{I}\sim 50^{\circ}\), supporting this connection.
A difference between the impact point and the center of the crater (as measured from the intersection of the crater's major and minor axes) was noted previously by Anderson et al. (2003) in high-velocity oblique experiments into sand. If a seismic source is generated at the impact point, then variations in both ejection angle and the distance of the resulting crater rim from the point of impact can be linked via a flow model that has azimuthal asymmetry in the strength of a propagating pulse (Anderson et al., 2004; Anderson and Schultz, 2006). A model with a migrating flow center (Anderson and Schultz, 2006) would be sensitive to the distance \(d_{ai}\) between the crater center and the site of the impact that we measured and show in Figure 12.
## 4 Subsurface seismic pulses
In this section, we discuss measurements from experiments using the accelerometers.
### Asymmetry in seismic pulse strengths
In Figure 13 we compare the strength of seismic pulses, which is measured from peak accelerations, downrange of
Figure 12: Crater efficiency (red dots, axis scale on left) and distance between impact point and crater center (blue squares, axis scale on the right) as a function of impact angle. Crater efficiency is the dimensionless quantity defined in equation 1. Past an impact angle of \(\theta_{I}=40^{\circ}\) the distance between the impact site and the center of the crater \(d_{ai}\) decreases to near zero for a normal impact. The distance \(d_{ai}\) is also low at grazing angles.
Figure 13: a) We show peak acceleration magnitudes as a function of impact angle. The left panel shows peak values for accelerometers located uprange of the impact site. The middle panel shows peak values for accelerometers located downrange of the impact site. The scale for the left and middle panels is on the left vertical axis. The right panel, with a vertical axis scale on the right, shows the ratio of downrange to uprange peak accelerations. The peak acceleration values are shown at 4 different locations with radius \(R\) and depth \(|z|\) listed in the key. b) Similar to a) except for peak velocity magnitudes. There is a strong asymmetry in pulse strengths, with pulse strengths much higher on the downrange side than on the uprange side of impact. This is particularly noticeable in the velocities of the shallow accelerometers. The scatter in the ratio is primarily due to scatter in the weaker accelerometer signals on the uprange side.
the impact site to those uprange of the impact site. We plot peak velocity and acceleration magnitudes as a function of impact angle for accelerometers at 4 different locations. Except for near-normal impacts, pulse peak heights, seen in acceleration (Figure 13a) and in velocity (Figure 13b), are higher on the downrange side than on the uprange side. The ratio of downrange to uprange peak velocity is particularly high, \(|v|_{pk,down}/|v|_{pk,up}\sim 5\), at low impact angle for the shallow accelerometers at \(|z|=3\) cm. For deeper accelerometers at \(|z|=5\) cm, the ratios of downrange to uprange peak velocity and peak acceleration reach a maximum at intermediate impact angles.
As shown in Figure 11, the horizontal component of momentum that is imparted to the medium is relatively low at grazing impact angles because the projectile ricochets. Nevertheless, the ejecta curtain and crater profiles are most asymmetric at a grazing impact angle. The ratio of downrange to uprange pulse strengths is highest at the shallower accelerometers, suggesting that the energy in the subsurface flow field is more strongly concentrated at shallower depths for grazing impacts than for near-normal impacts. The shock model by Kurosawa (2019) could predict such an effect, as at a shallow depth, post-rarefaction streamlines with a deflected velocity vector can converge onto locations that previously had radial streamlines, enhancing the pulse strength.
For different impact angles we measured small (less than 20%) variations in crater slope and axis ratio (as shown in Figure 7), ejecta angle (visible in Figures 2 and 3), and crater shape (as seen in Figure 4). However, here we find much larger (factors of 2 to 5) differences in the strengths of the uprange and downrange seismic pulses. The accelerometer positions are well outside the crater radius, so the asymmetry in pulse strength persists as the pulse travels.
Large (factor of 2) asymmetry in seismic stress was previously measured with piezoelectric sensors by Dahl and Schultz (2001) for oblique hypervelocity impacts into solid aluminum. Like Dahl and Schultz (2001), we see a significant asymmetry between uprange and downrange seismic pulse strengths in oblique impacts, though our experiments are into the sand and our impact velocity is lower than theirs. They measured pulses on either side of a solid aluminum block caused by a 6 km/s impact.
The momentum flux carried by a pulse depends on its velocity amplitude. Perhaps the uprange/downrange subsurface pulse height asymmetry seen in our oblique impact experiments is caused by the momentum direction of the projectile. This would imply that the vertically downward propagating pulse should be stronger than the laterally propagating one for a normal impact. Downward propagating pulses were measured to be about a factor of 2 stronger than laterally propagating pulses in lower velocity normal impact experiments into millet (Quillen et al., 2022), confirming this expectation. The strong uprange/downrange asymmetry in pulse strength seen in our oblique impact experiments suggests that the projectile momentum vector influences the subsurface flow field. Given that it may be difficult to infer or constrain the projectile direction from the crater shape, the strong subsurface pulse strength asymmetry is remarkable.
### Subsurface ray angles
We examine the subsurface flow field at impact angles of \(\theta_{I}=20\), 40, and \(60^{\circ}\). For these impact angles, we have many impact experiments giving us information at a number of subsurface accelerometer positions. From each accelerometer signal, the direction of the velocity at uprange and downrange positions (at \(\phi=0\) or \(\pi\)) is computed using the ratio of the \(R\) and \(z\) velocity components in cylindrical coordinates. Ray angles are shown with vectors in Figure 14. Each column is at a different time after impact and each row is at a different impact angle. The vector lengths scale with the pulse velocity in m/s. A gold arrow at the bottom of each panel indicates the 0.01 m/s scale. Points and vectors are shown at the position of each accelerometer. The horizontal axes show radius \(R\) but with positive values corresponding to downrange positions and negative values corresponding to uprange positions. The vertical axes show depth. For each accelerometer signal, we computed the maximum velocity amplitude \(|v|_{max}\) during the experiment. The colors of the vectors depend on the velocity divided by this maximum value. The color bar on the right relates color to pulse velocity. The projectile velocity direction is shown with a red arrow at the position of impact. The small blue arrows near the origin in the top row show the direction of the projectile after it ricocheted in the impact angle \(\theta_{I}=20^{\circ}\) impacts. The length of this arrow is scaled with respect to the red arrow showing projectile velocity prior to impact. The red arrow, showing the impact direction, is not on the same scale as the subsurface velocity vectors. We include a supplemental video, denoted ray_angles, which shows the ray angles evolving in time.
Note that at each impact angle, we used accelerometer data from 9 separate impact experiments (via Template 1 in Table 3) to construct Figure 14 as only 4 accelerometers were used in each individual impact experiment. We infer that experiments at the same impact angle are similar because neighboring accelerometers have similar pulse strengths and directions.
Figure 14 shows a strong asymmetry in the strength of subsurface motions. Pulse velocities are larger for more normal impacts, which is consistent with their larger crater volumes. At later times velocities are higher at shallower depths. As was true for normal impacts (Neiderbach et al., 2023), velocities are initially nearly radial (as shown in the leftmost column) and pointed upward at later times (as shown in the rightmost column). At later times (the rightmost column) the velocity is primarily high in the shallower accelerometers. The ray angles suggest that the pulse initially propagates radially, and then tilts upward. Streamlines change direction, tilting toward the surface.
This type of time-dependent flow field was predicted with a shock model by Kurosawa (2019). Figure 14 suggests that impacts in granular systems exhibit similar behavior. Velocity appears to accumulate near the surface at later times. Within the crater radius, the flow would eventually launch the ejecta curtain. The accelerometers in our experiments are outside the crater radius, so particles are not lofted off the surface past \(R\sim 3.5\) cm, instead the flow seen in Figure 14 later causes the surface to move and deform, as described by Neiderbach et al. (2023).
### The Maxwell Z-model
As the Maxwell Z-model (Maxwell, 1977) was used by Anderson et al. (2004) to describe the subsurface excavation flow field for oblique impacts, based on measured ejecta angles, we introduce it here. We will discuss this model in context with the downrange/uprange asymmetry in pulse peak strengths (mentioned previously in section 4.1) and the subsurface ray angles (shown in the previous section).
A Maxwell Z-model has a flow velocity
\[u_{r}(r,\vartheta,t) =\frac{a(t)}{r^{Z}}\] \[u_{\vartheta}(r,\vartheta,t) =\frac{a(t)}{r^{Z}}(Z-2)\frac{\sin\vartheta}{1+\cos\vartheta}, \tag{5}\]
where we have given velocity components in spherical coordinates \((r,\vartheta,\phi)\). Here \(\vartheta=0\) is along the negative \(z\) axis, below the surface. The velocity field satisfies \(\nabla\cdot\mathbf{u}=0\) so is incompressible. The prefactor \(a(t)\) specifies the time dependence of the velocity field.
Streamlines are labeled by the radius \(R_{s}\) at which they intersect the surface. The radius of a streamline as a function of \(R_{s}\) and \(\vartheta\) is
\[r(\vartheta,R_{s})=R_{s}(1-\cos\vartheta)^{\frac{1}{Z-2}}. \tag{6}\]
With exponent \(Z=2\), the flow is radial; the velocity vector \(\mathbf{u}\propto\hat{\mathbf{r}}\) where \(\hat{\mathbf{r}}\) is the radial unit vector.
At the surface where \(\vartheta=\pi/2\), the horizontal and vertical velocity components are
\[u_{r}(r,\pi/2,t) =\frac{a(t)}{r^{Z}}\] \[u_{z}=u_{\vartheta}(r,\pi/2,t) =(Z-2)u_{r}. \tag{7}\]
The ratio between \(u_{r}\) and \(u_{z}\) at the surface gives the ejecta angle
\[\tan\vartheta_{ej}=Z-2. \tag{8}\]
The Maxwell Z-model has the nice property that a single parameter \(Z\) gives a direct connection between subsurface flow, ejecta angle, and the radial decay of the flow field. This connection was leveraged by Anderson et al. (2004) to relate measured ejecta angles to subsurface flow models. Unfortunately, the Z-model does not take into account the time dependence of the flow. If the ejection
Figure 14: Subsurface velocity flow fields for different impact angles. Each column shows a different time after impact and each row shows a different impact angle. The length of the blue arrows shows the velocity magnitude at the position of each accelerometer. The scale of these arrows is shown with the small gold arrows on the bottom of each panel with a length corresponding to 0.01 m/s. The blue velocity vectors have a shade that depends on the velocity magnitude with a color bar on the right that gives the ratio of velocity with respect to the peak value (during the experiment) of each accelerometer signal. The impact point is shown with a red dot. The initial projectile momentum is indicated by the red arrows at the site of impact. Accelerometer placement positions are shown as black dots. The thick grey line on top of each panel shows the level substrate surface prior to impact. The velocities are higher on the downrange side (on the right) and at later times vectors increasingly point upward.
angle or the velocity angle at subsurface positions varies with time, the model must be modified (e.g., Anderson and Schultz, 2006).
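For concreteness, the sketch below implements the Z-model velocity components of equation 5, the streamline shape of equation 6, and the ejecta angle of equation 8 at a fixed instant; the prefactor \(a(t)\) is replaced by an arbitrary constant since only ratios and directions matter for this illustration.

```python
import numpy as np

def z_model_velocity(r, theta, Z, a=1.0):
    """Maxwell Z-model velocity components (equation 5).

    theta = 0 points straight down below the impact point, theta = pi/2 lies in
    the surface. The constant a stands in for the prefactor a(t) at one instant.
    """
    u_r = a / r ** Z
    u_theta = u_r * (Z - 2.0) * np.sin(theta) / (1.0 + np.cos(theta))
    return u_r, u_theta

def z_model_streamline(R_s, theta, Z):
    """Radius of the streamline that reaches the surface at R_s (equation 6)."""
    return R_s * (1.0 - np.cos(theta)) ** (1.0 / (Z - 2.0))

def ejecta_angle_deg(Z):
    """Ejecta angle from horizontal (equation 8), in degrees."""
    return np.degrees(np.arctan(Z - 2.0))

# For Z = 3 the model gives a 45 degree ejecta angle, and a streamline reaching
# the surface at R_s = 3.5 cm passes through r = 1.75 cm at theta = pi/3.
print(ejecta_angle_deg(3.0), z_model_streamline(3.5, np.pi / 3.0, 3.0))
```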
Equation 8 implies that the ejecta angle \(\vartheta_{ej}\) for a Maxwell Z-model is not sensitive to the prefactor \(a(t)\), only to the exponent \(Z\). Anderson et al. (2004) explored a Z-model with an azimuthally varying exponent:
\[Z(\phi)=Z_{0}(1+A_{\phi}\cos\phi). \tag{9}\]
Using equation 8, variations in the ejecta angle of 20% between \(\phi=0\) and \(\phi=\pi\), corresponding to up and downrange, would require a model amplitude \(A_{\phi}\sim 0.1\). How large a velocity asymmetry would this give? The ratio of the downrange to uprange radial velocity component is
\[\frac{u_{r}(r,\vartheta,\pi)}{u_{r}(r,\vartheta,0)} \sim\frac{r^{-Z_{0}(1-A_{\phi})}}{r^{-Z_{0}(1+A_{\phi})}}\] \[=r^{2Z_{0}A_{\phi}}.\]
For \(Z_{0}\sim 3\) and \(A_{\phi}\sim 0.1\) this gives \(\frac{u_{r}(r,\vartheta,\pi)}{u_{r}(r,\vartheta,0)}\propto r^{0.6}\). We expect that the ratio would increase with increasing radius. Due to the power-law dependence of the flow model on \(r\), radial variations exceed angle variations in the velocity vector, so we expect that the downrange to uprange ratio of velocity amplitude should approximately scale the same way as the downrange to uprange ratio of the radial velocity components. However, the ratio of peak velocity amplitudes, as shown in Figure 13, decreases with increasing radius, suggesting that the subsurface flow becomes more symmetric as it travels away from the impact, rather than more asymmetric with increasing radius. Plots similar to those shown in Figure 13, but using the peak radial velocity and acceleration components instead, showed the same behavior.
Note that in Figure 13 we have plotted peak acceleration and velocity amplitudes. However, these peaks do not occur at the same time when comparing two accelerometers at different positions in the same experiment. Furthermore, the peaks do not necessarily occur at the same time when comparing two accelerometers at the same position but from experiments at different impact angles. Unfortunately, the Maxwell Z-model does not take into account the time dependence of pulse propagation, making it more challenging to adopt it when subsurface pulse strength and direction are time-dependent (though see Kurosawa, 2019 who related shock structure to the subsequent excavation flow for impacts into solids).
In summary, the Maxwell Z-model predicts that flow velocity decays rapidly with distance from the impact point. As the exponent \(Z\) also determines the crater ejecta angle, a downrange/uprange asymmetry in the ejecta angle in oblique impacts can be interpreted in terms of azimuthal variation in the exponent \(Z\) (Anderson et al., 2004). As the associated flow field is likely to decay rapidly with distance \(r\) from the site of impact, we would expect a corresponding difference in the strength of uprange and downrange pulses. We find that downrange pulse peak velocities can be 2 to 5 times larger than uprange pulse peak velocities for oblique impacts (as shown in Figure 13), confirming this expectation. However, the pulse strength asymmetry seems to decay with distance, contrary to what would be expected with a simple variant of the Maxwell Z-model (described with equation 9).
### Decay of pulse peak velocities
In this section, we examine whether the rate that pulse strengths decay is different on the uprange and downrange sides. We use accelerometer signals for experiments with
Figure 15: Peak velocity amplitude as a function of distance \(r\) from the site of impact. Each panel shows experiments at a different impact angle, \(\theta_{I}=20\), 40, and \(60^{\circ}\) from top to bottom. Downrange data points are represented as purple squares while uprange data points are shown with gold triangles. Each point is a measurement from a single accelerometer. Power-law fits are shown with dashed lines in the color matching the points they fit. The coefficients and exponents of the power-law model are shown in the keys. The vertical scales are the same in all three panels. We find that there is no significant difference in the radial decay rate (given by the exponents of the fit) between uprange and downrange pulse peaks. There is no strong dependence of the radial decay rate of pulse strength on impact angle.
impact angles \(\theta_{I}=\) 20, 40, and \(60^{\circ}\), as these were covered with the largest number of accelerometer positions. As done in previous studies (e.g., Quillen et al., 2022; Neiderbach et al., 2023), we compute the magnitude of the velocity vector as a function of time for each accelerometer and then select the peak value. The position of each accelerometer is taken from the placements listed in Table 3, giving the distance \(r=\sqrt{R^{2}+z^{2}}\) of the accelerometer from the site of impact. The resulting peak velocities \(|v|_{pk}\) are shown in Figure 15 as a function of distance from the impact site. Each panel shows data for a different impact angle. The violet squares and gold triangles show velocities downrange and uprange, respectively, of the impact site.
To the data points shown in Figure 15 we fit power-law functions in the form \(|v|_{pk}(r)=Br^{\beta}\), to find coefficients \(B\) and \(\beta\) at each impact angle. Via least squares minimization, we fit uprange and downrange points separately. The best-fitting power-law curves are shown with dashed lines, with gold and violet lines giving uprange and downrange fits, respectively. The exponents \(\beta\) and scaling factors \(B\) for each fit are printed in the key on the lower left side of each panel. Uncertainties in the coefficients are estimated from the scatter in the points from the best-fitting line. Figure 15 shows that there is no significant difference between uprange and downrange pulse strength decay rates. The uprange and downrange exponents are similar at all three impact angles. The asymmetry in the pulse strengths is seen in the differences in the \(B\) coefficients of the fits. We find that the pulse strength is more strongly dependent upon the azimuthal angle than the pulse decay rate. This implies that the prefactor of a Maxwell Z-model for the flow should be dependent upon the azimuthal angle \(\phi\). As the ejecta angle also depends on the impact angle, an associated Maxwell Z-model might also require a \(\phi\) dependent exponent.
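The fit just described can be done in log space; the sketch below illustrates the procedure. The radii correspond to the \(|z|=3\) cm accelerometer positions in Table 3, but the peak velocities are synthetic values drawn around an \(r^{-3}\) trend, since the individual measurements behind Figure 15 are not tabulated here.

```python
import numpy as np

def fit_power_law(r_cm, v_pk):
    """Least-squares fit of |v|_pk(r) = B * r**beta in log-log space."""
    beta, logB = np.polyfit(np.log10(r_cm), np.log10(v_pk), 1)
    return 10.0 ** logB, beta

# Distances r = sqrt(R^2 + z^2) for the |z| = 3 cm positions in Table 3 (cm).
r_cm = np.array([6.71, 7.62, 8.54, 10.44, 11.40, 12.37])

# Synthetic peak velocities (m/s) scattered about an r^-3 power law.
rng = np.random.default_rng(0)
v_pk = 4.0 * r_cm ** -3.0 * (1.0 + 0.1 * rng.standard_normal(r_cm.size))

B, beta = fit_power_law(r_cm, v_pk)
print(f"B = {B:.1f}, beta = {beta:.2f}")
```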
The exponents measured from the fits are near -3. This is near but somewhat steeper than the -2.5 exponent predicted for the spherically symmetric propagation model by Quillen et al. (2022) (giving \(v_{pk}\propto r^{-2.5},a_{pk}\propto r^{-3}\)) and is shallower than the exponents (which are lower than -3.5) measured for the decay of surface particle velocity by Neiderbach et al. (2023) (listed in their Table 4). The -2.5 exponent is predicted for a momentum-conserving pulse that broadens with duration \(\Delta t\propto t^{1/2}\) as it travels and assuming a constant propagation velocity \(v_{P}\) (Quillen et al., 2022). With no pulse broadening a similar model would give \(v_{pk}\propto r^{-3}\), as found here, and would also give \(a_{pk}\propto r^{-3}\), which is consistent with normal impact experiments (Yasui et al., 2015; Matsue et al., 2020; Quillen et al., 2022).
We also examined the peak radial component of velocity \(v_{r}\) (in spherical coordinates) as a function of distance from impact \(r\). However, the scatter in these points was larger than that in \(|v|_{pk}\) (shown in Figure 15) so the fits to these points were less certain.
Figure 16: Cylindrical radial acceleration and velocity components as a function of time and for different impact angles. In each panel, we show an uprange and a downrange accelerometer at the same depth and radial distance from the impact point. The different color lines show different impact angles, with key in the upper right panel. The solid lines indicate downrange pulses while the dashed lines represent those uprange. To facilitate comparison between pulse shapes, peaks are aligned in each panel by offsetting the x axis so that peaks occur at \(t=0\) ms, and pulse profiles are offset vertically. The top two panels show accelerations and the bottom two panels show velocities. The left two panels show accelerometers at \(R=6\) cm and the right two panels show accelerometers at \(R=8\) cm.
Figure 17: Seismic pulse velocity and velocity direction as a function of time. a) In the left column we show velocity amplitude as a function of time from accelerometers at \(R=6\) cm (thick line in blue) and at \(R=10\) cm (thick line in red) with \(R\) the radius in cylindrical coordinates from the site of impact. The right column shows the velocity angle computed from the same accelerometers. An angle of zero corresponds to a horizontal direction and \(90^{\circ}\) corresponds to moving upward. The blue lines are slightly thicker and above the red ones in the left column. In the right column, the red lines are above the blue lines. Lines have shade related to the strength of the velocity amplitude. Here the accelerometers are downrange of the impact site and at a depth of 3 cm. The top row shows accelerometers from a single impact experiment at an impact angle of \(\theta_{I}=20^{\circ}\). In the middle and bottom rows \(\theta_{I}=40^{\circ}\) and \(60^{\circ}\), respectively. The small arrows on the top right panel show the direction of motion at three angles. b) Similar to a) except for accelerometers uprange of the impact site. c) Similar to a) except the accelerometers are at a depth of 5 cm and are uprange.
### Pulse shape and duration, and ray angles
In this subsection, we examine the time-dependent behavior of the subsurface motions.
Figure 16 shows the cylindrical radial component of acceleration \(a_{R}\) and velocity \(v_{R}\) as a function of time from accelerometers at the same depth (5 cm) and for experiments at different impact angles. The top row shows \(a_{R}\) and the bottom row shows \(v_{R}\). The left column shows signals from accelerometers at cylindrical radius \(R=6\) cm whereas those in the right column are at 8 cm. Solid lines and dotted lines show quantities from accelerometers downrange and uprange of the impact site, respectively. Signals are offset vertically (by 20 m/s\({}^{2}\) for acceleration and by 0.02 m/s for velocity) so that they can be compared, with impact angles labeled in the key in the upper right panel. To facilitate comparison between pulses, we also shifted the horizontal positions of each pulse so that the peaks occur at \(t=0\).
The pulse durations, measured from the accelerations in Figure 16, range from 0.3 to 0.65 ms (FWHM). These durations are short in comparison to the time for crater excavation. The time for a pressure wave to cross the projectile is \(\lesssim 6\,\mu\)s (based on the elastic modulus of a few GPa for PLA plastic; Farah et al. 2016) and is more than an order of magnitude shorter than our pulse durations. Pulse duration is approximately consistent with a seismic source time \(t_{s}\sim R_{cr}/v_{P}\sim 0.7\) ms estimated from a crater radius of 3.5 cm and a propagation speed of \(v_{P}=50\) m/s (see discussions by Gudkova et al. 2011; Quillen et al. 2022 on possible scaling relations for pulse duration). There is no strong dependence of pulse duration on impact angle. Pulse strength is weaker at a lower impact angle, consistent with the dependence of crater volume and crater efficiency on impact angle (as shown in Figures 9 and 12), which were measured from the crater profiles.
Figure 16 shows that uprange pulses tend to be weaker than downrange pulses, as expected from the asymmetries in the peak values shown in Figure 13. We attribute the difference between the uprange and downrange accelerometer in the normal impact to errors in the accelerometer placement with respect to the site of impact.
Material is launched off the surface if the vertical component of acceleration exceeds 1g. In the top two panels of Figure 16, we see that peak pulse accelerations are well above 1g, even though the accelerometers are outside the crater radius where surface material does not join the ejecta curtain. The crater excavation time can be estimated from the ejecta curtain snapshots in Figure 2 and exceeds 20 ms. This length of time is at least an order of magnitude longer than the seismic pulse durations. We infer that the seismic pulse acceleration must be reduced near the surface. Indeed, for normal impacts, accelerometers placed on the surface showed weaker motions that took place during a longer time interval, compared to those even a few cm deep. The longer duration surface motions outside the crater were closer to the crater excavation time (Neiderbach et al., 2023).
In Figure 17 we compare pulse shapes and directions as a function of time from pairs of accelerometers that are recorded in the same impact experiment. In each subfigure, each row shows a different impact angle, from top to bottom \(\theta_{I}=20\), 40, and 60\({}^{\circ}\). In Figure 17 a) and b) accelerometers are at a depth of 3 cm, and in Figure 17 c) and d) accelerometers are deeper, at a depth of 5 cm. The left columns show velocity amplitude and the right columns show the direction of the velocity vector, with an angle of 0\({}^{\circ}\) corresponding to a horizontal vector. Vector directions are illustrated in the top right panel of Figure 17a. The vector directions are predominantly above horizontal and also above the direction of \(\hat{r}\) from the site of impact, which corresponds to -27 and -40\({}^{\circ}\) at \(R=6\) cm and \(|z|=3\) and 5 cm, respectively, and -17 and -27\({}^{\circ}\) at \(R=10\) cm and \(|z|=3\) and 5 cm, respectively. In Figure 17 the horizontal time axes are the same in all panels. Thicker blue lines show accelerometers at \(R=6\) cm and thinner red lines show those at \(R=10\) cm from the site of impact. Lines are shaded according to the strength of the velocity magnitude.
A comparison between Figure 17 a) and c), and between b) and d), shows that pulses have longer durations nearer the surface. A comparison between the top and bottom panels of a) shows that near the surface, pulses are particularly long for grazing impacts. This suggests that the induced flow field is shallower for the grazing impacts than for the nearly normal ones. In all subfigures, by comparing \(R=6\) cm pulses with those at \(R=10\) cm that are more distant from the site of impact and at the same depth, we see that pulses are smoother (less triangular) further from the site of impact. Smoothing of pulse shape was interpreted in terms of a diffusive model for momentum by Quillen et al. (2022), but only in the context of pulses propagating in one dimension.
Examination of any of the subfigures in Figure 17 shows that the pulse peaks arrive later at \(R=10\) cm than at \(R=6\) cm. However, the delay between the pulse peaks is longer at a shallower depth (comparing a to c, or b to d). The speed of pulse travel may be slower at shallower depths. For these pulses, the pressure amplitude is only higher than hydrostatic pressure within about 6 cm of the site of impact and a depth of about 5 cm (as estimated via \(P_{pk}\sim\rho_{s}|v|_{pk}v_{P}\); Quillen et al. 2022). A pulse travel speed of about 50 m/s in the same medium was estimated by Quillen et al. (2022). At 50 m/s it takes only about 1 ms to travel about 4 cm (the distance between the accelerometers), and this is approximately consistent with the delays between the deeper accelerometers seen in c) and d). Comparison of Figure 17 a) and b), or c) and d), implies that there is no significant difference between uprange and downrange pulse arrival times.
The velocity ray angles (shown in the right columns in each subfigure) tend to increase with time, but primarily for accelerometers more distant from the site of impact.
This tendency for upward flow at later times is also visible in Figure 14 showing ray angles. Differences in velocity directions among different radial distances seem larger on the downrange side than on the uprange side of impact.
A Maxwell Z-model gives a fixed angle for flow at each subsurface location. The variations in ray angle as a function of time shown in the right columns of Figure 17 imply that a static Maxwell Z-model would not describe the directions. To match the ray angles with a variant of the Maxwell Z-model, a time-dependent exponent or flow center (or possibly both) would be required. While the Maxwell Z-model roughly characterizes flow directions in excavation flows, it would be difficult to modify it to match the time-dependent phenomena seen here that are also a function of impact angle.
In Figure 17 vertical scales for the left panels are not the same because the pulse heights differ. The pulse shapes in the different subfigures are similar, suggesting that the primary difference between uprange and downrange pulses and between flows generated by different impact angles is in the pulse amplitudes. Perhaps a model for the time-dependent flow field for a single impact could be extended to match a larger set of oblique impacts by adjusting the amplitude of the velocity field.
In Figure 18, we illustrate the phenomena seen in our experiments. The projectile launches a compressive pulse that propagates away from the site of impact. Because the medium exerts a drag force on the projectile, the pulse is stronger on the downrange side than on the uprange side, as illustrated by the different hues. The arcs are darker red on the downrange side indicating that the pulse is stronger on this side. As the pressure from the pulse is released, the velocity in the pulse changes direction and points upward. The streamlines are shown in green. When the pulse reaches the surface, it launches ejecta, but with more ejecta launched on the downrange side than on the uprange side. Because the pulse is stronger on the downrange side, excavation takes longer on that side. The resulting crater has a center that is offset from the site of impact. Outside the crater radius, the flow is similar but upon reaching the surface, there is plastic deformation rather than ejecta launched (Neiderbach et al., 2023). Seismic energy is dissipated at the surface and not reflected back into the medium.
The illustration of Figure 18 does not show a depth or time dependence for the flow field center, which also could be present. Nor does it capture complex near-surface behavior where short-duration pulses broaden to better match the crater excavation time (Neiderbach et al., 2023). This illustration combines phenomena seen in our experiments and in measurements of ejecta curtains of oblique impacts (Anderson et al., 2003, 2004; Anderson and Schultz, 2006) with ideas from the shock and flow field model by Kurosawa (2019) for an impact into a solid.
### Scaling subsurface pulse amplitude with projectile momentum
In the top panel of Figure 19 we plot peak velocity amplitudes as a function of distance from impact site \(r\) for the impact experiments with impact angle \(\theta_{I}=20,40,60\) and \(80^{\circ}\). Except for the impacts at \(80^{\circ}\), these are the same values shown in Figure 15 but they are plotted together on the same plot. Uprange accelerometer locations are shown with dots or circles, whereas downrange locations are shown with squares. A key for all panels is shown in the middle panel. The x-axis shows the log of \(r/R_{cr}\) where we used \(R_{cr}=3.5\) cm based on the crater radius for a normal impact.
In the top panel, we plot \(|v|_{pk}/C_{0}\) where the normalization factor
\[C_{0}=\frac{M_{p}v_{imp}}{\rho_{s}2\pi R_{cr}^{3}} \tag{10}\]
is a dimensional estimate for the peak pulse velocity at the crater radius that is based on the projectile momentum (equation 50 by Quillen et al. 2022 with dimensionless coefficient \(B_{\text{eff}}=1\)). In the middle panel of Figure 19, we normalize the peak velocities with
\[C(\Delta\mathbf{p})=\frac{1}{\rho_{s}2\pi R_{cr}^{3}}\times\begin{cases} \Delta p_{z},&\text{if uprange}\\ \Delta p_{z}+\Delta p_{x},&\text{if downrange}\end{cases}. \tag{11}\]
This replaces the projectile momentum magnitude with a function that depends on \(\Delta p_{x}\) and \(\Delta p_{z}\), the components of the momentum imparted by the projectile into the substrate (see equation 3). These are listed in Table 5 and take into account the momentum that is carried away if the projectile ricochets. For a normal impact \(M_{p}v_{imp}=\Delta p_{z}\) and
Figure 18: An illustration of an excited pulse and an excavation flow caused by an oblique impact. A compressive seismic pulse is launched by the impact, but it is stronger on the downrange side than on the uprange side. This phase is shown in red, orange and yellow and with darker hues representing a stronger pulse amplitude. As the pressure is released, the direction of flow tilts toward the surface, and the flow resembles a Maxwell Z-model. This phase of the flow is shown in blue and green. When the pulse reaches the surface it launches ejecta but more ejecta is launched on the downrange side. The resulting crater is lopsided with the crater center offset from the site of impact.
the normalization coefficient reduces to that used in equation 10; \(C(\Delta\mathbf{p})=C_{0}\). The ansatz made in choosing the normalization factors in equation 11 is that the strength of the uprange pulses and flow field is set by the vertical component of the momentum that is imparted to the medium, but the downrange pulse strengths are influenced by both horizontal and vertical components. In the bottom panel of Figure 19, we renormalize with a factor that depends on the projectile momentum and impact angle (and does not take into account possible ricochet)
\[C^{\prime}(\theta_{I})=C_{0}\times\begin{cases}\sin\theta_{I},&\text{if uprange} \\ \sin\theta_{I}+\cos\theta_{I},&\text{if downrange}\end{cases}. \tag{12}\]
Our simple choices of normalization reduce the scatter in the second and third panels in Figure 19.
The dotted line in the second panel of Figure 19 shows the line
\[|v|_{pk}(r)=4C(\Delta\mathbf{p})\left(\frac{r}{R_{cr}}\right)^{-3}. \tag{13}\]
The dotted line in the third panel is similar, showing
\[|v|_{pk}(r)=4C^{\prime}(\theta_{I})\left(\frac{r}{R_{cr}}\right)^{-3}. \tag{14}\]
The exponent is slightly steeper than the -2.5 exponent used by Quillen et al. (2022). A line with a slope of -2.5 would be about as good a match to the points shown here if the scaling factor is lower, 2.4 instead of 4. The standard deviations of residuals from the dotted lines in the second and third panels of Figure 19 are equivalent. The coefficient of 4 in equations 13 and 14 is similar to that used for matching normal impacts Quillen et al. (2022). Equations 13 and 14 give an estimate for subsurface peak pulse velocity up or downrange from the impact site. Here \(R_{cr}\) is the crater radius for a normal impact and can be estimated using crater scaling laws.
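Equation 14 can be packaged as a simple estimator of the subsurface peak pulse velocity. In the sketch below, equations 10 and 12 supply the normalization; all physical inputs are passed in explicitly, and the numbers in the example call are placeholders rather than the parameters of our experiments.

```python
import numpy as np

def peak_pulse_velocity(r, theta_I_deg, M_p, v_imp, rho_s, R_cr, downrange=True):
    """Estimate |v|_pk(r) from equation 14, using C_0 from equation 10 and the
    angle-dependent factor C' from equation 12. Use consistent (e.g., SI) units.
    R_cr is the crater radius of an equivalent normal impact."""
    C0 = M_p * v_imp / (rho_s * 2.0 * np.pi * R_cr ** 3)      # equation 10
    factor = np.sin(np.radians(theta_I_deg))                   # uprange, equation 12
    if downrange:
        factor += np.cos(np.radians(theta_I_deg))              # downrange, equation 12
    return 4.0 * C0 * factor * (r / R_cr) ** -3                # equation 14

# Placeholder inputs: a 0.2 g projectile at 100 m/s into a 1500 kg/m^3 substrate
# with a 3.5 cm normal-impact crater radius, evaluated 10 cm downrange.
print(peak_pulse_velocity(r=0.10, theta_I_deg=40.0, M_p=2.0e-4, v_imp=100.0,
                          rho_s=1500.0, R_cr=0.035, downrange=True))
```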
If the projectile does not ricochet, it is straightforward to estimate the momentum transferred to the medium at the time of impact \(\Delta\mathbf{p}\). However, at grazing angles, equation 13 is not useful unless the momentum of the projectile after ricochet can be predicted. A number of recent studies have focused on whether or not projectiles ricochet (Wright et al., 2020, 2022; Miklavcic et al., 2022, 2023), but only a few studies have measured the fraction of momentum carried away by the projectile when it does ricochet (e.g., Wright et al., 2022). Nevertheless, we suspect that projectile ricochet affects the strength and angular dependence of the excited subsurface seismic disturbance, particularly in the velocity regime studied here. In the absence of models for ricochet, equation 14 serves instead, and it matches the peak velocities as a function of \(r\) about as well. A model that predicts pulse peak velocities as a function of azimuthal angle and depth as well as \(r\) might further reduce the scatter.
Figure 19: Peak velocity amplitude as a function of distance from the site of impact. In all panels, we plot points for experiments at impact angle \(\theta_{I}=20,40,60\) and \(80^{\circ}\). Squares show downrange accelerometer locations and circles show uprange locations. In the top panel, the peak velocity is normalized with the momentum of the projectile using equation 10. In the middle panel, the peak velocity is normalized with a factor that depends upon the components of momentum imparted to the medium and takes into account the momentum carried away by ricochet (given in equation 11). In the bottom panel, the peak velocity is normalized with a factor that only depends on the initial projectile momentum and impact angle. For normalization in the middle and bottom panels, we assume that the uprange pulse velocity peak only depends on the z-component of momentum but the downrange peak velocity depends on both horizontal and vertical momentum components. The scatter is reduced in the middle and bottom panels. The dotted lines show a power law with an exponent of -3.
## 5 Formation of the Sky Crater on Arrokoth
(486958) Arrokoth (formerly 2014 MU69) is a bilobate cold classical Transneptunian object that was observed as part of the New Horizons extended mission (Spencer et al., 2020). We discuss the response of the body to an impact that formed Arrokoth's largest crater, the Sky Crater (formerly "Maryland") on Weeyo, the smaller of Arrokoth's two lobes. The impact that formed the Sky crater is estimated to have occurred at a speed of about 0.5 km/s with a projectile diameter of about 1 km (McKinnon et al., 2022; see their Figure 3). The body's mass density is not well constrained but a low value of 500 kg m\({}^{-3}\) is considered plausible (e.g., Spencer et al., 2020). The low density suggests that the body is porous (Spencer et al., 2020). Images of the crater suggest that it has a conical shape, typical of craters in granular media. As Arrokoth could be a granular system that experienced an impact at a velocity similar to our experiments, we discuss the formation of the Sky Crater in the context of what we have learned from studying oblique impacts.
Stress associated with the impact that formed the Sky Crater could have caused the narrow neck between Weeyo and Wenu, the two lobes, to break (Hirabayashi et al., 2020); however, McKinnon et al. (2022) argued that this was unlikely. By taking into account the direction-dependent strength and decay rate of an oblique impact-generated seismic pulse, we can independently estimate the stress in Arrokoth's neck to reexamine this issue.
Following Keane et al. (2022), because of Arrokoth's unusual bilobate shape, a Cartesian coordinate system is used to describe its features. The z-axis is aligned with Arrokoth's spin axis with a positive direction defined by the right-hand rule. The x-axis is defined to be perpendicular to the z-axis and aligned with Arrokoth's long axis and with a positive x pointed toward the larger lobe. The surface slope at locations on Arrokoth's surface is defined as the angle between the surface normal vector and the acceleration vector due to self-gravity and rotation (centrifugal acceleration). In the Sky Crater, the surface slope is higher on the -y side of the crater, as shown in Figure 8 by Keane et al. (2022), and illustrated with the red region in Figure 20.
We consider the possibility that variations within the Sky crater's slope are related to impact angle. As steeper crater slopes tend to be found on the uprange side of an oblique impact (as discussed in section 3.2), the high slopes on the -y side of Sky crater would suggest that the bearing direction of the impactor was in approximately the +y direction, as shown with the green arrow in Figure 20 (and it came from the -y direction), giving an impact that decreased the body's spin.
Crater slope is also sensitive to the substrate's initial slope (Takizawa and Katsuragi, 2020). Because Arrokoth has a flattened shape (is quite thin in the \(\pm z\) direction), points with high \(|z|\) values, in the center of the smaller lobe, are downhill of the points at the outer edge of this lobe. If the projectile comes from the upslope direction, it bulldozes material in front of it, making the downrange crater side steeper than the uprange side (Takizawa and Katsuragi, 2020). If the crater slope asymmetry is due to the initial substrate slope, because the -y side of the crater is steeper, the projectile could have originated from the shallower and higher side, coming from the +y side, agreeing with the dotted blue arrow for the projectile shown in Figure 20. Independent of whether the steeper side of the crater is due to the initial slope of the surface or because the impact was oblique, we suspect that the projectile came along a tangential direction.
The distance of the crater center from the neck is about \(L=11\) km. The crater is about 7 km in diameter with a depth of about 0.5 km (Spencer et al., 2020). McKinnon et al. (2022) estimate the momentum of the projectile to be \(\Delta p=5\times 10^{13}\) kg m s\({}^{-1}\). They estimate the travel time to the neck as \(\Delta t=L/v_{P}=100\) s, using a travel speed of 100 m/s (Cooper et al., 1974; McKinnon et al., 2022). They estimate the stress on the neck as \(\sigma\sim\frac{\Delta p}{\Delta tA_{N}}\sim 15\) kPa, where \(A_{N}=30.5\) km\({}^{2}\) is the cross-sectional area at the neck. This gives a stress higher than a few kPa, which could exceed the tensile or compressive strength at the neck. Based on the total kinetic energy that could be imparted to Weeyo, McKinnon et al. (2022) argued that the impact would only crush material at the neck.
We consider a seismic pulse propagating from the crater toward Arrokoth's neck. In our experiments, we see little energy reflected from the surface; rather, pulse energy seems to go into launching ejecta or into plastic deformation outside the crater radius (Neiderbach et al., 2023). Thus
Figure 20: Illustration of the Sky crater on Arrokoth. One side of the crater, shown in red, has steeper surface slopes than the other, as shown in Figure 8 by Keane et al. (2022). If the steeper side of the crater is due to the impact angle, we suspect the projectile came from the right. If the steeper side is due to the initial surface slope, the projectile could have come from the left. We follow Keane et al. (2022) for the Cartesian coordinate system.
the pressure in the pulse would primarily depend upon the distance from the site of impact. We do not expect focusing of seismic energy at Arrokoth's neck. We approximate the vertical and horizontal components of projectile momentum transferred to Weeyo as \(\Delta p_{y}=\Delta p_{z}\sim\Delta p/\sqrt{2}\), assuming a \(45^{\circ}\) impact angle. Using Eq. 11 and Eq. 13, we estimate that the pulse peak velocity magnitude at the neck region is \(|v|_{pk}\sim 6.7\) cm/s. We estimate the peak pressure at the neck from the peak velocity, \(P_{pk}\sim\rho_{s}v_{P}v_{pk}\) (Quillen et al., 2022). Applying the P-wave pulse travel speed of \(v_{P}=100\) m/s adopted by McKinnon et al. (2022), we obtain a pulse peak pressure \(P_{pk}\) at the neck of \(\sim 3\) kPa. This is lower than that estimated by McKinnon et al. (2022) because we have taken into account attenuation, based on the decay rates of pulses seen in our experiments. However, this stress value exceeds both tensile and compressive strength estimates for granular systems (Brisset et al., 2022). The impact could have caused deformation throughout Weeyo if it was a granular system.
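As a small worked check of the numbers quoted above, the following sketch reproduces the two neck-stress estimates. The peak velocity of 6.7 cm/s is taken from the text (obtained via Eqs. 11 and 13) rather than recomputed, since equation 11 depends on quantities defined earlier in the paper.

```python
# Worked check of the Sky-crater neck-stress estimates quoted above.
rho_s = 500.0    # assumed bulk density of Arrokoth [kg m^-3]
v_P = 100.0      # adopted P-wave pulse travel speed [m/s]
v_pk = 0.067     # pulse peak velocity at the neck from Eqs. 11 and 13 [m/s]

# Peak pressure in the attenuated pulse, P_pk ~ rho_s * v_P * v_pk.
P_pk = rho_s * v_P * v_pk
print(f"pulse peak pressure at the neck: {P_pk / 1e3:.1f} kPa")   # about 3 kPa

# For comparison, the momentum-flux estimate of McKinnon et al. (2022),
# which neglects attenuation: sigma ~ dp / (dt * A_N).
dp = 5e13        # projectile momentum [kg m/s]
L = 11e3         # distance from the crater center to the neck [m]
A_N = 30.5e6     # neck cross-sectional area [m^2]
sigma = dp / ((L / v_P) * A_N)
print(f"momentum-flux estimate: {sigma / 1e3:.0f} kPa")           # about 15 kPa
```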
Because of the crater's slope asymmetry, we can estimate the bearing of the impactor that made the Sky Crater. However, it is more difficult to estimate the impact angle because the slopes are likely to be sensitive to the static and dynamic angles of repose of the granular medium, and we do not know how the angles of repose of Arrokoth's material compare to those of our laboratory sand. Furthermore, the crater shape would be affected by inhomogeneity in the material properties, and Arrokoth's crater could be shallower than those in our lab because it has had more time to slump and relax and because much of the ejecta curtain escaped (Mao et al., 2021) rather than being added to the crater rim.
We support the finding by McKinnon et al. (2022) that the Sky Crater impact would at most partly crush or plastically deform material in Arrokoth's neck. We have primarily discussed the formation of the Sky Crater to illustrate some of the issues involved in interpreting subsurface pulse propagation in astronomical bodies in context with what our experiments show in granular systems under laboratory conditions.
## 6 Summary and Discussion
We have carried out a series of experiments on oblique impacts of airsoft BBs into sand at a projectile velocity of about 104 m/s. Even at grazing impact angles as low as \(10^{\circ}\), the craters are nearly round. Evidence that a crater was formed by an oblique impact with an impact angle below \(45^{\circ}\) can be inferred from variations in crater slope, with an uprange surface slope about \(10^{\circ}\) higher than that downrange. An oblique impact could also be inferred from variations in rim height and ejecta distribution, as the downrange side has a higher rim and a thicker ejecta blanket. The sensitivity of ejecta angle to azimuthal direction is consistent with studies tracking ejecta (Anderson et al., 2004; Anderson and Schultz, 2006). While differences in crater slope, volume, and ejecta are subtle, seismic pulses detected below the surface with accelerometers exhibit a remarkably large asymmetry between uprange and downrange positions. We find that the pulse peak velocity has a ratio of downrange to uprange amplitude as large as 5. This ratio is particularly large at shallow depths.
We confirm prior studies (Elbeshausen et al., 2009; Takizawa and Katsuragi, 2020) finding that crater efficiency (as measured from the crater volume) is lower for grazing impacts than normal impacts, but in our experiments, this is in part due to the energy carried away by the projectile as it ricochets at impact angles below \(\theta_{I}\sim 50^{\circ}\).
We considered a Maxwell Z-model for the velocity peak values at different subsurface locations. However, we find that time-dependent angle and velocity amplitude variations make it difficult for a simple modification of the Maxwell Z-model to match our subsurface pulse properties. Pulse shape (as a function of time) and direction are remarkably similar among experiments (shown in Figure 17), suggesting that it may be possible to rescale subsurface velocity amplitudes and relate time-dependent models for flow at one impact angle to others. We succeeded in reducing scatter in plots of peak velocity amplitude versus distance from impact site via a normalization factor that depends on the projectile momentum and its direction.
Our picture for the subsurface impact-excited seismic pulse and the generated excavation flow in granular systems, shown in Figure 18, is remarkably similar to that developed for impact craters in solids, e.g., Melosh (1985); Kurosawa (2019). However, analogues of shocks and rarefaction waves are challenging to describe in a granular system. The transition between elastic behavior typical of a solid, which we describe as subsurface pulse propagation, and granular flow, which we describe as an excavation flow, could perhaps be described with a non-linear continuum model (e.g., Agarwal et al., 2021), with discrete element simulations (e.g., Miklavcic et al., 2023; Sanchez et al., 2022), or with modifications of semi-analytical models developed for impacts into fluids (Lherm and Deguen, 2023).
Impact-generated motions in granular systems are particularly challenging to model as they encompass elastic wave propagation through a granular medium and granular flow. Numerical models for them are potentially powerful as they can cover low-g environments that are not accessible in our lab. Ricochet in granular systems likely scales to low-g environments using the dimensionless Froude number (Wright et al., 2022; Miklavcic et al., 2023). In contrast, we suspect that pulse propagation in granular systems is sensitive to the pressure amplitude in the pulse (Quillen et al., 2022; Sanchez et al., 2022). In low-strength materials, crater excavation should scale with the dimensionless \(\pi_{2}\) parameter (Housen and Holsapple, 2011) which is related to the Froude number. The transition between subsurface seismic pulse propagation and crater excavation combines these two different physical scaling scenarios. Recent simulations of granular systems have been successful at predicting ricochet (Miklavcic et al., 2022,
2023) and propagation of seismic pulses through granular columns (Sanchez et al., 2022). Similar numerical studies, developed to match subsurface response, could improve upon our understanding of how pulses travel and how flow is driven by these pulses in granular systems so that we can extend and improve software used for modeling impacts on asteroids and other bodies in the Solar system (e.g., Raducan et al. 2022).
The largest uncertainties in our experiments are due to differences between the intended and actual accelerometer location with respect to the impact site. Future experiments could improve upon techniques for accelerometer emplacement and increase the number of accelerometers so that the full 3-dimensional flow field in a single experiment can be characterized. Of particular interest for future experiments are regions where the flow is least understood. This might be where the transition from elastic phenomena to granular flow takes place, near and just below the surface where the transition from narrow subsurface seismic pulses becomes a slower crater excavation flow. As astrophysical objects are not homogeneous, future studies could also explore subsurface motions excited by impacts in polydisperse granular media.
## Acknowledgements
We are grateful to Mokin Lee for discussing his work on simulations of oblique impacts.
This work has been supported by NASA grant 80NSSC21K0143.
## Data availability and Supplemental Videos
Datasets and analysis scripts related to this article can be found at [https://github.com/URGranularLab/Oblique_impact](https://github.com/URGranularLab/Oblique_impact), hosted at GitHub.
|
2302.05493 | Enhancing Quantum Algorithms for Quadratic Unconstrained Binary
Optimization via Integer Programming | To date, research in quantum computation promises potential for outperforming
classical heuristics in combinatorial optimization. However, when aiming at
provable optimality, one has to rely on classical exact methods like integer
programming. State-of-the-art integer programming algorithms can compute strong
relaxation bounds even for hard instances, but may have to enumerate a large
number of subproblems for determining an optimum solution. If the potential of
quantum computing realizes, it can be expected that in particular finding
high-quality solutions for hard problems can be done fast. Still, near-future
quantum hardware considerably limits the size of treatable problems. In this
work, we go one step into integrating the potentials of quantum and classical
techniques for combinatorial optimization. We propose a hybrid heuristic for
the weighted maximum-cut problem or, equivalently, for quadratic unconstrained
binary optimization. The heuristic employs a linear programming relaxation,
rendering it well-suited for integration into exact branch-and-cut algorithms.
For large instances, we reduce the problem size according to a linear
relaxation such that the reduced problem can be handled by quantum machines of
limited size. Moreover, we improve the applicability of QAOA, a parameterized
quantum algorithm, by deriving optimal parameters for special instances which
motivates a parameter estimate for arbitrary instances. We present numerous
computational results from real quantum hardware. | Friedrich Wagner, Jonas Nüßlein, Frauke Liers | 2023-02-10T20:12:53Z | http://arxiv.org/abs/2302.05493v3 | # Enhancing Quantum Algorithms for Maximum Cut via Integer Programming
###### Abstract
To date, quantum computation promises potential for future speed-ups in combinatorial optimization, when compared to classical integer programming algorithms that - next to determining strong relaxations - typically also include some clever enumerative part. Integer programming methods can efficiently compute strong bounds even for large instances however, may have to enumerate a large number of subproblems for determining an optimum solution. If the potential of quantum computing realizes, it can be expected that in particular searching large solution spaces can be done fast. However, near-future quantum hardware considerably limits the size of treatable problems and is also prone to noise. In this work, we go one step into integrating the potentials of quantum and classical techniques for integer optimization. We propose a quantum-classical hybrid algorithm for the maximum cut problem on weighted graphs. The algorithm relies on a classical linear programming relaxation based on odd-cycle inequalities. For instances that are too large to be solved on available quantum machines, we reduce the problem size according to the solution of the linear programming relaxation. As a result, the reduced problem can be solved by quantum computers of limited size. Returned solutions are extended to feasible solutions of the original instance. Moreover, we improve the applicability of QAOA, a well-known parameterized quantum algorithm by deriving optimal parameters for special instances of the weighted maximum cut problem which motivates a parameter estimate for arbitrary instances. We present numerous computational results from real quantum hardware for instances motivated by physics with up to 100 nodes which exceeds the available quantum hardware capacities. Although all considered instances can be solved easily by classical computers, they provide a proof-of-principle for the proposed method when quantum hardware improves.
**Keywords:** Integer Programming, Combinatorial Optimization, Quantum Computation
## 1 Introduction
Mixed-integer programming looks back upon a long history of successfully developed methods and algorithms. Although the problem class is NP-hard, even large instances can be solved to global optimality within reasonable time by modern algorithms and implementations. Nonetheless, there exist practically relevant instances which exceed the capabilities of state-of-the-art solvers. Here, recent progress in quantum computation promises a potential advantage. Although it is unclear whether this promise will be realized and many questions remain open at the moment, quantum computation is a highly relevant topic that is currently studied in many research groups. To date, quantum hardware offers only small memory and is prone to noise. However, should large-scale and reliable quantum hardware become available, a speed-up in searching large solution spaces can be expected. Apart from solving relaxations, enumeration is also a building block of mixed-integer programming
methods which typically use branch-and-bound algorithms. The need to enumerate a large number of subproblems can become a weakness in classical integer programming. The latter performs very well on instances where upper and lower bounds are strong as branching can then be avoided as much as possible. In this work, we take a step towards combining the potential of both worlds by providing a hybrid algorithm for the maximum cut problem (MaxCut). The algorithm solves the well-studied cycle relaxation of MaxCut in order to shrink the problem graph such that it can be handled by quantum hardware. The proposed method benefits from decades of developing sophisticated (integer) linear programming techniques for MaxCut which we briefly summarize here.
**Linear programming and MaxCut.** Given a weighted graph, MaxCut asks for a partition of the nodes such that the weight of connecting edges is maximized. MaxCut is one of Karp's 21 NP-complete problems [25]. Thus, polynomial time algorithms are known for special cases only, cf. e.g. [16, 13, 29]. The work of Barahona et al. [4] paved the way for a long series of successful applications of linear and semi-definite programming methods for MaxCut. For a comprehensive book, we refer the interested reader to [11]. A major reason for this success is the development of sophisticated branch-and-bound and branch-and-cut methods that allow the solution of even large MaxCut relaxations, cf. e.g. [2, 3, 27, 33, 5, 24, 32, 9]. These relaxations are typically strong, i.e. they yield a tight upper bound on the optimal cut value, making them valuable for branch-and-cut algorithms. Strong relaxations of MaxCut have proven useful even for problems with additional constraints ([8]).
**Quantum computation and MaxCut.** Significant technological progress in quantum computation (QC) hardware has been achieved in the last decade, improving both digital (cf. e.g. [1], [20]) and analog quantum computers (cf. e.g. [6]). The availability of quantum hardware has driven researchers to seek practical quantum advantage in various fields of application. One of the most prominent applications of QC is combinatorial optimization ([18]), although clear advantages have not yet been shown ([23]). Here, commonly used algorithms are the quantum approximate optimization algorithm (QAOA) in digital QC ([12]) and quantum annealing (QA) in analog QC ([19]). Although, from a theoretical point of view, both algorithms are applicable to a broad range of combinatorial optimization problems, research is mainly focused on quadratic unconstrained binary optimization (QUBO), which is equivalent to MaxCut (cf. [22, 3, 10]). The reason is that many current quantum hardware platforms, both digital and analog, have a natural connection to QUBO. Therefore, the usual approach is to model the problem of interest as a QUBO problem ([30]). It is expected that, even if QC achieves experimental advantages, such restrictions due to quantum hardware will persist.
**Our contribution.** In this work, we integrate both quantum and classical computation for MaxCut in a hybrid algorithm. The main idea is that the former can produce high-quality solutions on instances of limited size, while the latter can compute strong relaxations for large instances. More specifically, we present an algorithm to reduce the number of variables in MaxCut by imposing correlations between them. Those correlations are derived from the well-known cycle relaxation. The reduced problem is then solved on near-term quantum hardware. Since quantum algorithms in general return sub-optimal solutions, no optimality guarantee can be given. However, depending on the application, this can be a viable trade-off for large instances. Additionally, we improve the applicability of the quantum algorithm used in our experiments - QAOA for weighted MaxCut - by deriving optimal parameters for triangle-free, regular graphs with weights following a binary distribution. Furthermore, this leads to a new estimate for good parameter values for arbitrary instances.
We present experimental results from actual quantum hardware, showing the applicability of the proposed method on example instances inspired by spin glass physics. The quantum processor in our experiments has 27 qubits of which we use at most 10 due to noise effects. This limits the size of the reduced problem to a maximum of 10 vertices. Due to the hybrid algorithm that first reduces the instance size correspondingly, we solve instances with up to 100 nodes within 60 s of
total computation time. For quantum computing, these instances exceed the currently available resources. We emphasize that all instances considered in this work can be solved easily by purely classical methods, which can handle problems with over 10,000 vertices in less than a minute, cf. [9]. Owing to the limitations of quantum hardware, larger instances would not give deeper insights into the quantum part since the algorithm outcome would mainly be determined by the classical part. Our experiments thus yield a proof-of-principle, encouraging the application of the proposed method when quantum hardware improves.
**Related work.** In the context of QC, a similar size-reduction procedure has been proposed in [7]. There, however, the motivation is quite different. The authors reduce large-scale problems by iteratively solving them approximately by QC in order to reach regimes where classical exact methods can be applied.
In the history of linear programming approaches for weighted MaxCut, size reduction by fixing variables was already applied in the early work of Barahona et al. in [2]. More recent techniques for fixing variables were developed in [26] and [32]. However, the authors focus on calculating _persistent_ correlations, that is, correlations that are provably part of an optimal solution. Furthermore, shrinking graph problems has been successfully applied in [13, 28, 29, 5].
**Structure.** The remainder of this paper is organized as follows: In Section 2 we formally introduce the MaxCut problem and QAOA. Section 3 describes our algorithm in detail. In Section 4, we derive the novel result on optimal QAOA-parameters. Section 5 presents various experimental results. We conclude with a summary and indicate further directions of research in Section 6.
First, we need to introduce some prerequisites necessary for the upcoming sections.
## 2 Preliminaries
**MaxCut models.** Given an undirected graph \(G=(V,E)\) and a vertex subset \(W\subseteq V\), the edge set \(\delta(W):=\{uv\in E\mid u\in W,v\not\in W\}\) is called a _cut_ of \(G\). For edge weights \(w\in\mathbb{R}^{|E|}\), the weight of a cut \(\delta(W)\) is defined as \(\sum_{e\in\delta(W)}w_{e}\). The MaxCut problem asks for a cut of maximum weight.
An edge subset \(C=\{\,v_{0}v_{1},v_{1}v_{2},\ldots,v_{k}v_{0}\,\}\subseteq E\) is called a _cycle_ if \(v_{i}\neq v_{j}\) for \(i\neq j\). Clearly, a cut and a cycle coincide in an even number of edges. Algebraically, this observation can be modeled by the so-called _odd-cycle inequalities_. If \(C\) is a cycle and \(x\in\{\,0,1\,\}^{|E|}\) is the edge incidence-vector of a cut, it holds
\[\sum_{e\in Q}x_{e}-\sum_{e\in C\setminus Q}x_{e}\leq|Q|-1\quad \forall Q\subseteq C,\ |Q|\ \text{odd}\,.\]
In fact, the odd-cycle inequalities for all cycles \(C\) are not only necessary but also sufficient to define a cut. Thus, a widely used integer linear programming formulation (cf. e.g. [2, 3, 24, 32]) of MaxCut is
\[\max\ \sum_{e\in E}w_{e}x_{e} \tag{1a}\]
\[\text{s.t.}\ \sum_{e\in Q}x_{e}-\sum_{e\in C\setminus Q}x_{e}\leq|Q|-1\quad\forall Q\subseteq C,\ |Q|\ \text{odd},\ \forall C\subseteq E\ \text{cycle} \tag{1b}\]
\[0\leq x_{e}\leq 1\quad\forall e\in E \tag{1c}\]
\[x_{e}\in\{0,1\}\quad\forall e\in E. \tag{1d}\]
The _cut polytope_ is the convex hull of all cut incidence-vectors,
\[P_{\text{CUT}}\coloneqq\text{conv}\{x\in\mathbb{R}^{|E|}\mid(\text{1b})-(\text{1d})\}\,.\]
The model (1a)-(1c) is called the _cycle relaxation_ of MaxCut. In general, a solution to the cycle relaxation yields an upper bound on the optimal cut value. However, it is known that the cycle
relaxation has integer optimal solutions for all weights \(w\in\mathbb{R}^{|E|}\) if and only if \(G\) has no \(K_{5}\) minor, cf. [4]. Although the MaxCut problem is in general NP-hard, optimizing the cycle relaxation can be done in polynomial time via _odd-cycle separation_. First, the model consisting only of (1a) and (1c) is optimized. Given an optimal solution \(x^{*}\), the odd-cycle separation algorithm decides whether \(x^{*}\) satisfies all odd-cycle inequalities. If not, it returns a violated one, which is added to the model, and the procedure is repeated. Otherwise, \(x^{*}\) is optimal for the cycle relaxation. Barahona and Mahjoub [3] give a polynomial-time algorithm for odd-cycle separation based on shortest paths. Typically, one seeks odd-cycle inequalities that belong to _chordless_ cycles as they define facets of the cut polytope, cf. [4, 24].
For complete graphs, the odd-cycle inequalities take the form
\[x_{ij}-x_{jk}-x_{ki} \leq 0\quad\forall i,j,k\in V\text{ pairwise different} \tag{2a}\] \[x_{jk}+x_{ki}+x_{ij} \leq 2\quad\forall i,j,k\in V\text{ pairwise different}. \tag{2b}\]
A key ingredient for our algorithm is that a solution to the cycle relaxation can be computed efficiently. We thus use an optimum relaxation solution to reduce the size of the MaxCut instance such that it can be handled by near-term quantum computers. To this end, we iteratively identify the two vertices incident to an edge with a single super-vertex whenever the optimum relaxation value for this particular edge is close to either \(0\) or \(1\). This process of vertex identification is known as _shrinking_, cf. [28, 5]. Details are explained in Section 3.
Current quantum computation for combinatorial optimization mainly focuses on QUBO problems. A QUBO problem with \(n\) variables can be mapped to an equivalent MaxCut problem on \(n+1\) vertices (cf. [22, 3, 10]). Thus, we mainly explain our method for the MaxCut problem, keeping in mind that the transformation to a QUBO problem could be applied as well. A QUBO formulation of MaxCut is
\[\max\,C(x)=\sum_{\{i,j\}\in E}w_{ij}(x_{i}+x_{j}-2x_{i}x_{j}) \tag{3a}\] \[\text{s.t. }x_{i}\in\{0,1\}\quad\forall i\in V. \tag{3b}\]
We use this formulation in the quantum part of our experiments.
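For later reference, the following minimal sketch evaluates the cut value \(C(x)\) of (3a) for a given vertex labeling, using networkx as in our implementation; the function name and the toy graph are purely illustrative.

```python
import networkx as nx

def cut_value(G, x):
    """Cut value C(x) from Eq. (3a): an edge {i, j} contributes w_ij exactly
    when its endpoints receive different labels, since then
    x_i + x_j - 2*x_i*x_j = 1."""
    return sum(d['weight'] for i, j, d in G.edges(data=True) if x[i] != x[j])

# Tiny example: a 4-cycle with +/-1 weights.
G = nx.cycle_graph(4)
nx.set_edge_attributes(G, {(0, 1): 1, (1, 2): -1, (2, 3): 1, (3, 0): 1}, 'weight')
print(cut_value(G, {0: 0, 1: 1, 2: 1, 3: 0}))  # edges (0,1) and (2,3) are cut -> 2
```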
**QAOA.** QAOA is a quantum-classical hybrid algorithm, originally proposed by Farhi et al. in [12]. Since then, QAOA has received great attention, which has led to the development of more sophisticated versions and variants, see e.g. [7] and [15]. QAOA computes approximate solutions of arbitrary unconstrained binary optimization problems defined by a cost function \(C:\{0,1\}^{n}\to\mathbb{Q},\ x\mapsto C(x)\). The goal is to find an \(x^{*}\) maximizing the cost function.
QAOA is a parameterized algorithm with real-valued parameters \(\boldsymbol{\gamma}=(\gamma_{1},...,\gamma_{p})\) and \(\boldsymbol{\beta}=(\beta_{1},...,\beta_{p})\). The hyper-parameter \(p\), called _depth_, controls the complexity of the algorithm. QAOA prepares a discrete probability distribution \(P_{\boldsymbol{\gamma},\boldsymbol{\beta}}(x)\) over the solutions \(x\in\{0,1\}^{n}\). Running QAOA once draws a single sample from the distribution. The distribution \(P_{\boldsymbol{\gamma},\boldsymbol{\beta}}(x)\) is parameter-dependent, and one seeks parameters that yield high probabilities for high-cost solutions. Usually, this is done by classically maximizing the expectation value
\[F(\boldsymbol{\beta},\boldsymbol{\gamma})=\sum_{x\in\{0,1\}^{n}}P_{ \boldsymbol{\gamma},\boldsymbol{\beta}}(x)C(x). \tag{4}\]
\(F(\boldsymbol{\beta},\boldsymbol{\gamma})\) is estimated by an average over a finite sample
\[\langle C\rangle(\boldsymbol{\beta},\boldsymbol{\gamma})=\frac{1}{N}\sum_{i=1} ^{N}C(x_{i})\, \tag{5}\]
where \(N\) is the total number of samples and \(x_{i}\) is the \(i\)-th sample.
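To illustrate how the estimate (5) is obtained in practice, the sketch below builds a depth-1 QAOA circuit with Qiskit and averages the cut values of the returned samples, reusing the `cut_value` helper sketched after (3a)-(3b). Sign conventions for the phase-separator angles differ between references, so the angles used here are an assumption and not necessarily identical to the implementation detailed in Appendix B; the measurement counts are assumed to come from running the circuit on a simulator or hardware backend.

```python
from qiskit import QuantumCircuit

# cut_value(G, x) is the helper defined in the sketch after Eqs. (3a)-(3b).

def qaoa_depth1_circuit(G, gamma, beta):
    """Depth-1 QAOA circuit for MaxCut on graph G (one qubit per vertex)."""
    idx = {v: k for k, v in enumerate(G.nodes)}
    qc = QuantumCircuit(len(idx))
    qc.h(range(len(idx)))                       # uniform superposition
    for i, j, d in G.edges(data=True):          # phase separator (one common sign choice)
        qc.rzz(gamma * d['weight'], idx[i], idx[j])
    qc.rx(2 * beta, range(len(idx)))            # mixer rotation on every qubit
    qc.measure_all()
    return qc

def average_cut(G, counts):
    """Sample average (5) from a measurement-counts dictionary {bitstring: n}."""
    nodes = list(G.nodes)
    total = sum(counts.values())
    avg = 0.0
    for bits, n in counts.items():
        x = {v: int(b) for v, b in zip(nodes, reversed(bits))}  # Qiskit bit order
        avg += n * cut_value(G, x)
    return avg / total
```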
In the experiments, we use QAOA with depth \(p=1\) on the QUBO model (3a)-(3b). In Appendix B, we give a quick recap on the implementation details of QAOA. Having introduced the necessary prerequisites, we now describe the proposed quantum-classical algorithm in more detail.
## 3 Algorithm description
In this section, we describe the proposed hybrid algorithm for MaxCut problems. It can be divided into four major steps. First, _correlations_ between vertex pairs are computed from an optimum odd-cycle relaxation solution. Here, the closeness of a relaxation variable to an integral value is interpreted as a tendency of the corresponding vertex pair to lie in equal or opposite partitions in an optimal cut. Second, the problem size is reduced by imposing correlations, that is, vertex pairs with large absolute correlations are identified. Third, the shrinked problem is solved by QAOA. Finally, a feasible solution to the original problem is reconstructed by undoing the shrinking operations appropriately.
###### Computing Correlations.
To reduce the problem size of an instance too large to be solved by current quantum hardware, the algorithm relies on correlations. A correlation between a vertex pair quantifies the tendency of the pair to lie in equal or opposite partitions in an optimal cut. More formally, for a subset \(S\subseteq V\times V\) of vertex pairs, correlations are a set \(\{\,b_{ij}\mid ij\in S\,\}\) where \(b_{ij}\in[-1,1]\). Correlations are called _optimal_ if there is an optimal cut \(\delta(W)\) such that \(b_{ij}=1\) (\(b_{ij}=-1\)) if \(i\) and \(j\) lie in equal (opposite) partitions in \(\delta(W)\). In general, the closeness of \(b_{ij}\) to \(1\) (\(-1\)) is interpreted as the tendency of \(i\) and \(j\) to lie in equal (opposite) partitions.
In principle, any method to deduce correlations can be used in the algorithm. However, we compute correlations from a solution \(x^{*}\) to the odd-cycle relaxation by
\[b_{ij}\coloneqq 1-2x^{*}_{ij}\in[-1,1]. \tag{6}\]
It is well known and can also be seen in our numerical experiments, that relaxation solutions indeed often resemble correlations from an optimal integer solution.
###### Shrinking.
We reduce problem size by identifying vertex pairs which have a large absolute correlation. This process is illustrated in Fig. 1. First, we describe the process of shrinking a single pair of vertices. To this end, let \(b_{ij}\) be a correlation and define
\[\sigma\coloneqq\operatorname{sign}(b_{ij})\,.\]
If \(\sigma=1\) (\(\sigma=-1\)), we enforce \(i\) and \(j\) to lie in equal (opposite) partitions. Solving MaxCut on \(G=(V,E)\) with this additional constraint is equivalent to solving MaxCut on a graph \(G^{\prime}=(V^{\prime},E^{\prime})\), where \(V^{\prime}=V\setminus\{i\}\). For this reduction, edge weights need to be adjusted. For \(u\in V\), denote by \(\mathcal{N}(u)\coloneqq\{\,v\in V\mid uv\in E\,\}\) the neighborhood of \(u\). For all \(k\in\mathcal{N}(i)\), define new weights by
\[w^{\prime}_{jk}\coloneqq\begin{cases}\sigma w_{ik}&\text{ if }jk\not\in E\\ w_{jk}+\sigma w_{ik}&\text{ if }jk\in E\,\end{cases}\]
Figure 1: Sketch for vertex shrinking. (a): Vertices \(i\) and \(j\) are to be identified, where \(\sigma\in\{-1,1\}\) defines whether \(i\) and \(j\) lie in equal or opposite partitions. Vertex \(k\) is a neighbor of \(i\). (b): In the shrinked MaxCut instance, vertex \(j\) is a super-vertex containing \(i\) and \(j\) with adjusted edge weights. In the case where edge \(kj\) is not present in (a), it is constructed as shown in (b) with \(w_{kj}=0\).
compare Fig. 1. All other edge weights remain unchanged. Vertex \(j\) now represents a super-vertex containing vertices \(i\) and \(j\). Multiple edges are replaced by a single edge. Thus, the reduced MaxCut instance is defined on \(G^{\prime}=(V^{\prime},E^{\prime})\) with
\[E^{\prime}=\left(E\cup\{jk:k\in\mathcal{N}(i)\}\right)\setminus\left\{ik:k\in \mathcal{N}(i)\right\}\,,\]
and \(V^{\prime}=V\setminus\{i\}\). Any cut in \(G^{\prime}\) can be translated to a cut in \(G\) if \(\sigma\) is known.
This shrinking process is iterated until a target problem-size is reached. We shrink in descending order of the absolute values \(|b_{ij}|\). If two vertices to be shrinked have already been identified to the same super-vertex in a previous iteration, the shrinking step is skipped. For the mapping from a solution of the shrinked problem back to a solution of the original instance, it is necessary to keep track of the vertex identifications.
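A minimal sketch of a single shrink step, following the weight update above and Fig. 1, is given below (networkx, illustrative names). The constant contribution of the fixed edge \(ij\) to the cut value is dropped, since it only shifts the objective and does not affect which cut of the shrinked graph is optimal; the bookkeeping needed to undo the identifications is also omitted here.

```python
import networkx as nx

def shrink_pair(G, i, j, sigma):
    """Identify vertex i into the super-vertex j.

    sigma = +1 enforces equal partitions for i and j, sigma = -1 opposite
    partitions.  Edge weights are stored in G[u][v]['weight'].
    """
    H = G.copy()
    for k in list(H.neighbors(i)):
        if k == j:
            continue
        w_ik = H[i][k]['weight']
        if H.has_edge(j, k):
            H[j][k]['weight'] += sigma * w_ik      # w'_jk = w_jk + sigma * w_ik
        else:
            H.add_edge(j, k, weight=sigma * w_ik)  # w'_jk = sigma * w_ik
    H.remove_node(i)  # removes i together with all its incident edges
    return H
```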
In this work, we use QAOA for solving the shrinked problem. However, any suitable method for this task can in principle be substituted in our algorithm.
The overall algorithm has three desirable properties which can easily be verified. First, it returns an optimal solution if shrinking is performed with optimal correlations and the shrinked problem is solved to optimality. Second, when shrinking with arbitrary correlations, the cost of the returned solution can only increase with the size of the shrinked problem, provided the shrinked problem is solved to optimality. Third, for a fixed shrinked problem, the cost of the returned solution increases with the solution quality of the shrinked problem.
Having described the classical algorithmic framework, we now turn to the quantum part.
## 4 QAOA-Parameter Estimate for Weighted MaxCut
We apply depth-1-QAOA to solve the shrinked MaxCut problem. The solution quality returned by QAOA heavily depends on the parameters \(\gamma,\beta\). Therefore, deriving good parameters is crucial for its success. The authors of [34] give an analytical expression for the expectation value \(F(\gamma,\beta)\) as defined in (4) for depth-1-QAOA applied to unweighted MaxCut.
In this section, we extend their results to efficiently derive good parameters for depth-1-QAOA, when applied to weighted MaxCut. The following statements might serve as a starting point for a further classical parameter optimization. For all instances considered in this work, however, the parameter estimate performs well enough to be used without any further parameter optimization.
Next, generalizing from [34], we state the result for weighted MaxCut.
**Lemma 4.1**.: _Let \(G=(V,E)\), \(w\in\mathbb{R}^{|E|}\) be a weighted graph. Let \(\gamma,\beta\in\mathbb{R}\) and \(F(\gamma,\beta)\) be defined as in (4). Further, for \(u,v\in V\), let \(\mathcal{N}_{u}(v)\) be the set of neighbours of \(v\), excluding \(u\) and denote by \(\Lambda(u,v)\) the set of common neighbours of \(u\) and \(v\). Then it holds_
\[F(\gamma,\beta)=\sum_{uv\in E}f_{uv}(\gamma,\beta), \tag{7}\]
_where_
\[f_{uv}(\gamma,\beta) =w_{uv}\Bigg{[}\frac{1}{2}+\frac{1}{4}\sin(4\beta)\sin(\gamma w_ {uv})\left(\prod_{s\in\mathcal{N}_{v}(u)}\cos(\gamma w_{us})+\prod_{t\in \mathcal{N}_{u}(v)}\cos(\gamma w_{vt})\right)\] \[-\frac{1}{2}\sin^{2}(2\beta)\sum_{\begin{subarray}{c}N\subseteq \Lambda(u,v)\\ |N|=1,3,5,\ldots\end{subarray}}\prod_{s\in\mathcal{N}_{v}(u)\setminus N}\cos (\gamma w_{us})\prod_{t\in\mathcal{N}_{u}(v)\setminus N}\cos(\gamma w_{vt}) \prod_{r\in N}\sin(\gamma w_{ur})\sin(\gamma w_{vr})\Bigg{]}. \tag{8}\]
Proof.: See Appendix C.
An often studied class of instances consists of weights that are chosen following a binary distribution in \(\{-a,a\},\ a>0\). Then, for specific graph topologies, maximizers of \(F(\gamma,\beta)\) can be derived analytically from Lemma 4.1, as we show next.
**Corollary 4.2**.: _For triangle-free, \(d\)-regular graphs with weights taking values in \(\{-a,a\},\ a>0\), (7) is maximized for_
\[\gamma=\frac{1}{a}\arctan\left(\frac{1}{\sqrt{d-1}}\right)\,\quad\beta= \frac{\pi}{8}. \tag{9}\]
Proof.: Using \(\Lambda(u,v)=\emptyset\), \(|\mathcal{N}_{v}(u)|=|\mathcal{N}_{u}(v)|=d-1\) and \(w_{ij}\in\{-a,a\}\), (8) simplifies to
\[f_{uv}(\gamma,\beta) =w_{uv}\Bigg{[}\frac{1}{2}+\frac{1}{2}\sin(4\beta)\sin(\gamma w_{ uv})\cos^{d-1}(\gamma a)\Bigg{]} \tag{10}\] \[=\frac{w_{uv}}{2}+\frac{a}{2}\sin(4\beta)\sin(\gamma a)\cos^{d-1 }(\gamma a)\.\]
By differentiation w.r.t. \(\beta\) and \(\gamma\), it is easily verified that (9) maximizes (10). Since the maximizers (9) do not depend on \(uv\), they also maximize (7).
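A quick numerical sanity check of the maximizer (9) is given below: a grid search over the nontrivial part of (10), \(\sin(4\beta)\sin(\gamma a)\cos^{d-1}(\gamma a)\), recovers the analytical values; \(a\) and \(d\) are arbitrary example values.

```python
import numpy as np

a, d = 2.0, 3
gammas = np.linspace(0.0, np.pi / (2 * a), 2001)
betas = np.linspace(0.0, np.pi / 2, 2001)
GG, BB = np.meshgrid(gammas, betas, indexing='ij')
g = np.sin(4 * BB) * np.sin(GG * a) * np.cos(GG * a) ** (d - 1)
ig, ib = np.unravel_index(np.argmax(g), g.shape)
print(gammas[ig], np.arctan(1 / np.sqrt(d - 1)) / a)  # numerically close
print(betas[ib], np.pi / 8)                           # numerically close
```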
Although possibly not always being the best choice, Corollary 4.2 motivates the following parameter guess for arbitrary weighted graphs:
\[\bar{\gamma}=\frac{1}{\bar{a}}\arctan\left(\frac{1}{\sqrt{\bar{d}-1}}\right)\,\quad\bar{\beta}=\frac{\pi}{8}. \tag{11}\]
Here, \(\bar{a}\) is the mean of absolute weight values,
\[\bar{a}\coloneqq\frac{1}{|E|}\sum_{uv\in E}|w_{uv}|\,\]
and \(\bar{d}\) is the average node degree. For triangle-free, regular graphs with weights in \(\{-a,a\}\), (11) reduces to (9).
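A minimal sketch of computing the estimate (11) from a weighted networkx graph is given below; it assumes edge weights are stored in the 'weight' attribute and that the average degree exceeds one. The function name is illustrative.

```python
import numpy as np

def qaoa_parameter_estimate(G):
    """Parameter guess (11) for an arbitrary weighted graph G."""
    a_bar = np.mean([abs(d['weight']) for _, _, d in G.edges(data=True)])
    d_bar = 2 * G.number_of_edges() / G.number_of_nodes()   # average node degree
    gamma = np.arctan(1.0 / np.sqrt(d_bar - 1.0)) / a_bar   # requires d_bar > 1
    beta = np.pi / 8
    return gamma, beta
```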
As mentioned above, our numerical experiments show a good performance of parameters (11) such that we use them without further optimization. This reduces the runtime of QAOA on quantum hardware significantly compared to the standard approach where a classical parameter optimization via e.g. gradient descent is performed. Here, repeated estimation of \(F(\gamma,\beta)\) is necessary, which is not the case when keeping parameters fixed.
Having introduced the algorithmic framework, we are now ready to discuss experimental results from our implementations.
## 5 Experimental Results
In this section, we present various computational results for the proposed algorithm. All implementations are done in Python. We use the open-source quantum-software development kit _Qiskit_ [31] for quantum circuit construction, ideal quantum simulation as well as quantum hardware communication. For graph operations we use the package _networkx_ [17]. Integer models and relaxations are solved via the Python interface of the solver _Gurobi_ [14]. Quantum hardware experiments are performed on the quantum backend _ibmq_ehningen_ [21], which has 27 superconducting qubits. However, in our experiments we use at most 10 qubits since the influence of noise becomes prohibitively large for higher qubit numbers.
### QAOA Parameter Prediction via (11)
In this section, we evaluate the performance of the quantum part of our algorithm, which is a depth-1-QAOA with predetermined parameters as stated in (11). We compare the quality of these parameters to the best possible parameter choice. As a performance metric, we measure the average value of the produced cut size on different weighted MaxCut instances. For each instance, we evaluate the quantum algorithm on an ideal device, i.e. we numerically evaluate \(F(\gamma,\beta)\) via
(8). Additionally, we perform experiments on the quantum hardware. Here, we evaluate (5), with a sample size of \(N=1024\) and \(C\) as defined in (3a). Parameters \((\gamma,\beta)\) are chosen from a grid on \([0,\pi/2]\times[0,\pi/2]\) with step size \(0.1\). Limiting the parameter search space in this way is justified by symmetry relations in QAOA, cf. [36].
Owing to the limitations of quantum hardware, we restrict the size of our test instances to the minimum that still exhibits the desired characteristics. Thus, all instances can be solved by inspection. We consider two sets of instances. The first set is constructed solely to investigate the quality of the predetermined parameter choice (11). Here, we construct instances fulfilling all or only some of the assumptions under which these parameters are provably optimal for an ideal quantum device. Recall from Corollary 4.2 that these assumptions are:
1. The graph is regular.
2. The graph is triangle-free.
3. The weights are chosen from the set \(\{-a,a\}\) for some \(a>0\).
Instances from the first set are depicted in Fig. 2. All graphs have only 4 vertices in order to keep the influence of noise in quantum hardware small. As topologies, we choose a ring, a star, a ring with a chord, and a complete graph. Weights are drawn either uniformly at random from \(\pm 1\) or from a normal distribution. These weight distributions are motivated by spin glass physics, cf. e.g. [27].
The second set contains MaxCut instances which result from shrinking a \(2\times 3\) grid with \(\pm 1\) weights, an instance considered in the next section, where we evaluate the combination of shrinking and QAOA.
In Table 1, results are summarized. Instance names of the first set correspond to sub-figures in Fig. 2. Instance names of the second set are of the form "\(\mathrm{2x3}\mathrm{g}ns\)". Here, \(n\) denotes the number of vertices in the shrinked graph and \(s\) marks whether the cycle relaxation was used for shrinking ("c") or random edges were shrinked ("r"), see Section 5.2 for details. If random shrinking and cycle-relaxation-shrinking led to the same instance, "rc" is used. For each instance, we mark which of assumptions (i)-(iii) are met. Furthermore, we measure the relative deviation of \(F(\bar{\gamma},\bar{\beta})\) from the true maximum of \(F(\gamma,\beta)\),
\[\frac{\max F(\gamma,\beta)-F(\bar{\gamma},\bar{\beta})}{\max F(\gamma,\beta)- \min F(\gamma,\beta)}\in[0,1]\.\]
Thus, a value of \(0\) means that indeed the true maximum is hit, whereas \(1\) means that we hit the minimum instead. Accordingly, for the real quantum experiment, we measure
\[\frac{\max\langle C\rangle(\gamma,\beta)-\langle C\rangle(\bar{\gamma},\bar{ \beta})}{\max\langle C\rangle(\gamma,\beta)-\min\langle C\rangle(\gamma,\beta )}\in[0,1]\.\]
As expected, the deviation of \(F(\bar{\gamma},\bar{\beta})\) from the true optimal value of \(F(\gamma,\beta)\) increases when the instance violates more assumptions from (i)-(iii). The maximum observed deviation is \(10\%\) on instance f which violates all assumptions. From an integer-programming point of view, \(10\%\) might seem a large deviation. However, we stress that we do not compare single solution values but expectation values as QAOA is a randomized algorithm. The best solution in a reasonably-sized sample will be (much) better than the expected value. In fact, in all experiments we also sampled an optimal solution.
The same qualitative behavior is observed for \(\langle C\rangle(\bar{\gamma},\bar{\beta})\), measured with real quantum hardware. Quantitatively, the deviations from quantum hardware always lie slightly above the corresponding values from ideal simulation. This means that our parameter estimate performs slightly worse on real hardware than in theory. Of course, in real quantum hardware, many physical effects influence the outcome of QAOA, none of which are considered in the derivation of (11). With this in mind, it is even more encouraging that our parameter estimate deviates from the true optimum by at most \(13\%\).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Instance & Regular & Triangle-free & \(w_{ij}\in\{\,-a,a\,\}\) & \(\frac{\max F(\gamma,\beta)\!-\!F(\bar{\gamma},\bar{\beta})}{\max F(\gamma,\beta)\!-\!\min F(\gamma,\beta)}\) & \(\frac{\max\langle C\rangle(\gamma,\beta)\!-\!\langle C\rangle(\bar{\gamma},\bar{\beta})}{\max\langle C\rangle(\gamma,\beta)\!-\!\min\langle C\rangle(\gamma,\beta)}\) \\ \hline a & ✓ & ✓ & ✓ & 0.0 & 0.03 \\ b & ✗ & ✓ & ✓ & 0.0 & 0.02 \\ c & ✓ & ✗ & ✓ & 0.007 & 0.02 \\ d & ✗ & ✗ & ✓ & 0.005 & 0.09 \\ e & ✓ & ✗ & ✗ & 0.08 & 0.08 \\ f & ✗ & ✗ & ✗ & 0.10 & 0.13 \\
2x3g2r & ✓ & ✓ & ✓ & 0.0 & 0.0 \\
2x3g3rc & ✓ & ✗ & ✓ & 0.06 & 0.06 \\
2x3g4c & ✓ & ✓ & ✗ & 0.0 & 0.0 \\
2x3g4r & ✗ & ✓ & ✓ & 0.0 & 0.01 \\
2x3g5c & ✗ & ✗ & ✗ & 0.0 & 0.04 \\
2x3g5c & ✗ & ✗ & ✓ & 0.0 & 0.05 \\
2x3g6rc & ✗ & ✗ & ✓ & 0.0 & 0.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental results for evaluation of QAOA-parameter-estimate. In column “Instance”, a - f refer to Fig. 2. In columns “Regular”, “Triangle-free” and “\(w_{ij}\in\{\,-a,a\,\}\)” fulfilled assumptions of Cor. 4.2 are marked. The second last column gives the relative deviation of \(F(\bar{\gamma},\bar{\beta})\) from the true optimal value. The values are calculated by (8). The last column gives the deviation of \(\langle C\rangle(\bar{\gamma},\bar{\beta})\) from the true optimum. Here, real quantum hardware was used.
Figure 2: Instances for the parameter-estimate evaluation.
Considering the instances "2x3\(gns\)", resulting from shrinking the \(2\times 3\) grid, observed deviations are typically less than for instances a - f. This further motivates the use of QAOA with parameters (11) when combined with shrinking, as investigated in the next section.
To further illustrate the differences between ideal simulation and real experiment, \(F(\gamma,\beta)\) and \(\langle C\rangle(\gamma,\beta)\) are visualized in Fig. 3. We observe both qualitative and quantitative differences. Maxima and minima in the simulation do not coincide completely with maxima and minima in the experiment. Nevertheless, the data from the experiment are clearly quite similar to the simulation. Furthermore, the absolute values in the experiment are usually smaller than in the simulation. The observed differences between simulation and experiment are due to noise effects in quantum hardware. Analogous figures for other instances appear in Appendix A.1.
To summarize, our results show that the parameter estimate (11) performs well, even on instances that do not satisfy the assumptions under which it is provably optimal. These results encourage us to use QAOA without further parameter optimization in the upcoming section, where we combine shrinking with QAOA to solve MaxCut instances too large to be handled by quantum hardware alone.
### Combining Shrinking with QAOA
In this section, we combine shrinking with QAOA to solve various weighted MaxCut instances. As in the previous section, instance sizes are kept small due to the limitations of quantum hardware. Graph topologies and weight distributions are motivated by spin glass physics, cf. e.g. [27]. We emphasize that all instances can be solved quickly by classical integer programming. Thus, the experiments in this section should be considered as proof-of-principle rather than performance benchmarks.
For each instance, we run the shrinking algorithm from Section 3 with different settings. First, we alter the number of shrinked vertices. In general, the potential, i.e. the best possible cut value, will degrade when shrinking more vertices since there might not exist an optimal solution with the imposed correlations. This is the case if and only if the imposed correlations are not optimal. In this case, even when the shrinked problem is solved to optimality, the recreated solution might not be optimal. However, shrinking more vertices reduces the size of the subproblem, which might lead to better solutions of the subproblem, especially when using near-future quantum hardware.
Second, we employ different procedures for computing correlations. This allows us to investigate the influence of the correlation quality on the overall performance. Two different methods are used:
Figure 3: Visualized results for QAOA parameter-estimate on instance “a” from Fig. 2. The x- and y-axis represent values of the parameters \(\beta\) and \(\gamma\), respectively. The red cross marks the estimate in (11). The color encodes the expectation value (a) or the average (b) of the cut size. In (a), we mimic an ideal quantum device by evaluation of (8). In (b), values are results from the quantum hardware. Here, every pixel represents the average taken over \(1,024\) samples.
1. Correlations inferred from the odd-cycle relaxation, defined in (6).
2. All correlations are zero which results in shrinking random vertex pairs with \(\sigma=1\).
When computing the odd-cycle relaxation on sparse graphs, we model a complete graph and assign zero weights to non-present edges. Thus, all odd-cycle inequalities belong to triangles and are of the form (2a)-(2b). We remark that this relaxation has the same objective value as the sparse odd-cycle relaxation since the polyhedron defined by (1b),(1c) is a projection of the polyhedron (2a)-(2b) along a direction orthogonal to the cost vector in (1a). We work with the dense cycle relaxation for two reasons. First, the dense formulation has variables for all vertex pairs, not only for edges. This allows us to shrink vertices not connected by an edge. Second, this formulation can be implemented straightforwardly. Of course, this implementation is not efficient compared to state-of-the-art separation approaches. However, since we aim at a proof of concept, this approach is suitable for our numerical experiments.
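A deliberately naive sketch of this dense relaxation, using the Gurobi Python interface as in our implementation, is given below; it enumerates all triangle inequalities (2a)-(2b) explicitly and returns the correlations (6). Function and variable names are illustrative.

```python
import gurobipy as gp
from gurobipy import GRB
from itertools import combinations

def dense_cycle_relaxation(n, w):
    """Solve (1a),(1c) with all triangle inequalities (2a)-(2b) on K_n.

    w is a dict {(i, j): weight} with 0 <= i < j < n; missing pairs get
    weight zero.  Returns the correlations b_ij = 1 - 2 x*_ij of Eq. (6).
    """
    pairs = list(combinations(range(n), 2))
    m = gp.Model("dense_cycle_relaxation")
    m.Params.OutputFlag = 0
    x = m.addVars(pairs, lb=0.0, ub=1.0, name="x")
    m.setObjective(gp.quicksum(w.get(p, 0.0) * x[p] for p in pairs), GRB.MAXIMIZE)
    for i, j, k in combinations(range(n), 3):
        e = [x[i, j], x[j, k], x[i, k]]
        m.addConstr(e[0] - e[1] - e[2] <= 0)   # triangle inequalities (2a)
        m.addConstr(e[1] - e[0] - e[2] <= 0)
        m.addConstr(e[2] - e[0] - e[1] <= 0)
        m.addConstr(e[0] + e[1] + e[2] <= 2)   # inequality (2b)
    m.optimize()
    return {p: 1.0 - 2.0 * x[p].X for p in pairs}
```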
Furthermore, we apply different solution methods for the shrinked problem to investigate the influence of the sub-problem solution quality on the overall performance. The sub-problem is solved in four different ways:
1. We solve the shrinked problem to optimality by mixed integer programming. This yields an upper bound on the performance of our algorithm.
2. We solve the shrinked problem by a depth-1-QAOA executed \(10,000\) times on an ideal quantum simulator with parameters predetermined by (11). This produces many solutions to the shrinked problem. Therefore, we average the cut value of the \(10,000\) recreated solutions to the original problem and use this average cut size as the measure of performance. That is, we evaluate (5) with \(N=10,000\) and \(C\) being the recreated cut value.
3. We solve the problem exactly as in 2, but with real quantum hardware instead of an ideal quantum simulator.
4. We solve the shrinked problem randomly by flipping a coin for each vertex, i.e. we assign each vertex to either partition with probability \(1/2\). Here, we also use the average recreated cut value as a metric. When noise in the real quantum machine becomes large (called _decoherence_), the quantum computer effectively performs the coin-flipping-heuristic.
Table 2 summarizes the instance data. As in the previous section, all instances are motivated by spin glass physics, cf. [27]. Having described the experimental setup, we now discuss the results. The smallest instance we evaluated is 2x3b, a \(2\times 3\) grid with weights taken uniformly at random from \(\{-1,+1\}\). Although this instance can be solved by inspection, it still allows us to analyze the performance of our algorithmic framework.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Instance & Graph & \(|V|\) & \(|E|\) & Weights \\ \hline
2x3b & \(2\times 3\) grid & 6 & 7 & \(\pm 1\) \\ k6b & \(K_{6}\) & 6 & 15 & \(\pm 1\) \\ k6n & \(K_{6}\) & 6 & 15 & normal \\
3x3b & \(3\times 3\) grid & 9 & 12 & \(\pm 1\) \\ k10b & \(K_{10}\) & 10 & 45 & \(\pm 1\) \\ k10n & \(K_{10}\) & 10 & 45 & normal \\
4x4b & \(4\times 4\) grid & 16 & 24 & \(\pm 1\) \\ t2g10\_5555 & \(10\times 10\) toroidal grid & 100 & 200 & normal \\ ising2.5-100\_5555 & \(K_{100}\) & 100 & 4950 & normal, decaying \\ \hline \hline \end{tabular}
\end{table}
Table 2: Instance data for shrinking-algorithm evaluation. \(|V|\) is the number of vertices, \(|E|\) is the number of edges. In column “Weights”, \(\pm 1\) abbreviates the uniform distribution on \(\{-1,1\}\), “normal” abbreviates the standard normal distribution and “normal, decaying” stands for a normal distribution decaying exponentially in distance.
Results are visualized in Fig. 4a. Here, we plot the number of shrinked vertices versus the approximation ratio, i.e. the (average) produced cut value divided by the optimum value. Different lines correspond to different settings as discussed above.
First, we analyze the approximation ratio for different correlations when the shrinked problem is solved to optimality, marked as solid lines in Fig. 4a. We note that shrinking with the odd-cycle relaxation (blue) does not lead to sub-optimal solutions. This means that the inferred correlations (6) are indeed optimal. As expected, this is not the case when shrinking randomly (red). Here, shrinking decreases potential, which means that there does not exist an optimal solution with the imposed correlations.
Now, we turn to the approximation ratio when solving the shrinked problem on the ideal quantum simulator, marked by dashed-dotted lines in Fig. 4a. As expected, when shrinking with optimal correlations inferred from the odd-cycle relaxation (blue), the approximation ratio increases monotonically with the number of deleted vertices. A smaller sub-problem can be better approximated by the quantum algorithm. Rather interestingly, an increase in approximation ratio from 1 to 2 deleted vertices can also be observed when shrinking randomly (red). Here, the better approximability of the sub-problem by the quantum algorithm over-compensates for the degradation in potential caused by shrinking sub-optimally.
Presumably, the most interesting case is when the shrinked problem is solved on the real quantum machine, marked by dashed lines in Fig. 4a. The approximation ratio qualitatively follows the ideal simulation for both optimal correlations (blue) and random shrinking (red). However, due to noise effects, the quantum hardware always performs worse than the simulation. Notably, we observe a maximum approximation ratio for 2 deleted vertices. Here, the trade-off between the degradation in potential due to sub-optimal shrinking and the performance gain due to increased approximability is optimal. Finally, we note a significant improvement of QAOA compared to the coin-flipping heuristic (dotted lines) when shrinking one or more vertices. Without shrinking, i.e. zero shrinked vertices, we observe that noise effects take over, and QAOA effectively flips a coin for every vertex.
The second instance, k6b, is the fully connected graph \(K_{6}\) with weights taken uniformly at random from \(\{-1,+1\}\). The results are visualized in Fig. 4b; the conclusions are similar to those for the \(2\times 3\) grid. Most importantly, we observe a clear maximum in the approximation ratio at 2 deleted vertices for random shrinking when running QAOA on the quantum machine (red dashed line). Again, there is an optimal trade-off between the degradation in potential due to sub-optimal shrinking and the approximability gain due to size reduction.
Corresponding figures for instances k6n and 3x3b appear in Appendix A.2. Also there, the key observations are that shrinking with the cycle relaxation preserved optimality when solving the sub-problem to optimality, and that maxima in the approximation ratio for QAOA on real hardware are present. On all instances discussed so far, solving the cycle relaxation took far less than one second, while the quantum runtime was roughly 2 s.
We also considered two larger instances from the literature, available in [35]. The first is "t2g10_5555", a \(10\times 10\) toroidal grid with 100 vertices and normally distributed weights. The second is "ising2.5-100_5555", the fully connected graph \(K_{100}\) on 100 vertices with weights decaying exponentially with distance. Both instances are an order of magnitude larger than what current quantum hardware can handle, yet still easy for classical integer programming. Here, solving the cycle relaxation took less than one minute, while the runtime on the quantum machine was roughly 5 s. Due to space limitations, we do not provide figures for the remaining instances from Table 2. Conclusions are similar to those for the previously discussed instances.
Summarizing, the results indicate that linear programming solutions are indeed strong enough to significantly reduce problem size without degrading solution quality. In our experiments, shrinking with the odd-cycle relaxation always preserved optimality when the shrinked problem was solved to optimality. This is beneficial for quantum computation on noisy hardware of limited size. When shrinking optimally, the performance of QAOA always increased with the number of deleted vertices. Interestingly, when shrinking sub-optimally, we observed an optimal trade-off between potential loss and the increased approximability of the reduced problem. By combining linear programming with QAOA, we were able to solve MaxCut instances from the literature an order of magnitude
Figure 4: Results for the shrinking-algorithm evaluation. Shown is the approximation ratio, defined as the (average) produced cut size divided by the optimal cut size, versus the number of shrinked vertices. In the legend, “Random” stands for random shrinking while “Odd-cycle” stands for correlations given by (6). “IP”, “QAOA simulated”, “QAOA” and “Coin” refer to different subproblem solution methods as discussed in the main body.
larger than the capabilities of current quantum hardware. Although our experiments are only a proof of principle, they encourage the combination of quantum algorithms with classical linear programming once quantum hardware approaches regimes where classical exact algorithms fail due to excessive runtime.
## 6 Conclusion and Outlook
In this work, we proposed a hybrid algorithm for the maximum cut problem, combining classical linear programming and quantum approximate optimization. We used the well-known technique of graph shrinking to reduce the problem size such that it can be handled by quantum hardware of limited size. To this end, we shrink according to an optimum of the cycle relaxation of the cut polytope. Furthermore, we improved the applicability of QAOA for weighted MaxCut, which forms the quantum part of the hybrid algorithm, by deriving optimal parameters for instances on regular and triangle-free graphs with weights following a binary distribution. This result motivates a parameter estimate for arbitrary instances.
Our experiments provide a proof of principle for the applicability of the proposed methods. Although all considered instances can be handled easily by classical computers, the results indicate a possible benefit for integer programming once quantum computing improves. First, the proposed QAOA parameter estimate works well in practice. This improves the applicability of QAOA since it renders classical parameter optimization unnecessary. Second, when combining shrinking with QAOA, we observed that linear programming can shrink the problem size significantly without losing optimality. Furthermore, we observed that shrinking is indeed beneficial for QAOA when executed on current quantum hardware. Of course, a more thorough evaluation on a wider range of instances is needed to investigate the performance in more detail.
A direction of future research is the incorporation of characteristics of quantum algorithms and quantum hardware into the process of shrinking. It is known that QAOA performs worse on certain types of graphs, e.g., bipartite graphs. From a hardware perspective, sparse graphs simplify the implementation of QAOA. In the process of shrinking, one can try to avoid or to produce such specific graph characteristics. Moreover, other techniques for deriving (optimal) correlations exist in the literature. Their performance in our framework needs to be studied further. Another field of ongoing research is the quantum part, which may be replaced by other variants of QAOA or even by different algorithmic paradigms, e.g., quantum annealing.
|
2305.10987 | SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking
Neural Networks | Spiking Neural Networks (SNNs) have attracted recent interest due to their
energy efficiency and biological plausibility. However, the performance of SNNs
still lags behind traditional Artificial Neural Networks (ANNs), as there is no
consensus on the best learning algorithm for SNNs. Best-performing SNNs are
based on ANN to SNN conversion or learning with spike-based backpropagation
through surrogate gradients. The focus of recent research has been on
developing and testing different learning strategies, with hand-tailored
architectures and parameter tuning. Neuroevolution (NE), has proven successful
as a way to automatically design ANNs and tune parameters, but its applications
to SNNs are still at an early stage. DENSER is a NE framework for the automatic
design and parametrization of ANNs, based on the principles of Genetic
Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we
propose SPENSER, a NE framework for SNN generation based on DENSER, for image
classification on the MNIST and Fashion-MNIST datasets. SPENSER generates
competitive performing networks with a test accuracy of 99.42% and 91.65%
respectively. | Henrique Branquinho, Nuno Lourenço, Ernesto Costa | 2023-05-18T14:06:37Z | http://arxiv.org/abs/2305.10987v1 | # SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks
###### Abstract.
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility. However, the performance of SNNs still lags behind traditional Artificial Neural Networks (ANNs), as there is no consensus on the best learning algorithm for SNNs. Best-performing SNNs are based on ANN to SNN conversion or learning with spike-based backpropagation through surrogate gradients. The focus of recent research has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning. Neuroevolution (NE), has proven successful as a way to automatically design ANNs and tune parameters, but its applications to SNNs are still at an early stage. DENSER is a NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we propose SPENSER, a NE framework for SNN generation based on DENSER, for image classification on the MNIST and Fashion-MNIST datasets. SPENSER generates competitive performing networks with a test accuracy of 99.42% and 91.65% respectively.
spiking neural networks, neuroevolution, DENSER, computer vision
To the best of our knowledge, this is the first work focusing on evolving SNNs trained with BPTT for image classification, including not only different architectures but also different neuronal dynamics and optimizers in the search space. The main contribution of this paper is the preliminary validation of neuroevolution through SPENSER in the automatic generation of competitively performing CSNNs. The main focus of the paper is on the performance of the generated networks in terms of accuracy.
The remainder of this paper is structured as follows: Section 2 provides a review of important concepts regarding SNNs; Section 3 covers related work regarding evolutionary approaches for SNNs; Section 4 describes SPENSER; Section 5 describes the experimental setup; Section 6 analyses the experimental results, covering the evolutionary search and the testing performance of the generated models; Section 7 provides some final remarks and suggested guidelines for future research.
## 2. Spiking Neural Networks
Spiking Neural Networks (SNNs) are a class of neural network models built with spiking neurons where information is encoded in the timing and frequency of discrete events called spikes (or action potentials) over time (Srivastava et al., 2017). Spiking neurons can be characterized by a membrane potential \(V(t)\) and activation threshold \(V_{thresh}\). The weighted sum of inputs of the neuron increases the membrane potential over time. When the membrane potential reaches its activation threshold, a spike is generated (fired) and propagated to subsequent connections. In a feed-forward network, inputs are presented to the network in the form of spike trains (timed sequences of spikes) over \(T\) time steps, during which time spikes are accumulated and propagated throughout the network up to the output neurons.
There are a number of spiking neuron models that vary in biological plausibility and computational cost, ranging from the more realistic and computationally expensive Hodgkin-Huxley model (Hodgkin and Huxley, 1983) to simpler and computationally lighter models such as the Izhikevich (Izhikevich, 1983), Integrate-and-Fire (IF) (Ling and Fang, 2017) and Leaky Integrate-and-Fire (LIF) (Ling and Fang, 2017) neurons. We refer to Long and Fang (Long and Fang, 2017) for an in-depth review of existing spiking neuron models and their behaviour.
The LIF neuron is the most commonly used in the literature due to its simplicity and low computational cost. The LIF neuron can be modeled as a simple parallel Resistor-Capacitor (RC) circuit with a "leaky" resistor:
\[C\frac{dV}{dt}=-g_{L}(V(t)-E_{L})+I(t) \tag{1}\]
In Eq. 1, \(C\) is the membrane capacitance, \(g_{L}\) is the "leaky" conductance, \(E_{L}\) is the resting potential and \(I(t)\) is the current source (synaptic input) that charges up the capacitor to increase the membrane potential \(V(t)\). Solving this differential equation with the Euler method (demonstration in (Kirkpatrick, 1983)), we can calculate a neuron's membrane potential at a given time step \(t\) as:
\[V[t]=\beta V[t-1]+WX[t]-Act[t-1]V_{thresh} \tag{2}\]
In Eq. 2, \(\beta\) is the decay rate of the membrane potential, \(X[t]\) is the input vector (corresponding to \(I(t)\)), \(W\) is the vector of input weights, and \(Act[t]\) is the activation function. The activation function can be defined as follows:
\[Act[t]=\left\{\begin{array}{ll}1,&\text{if }V[t]>V_{thresh}\\ 0,&otherwise\end{array}\right\} \tag{3}\]
A LIF neuron's membrane potential naturally decays to its resting state over time if no input is received (\(\beta V[t-1]\)). The potential increases when a spike is received from incoming connections, proportionally to the connection's weight (\(WX[t]\)). When the membrane potential \(V(t)\) surpasses the activation threshold \(V_{thresh}\) a spike is emitted and propagated to outgoing connections and the membrane's potential resets (\(-Act[t-1]V_{thresh}\)). Resetting the membrane's potential can be done either by subtraction, as is done in the presented example, where \(V_{thresh}\) is subtracted at the onset of a spike; or to zero, where the membrane potential is set to \(0\) after a spike. A refractory period is usually taken into account where a neuron's potential remains at rest after spiking in spite of incoming spikes. The decay rate and threshold can be static or trainable.
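The discrete-time update in Eqs. 2 and 3 can be written compactly in code. The following minimal NumPy sketch simulates a single LIF neuron with reset by subtraction; the function name, parameter values and input shapes are illustrative assumptions rather than part of any particular framework.

```python
import numpy as np

def lif_neuron(x, w, beta=0.9, v_thresh=1.0):
    """Simulate one LIF neuron for T time steps (Eqs. 2-3), with reset by subtraction.

    x: binary input spikes of shape (T, N); w: input weights of shape (N,).
    """
    num_steps = x.shape[0]
    v, spikes = 0.0, np.zeros(num_steps)
    for t in range(num_steps):
        reset = spikes[t - 1] * v_thresh if t > 0 else 0.0
        v = beta * v + w @ x[t] - reset      # membrane potential update (Eq. 2)
        spikes[t] = float(v > v_thresh)      # emit a spike when the threshold is crossed (Eq. 3)
    return spikes
```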
Existing frameworks such as _snntorch_(Kirkpatrick, 1983) allow for the development of SNNs by integration of spiking neuron layers in standard ANN architectures such as Convolutional Neural Networks, by simply replacing the activation layer with a spiking neuron layer.
### Information Coding
Spiking systems rely on discrete events to propagate information, so the question arises as to how this information is encoded. We focus on two encoding strategies: rate coding and temporal coding. In **rate coding**, information is encoded in the frequency of firing. This is the case in the communication between photoreceptor cells and the visual cortex, where brighter inputs generate higher-frequency firing rates and darker inputs generate lower-frequency firing rates (Kirkpatrick, 1983). ANNs rely on rate coding of information, as each neuron's output is meant to represent an average firing rate. In **temporal coding**, information is encoded in the precise timing of spikes. A photoreceptor system with temporal coding would encode a bright input as an early spike and a dark input as a late spike. When considering the output of an SNN for a classification task, the predicted class is either the one with the highest firing frequency (rate coding) or the one that fires first (temporal coding).
Temporal coding is advantageous in terms of speed and power consumption, as fewer spikes are needed to convey information, resulting in sparser events, which translate to fewer memory accesses and less computation. On the other hand, rate coding is advantageous in terms of error tolerance, as the timing constraint is relaxed to the overall firing rate, and in promoting learning, as the absence of spikes can lead to the "dead neuron" problem, where no learning takes place because there is no spike in the forward pass. Increased spiking activity prevents the "dead neuron" problem.
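To make the rate-coded input and output concrete, the short sketch below encodes normalized intensities as Bernoulli spike trains and decodes a class prediction from output spike counts; the function names and array shapes are illustrative assumptions, but the same idea underlies the encoding used later in the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensities, num_steps):
    """Encode values in [0, 1] as spike trains: at each step, spike with probability equal to the intensity."""
    p = np.clip(intensities, 0.0, 1.0)
    return (rng.random((num_steps,) + p.shape) < p).astype(np.float32)

def rate_decode(output_spikes):
    """Rate-coded readout: the predicted class is the output neuron with the highest spike count."""
    return int(output_spikes.sum(axis=0).argmax())
```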
### Learning
Learning in SNNs remains one of the biggest challenges in the community due to the non-differentiability of the activation function of spiking neurons (Eq. 3), which does not allow for the direct transposition of the error backpropagation algorithm.
Commonly used learning strategies include unsupervised learning through Spike-Timing-Dependent Plasticity (STDP) (Ling and Fang, 2017), offline conversion from trained ANNs to SNNs (also known as shadow
training) (Krizhevsky et al., 2014; Krizhevsky et al., 2014), and supervised learning through backpropagation either using spike times (Beng et al., 2015) or adaptations of the activation function to a continuous-valued function (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). In this work, we focus on the latter, by training SNNs using backpropagation through time (BPTT) and surrogate gradients.
BPTT is an application of the backpropagation algorithm to the unrolled computational graph over time, usually applied to Recurrent Neural Networks (RNNs) (Krizhevsky et al., 2014). In order to bypass the non-differentiability of the spiking neuron's activation function, one can use surrogate gradients, by approximating the activation function with continuous functions centered at the activation threshold during the backward pass of backpropagation (Srivastava et al., 2014).
In this experimental study, we considered two surrogate gradient functions available in _snntorch_ (Krizhevsky et al., 2014):
* **Fast-Sigmoid** \[Act\approx\frac{V}{1+k|V|}\] (4)
* **Shifted Arc-Tan** \[Act\approx\frac{1}{\pi}\arctan\left(\pi V\frac{\alpha}{2}\right)\] (5)
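To illustrate how such a surrogate is used, the sketch below defines a spiking nonlinearity whose forward pass is the non-differentiable Heaviside step and whose backward pass uses the derivative of the fast-sigmoid approximation in Eq. 4, i.e. \(1/(1+k|V|)^{2}\). This is a hedged, self-contained PyTorch sketch rather than the snntorch implementation; the slope \(k\) and the thresholded membrane input are assumptions.

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, mem_minus_thresh, k=25.0):
        ctx.save_for_backward(mem_minus_thresh)
        ctx.k = k
        return (mem_minus_thresh > 0).float()   # non-differentiable spike (Eq. 3)

    @staticmethod
    def backward(ctx, grad_output):
        (mem_minus_thresh,) = ctx.saved_tensors
        # d/dV [ V / (1 + k|V|) ] = 1 / (1 + k|V|)^2
        surrogate = 1.0 / (1.0 + ctx.k * mem_minus_thresh.abs()) ** 2
        return grad_output * surrogate, None

# usage: spk = FastSigmoidSpike.apply(membrane_potential - threshold)
```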
Regarding the loss function, there are a number of choices available depending on the output encoding of the network (rate vs. temporal), which calculate the loss based either on spikes or on the membrane potential. For this experimental study, we considered rate encoding for inputs and outputs, and as such chose the **Mean Square Error Spike Count Loss** (adapted from (Srivastava et al., 2014)). The spike counts of both correct and incorrect classes are specified as targets as a proportion of the total number of time steps (for example, the correct class should fire 80% of the time and the incorrect classes should only fire 10%). The target firing rates are not required to sum to 100%. After a complete forward pass, the mean square error between the actual (\(\sum_{t=0}^{T}Act[t]\)) and target (\(\hat{Act}\)) spike counts of each of the \(C\) classes is calculated and summed (Eq. 6).
\[\mathcal{L}=\frac{1}{T}\sum_{j=0}^{C-1}(\sum_{t=0}^{T}Act_{j}[t]-\hat{Act}_{j })^{2} \tag{6}\]
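A minimal sketch of Eq. 6 for a batch of rate-coded outputs is given below; the tensor shapes, the target-rate arguments and the function name are assumptions for illustration (in our experiments the targets are 100% firing for the correct class and 0% for the incorrect classes, see Section 5).

```python
import torch

def mse_spike_count_loss(spk_out, targets, correct_rate=1.0, incorrect_rate=0.0):
    """Mean square error between actual and target spike counts per class (Eq. 6).

    spk_out: output spikes of shape (T, batch, C); targets: class indices of shape (batch,).
    """
    num_steps = spk_out.shape[0]
    counts = spk_out.sum(dim=0)                                         # actual spike count per class
    target_counts = torch.full_like(counts, incorrect_rate * num_steps)
    target_counts.scatter_(1, targets.unsqueeze(1), correct_rate * num_steps)
    return ((counts - target_counts) ** 2).sum(dim=1).mean() / num_steps
```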
## 3. Related Work
Recent works blending EC and SNNs are mostly focused on evolving a network's weights, using evolutionary approaches as a learning strategy (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014).
Schuman et al. (Schuman et al., 2015) proposed Evolutionary Optimization for Neuromorphic Systems, aiming to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization. However, they focus on simple machine learning classification tasks and scalability is unclear. Elbrecht and Schuman (Elbrecht and Schuman, 2015) used HyperNeat (Krizhevsky et al., 2014) to evolve SNNs focusing on the same classification tasks. Grammatical Evolution (GE) has also been used previously by Lopez-Vazquez et al. (Lopez-Vazquez et al., 2015) to evolve SNNs for simple classification tasks.
The current state of the art in the automatic design of CSNN architectures is represented by the works of Kim et al. (Kim et al., 2017) and AutoSNN by Na et al. (Na et al., 2019). Both works focus on Neural Architecture Search (NAS), with an evolutionary search component implemented in AutoSNN, and attain state-of-the-art performances on the CIFAR-10, CIFAR-100 (Krizhevsky et al., 2014), and TinyImageNet datasets. However, both works fix the generated networks' hyperparameters, such as the LIF neuron parameters and the learning optimizer. Our work differs from these works by incorporating these properties in the search space.
## 4. Spenser
SPENSER (SPiking Evolutionary Network StructurEd Representation) is a general-purpose evolutionary framework for the automatic design of SNNs, based on DENSER (Beng et al., 2015; Beng et al., 2015), combining the principles of Genetic Algorithms (GA) (Krizhevsky et al., 2014) and Dynamical Structured Grammatical Evolution (DSGE) (Krizhevsky et al., 2014; Krizhevsky et al., 2014). SPENSER works on a two-level basis, separating the GA and the DSGE level, which allows for the modeling of the overall network structure at the GA level while leaving the network layers' specifications to the DSGE level (Figure 1). The use of a grammar is what makes SPENSER a general-purpose framework, as one solely needs to change the grammar to handle different network and layer types, problems, and parameter ranges.
The GA level encodes the macrostructure representing the sequence of evolutionary units that form the network. Each unit corresponds to a nonterminal from the grammar that is later expanded through DSGE. With this representation, we can encode not only the network's layers as evolutionary units but also the optimizer and data augmentation. Furthermore, by assigning each evolutionary unit to a grammar nonterminal, we can encode prior knowledge and bound the overall network architecture.
The DSGE level is responsible for the specification of each layer's type and parameters, working independently from the GA level. DSGE represents an individual's genotype as a set of expansion choices for each expansion rule in the grammar. Starting from a nonterminal unit from the GA level, DSGE follows the expansions set in the individual's genotype until all symbols in the phenotype are terminals. Rules for the layer types and parameters are represented as a Context-Free Grammar (CFG), making it easier to adapt the framework to different types of networks, layers and problem domains.
An example encoding to build CSNNs could be defined by Grammar 1 and the following GA macro structure:
\[[(features,1,10),(classification,1,3),\] \[(output,1,1),(learning,1,1)]\]
The numbers in each macro unit represent the minimum and maximum number of units that can be incorporated into the network. With this example, the \(features\) block encodes layers for feature extraction, and therefore we can generate networks with convolutional and pooling layers, followed by 1 to 3 fully connected layers from the \(classification\) units. The activation layers are restricted to LIF nodes with different surrogate gradient options. The \(learning\) unit represents the optimizer used for learning and its parameters. The \(output\) unit encodes the network's output layer. Numeric parameters are defined by their type, the number of parameters to generate, and the range of possible values.
Regarding variation operators, SPENSER relies on mutations on both levels. At the GA level, individuals can be mutated by adding, replicating, or removing genes i.e. layers. At the DSGE level, mutation changes the layers' parameters by grammatical mutation (replacing grammatical expansions), integer mutation (replacing an integer parameter with a uniformly generated random one), and float mutation (modifying a float parameter through Gaussian perturbation). SPENSER follows a \((1+\lambda)\) evolutionary strategy where the parent individual for the next generation is chosen by highest fitness and mutated to generate the offspring. This evolutionary strategy was chosen due to the computational demands of the network training process, which limits the population size in regard to execution time.
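The selection scheme described above can be summarized by the following generic sketch of a \((1+\lambda)\) evolutionary strategy; the mutate and fitness callables stand in for SPENSER's two-level mutation operators and for one-epoch training followed by evaluation on the Fitness split, and their names are assumptions for illustration.

```python
import copy

def one_plus_lambda(parent, mutate, fitness, num_generations, lam=4):
    """(1 + lambda) ES: the fittest individual survives each generation and is
    mutated to produce the offspring of the next generation (elitist selection)."""
    best, best_fit = parent, fitness(parent)
    for _ in range(num_generations):
        offspring = [mutate(copy.deepcopy(best)) for _ in range(lam)]
        candidates = [(best_fit, best)] + [(fitness(ind), ind) for ind in offspring]
        best_fit, best = max(candidates, key=lambda c: c[0])
    return best, best_fit
```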
## 5. Experimental Setup
For this experimental study, we evolved and tested networks on the MNIST (Krizhevsky et al., 2017) and Fashion-MNIST (Zhu et al., 2017) datasets, available through the Torchvision library of Pytorch. All images were converted to grayscale and their original size was kept (28x28). In order to apply SNNs to these datasets, the images were converted to spike trains using rate coding. The pixel values are normalized between 0 and 1 and each pixel value is used as a probability in a Binomial distribution, which is then sampled from to generate spike trains of length \(T\) time steps. No data augmentation was used. We considered different time steps for each dataset according to their complexity.
Datasets were split into three subsets: EvoTrain, Fitness and Test. The Test split is the one provided by Torchvision. The EvoTrain and Fitness splits are a 70/30 split of the original Train split. Each independent run generates different EvoTrain and Fitness splits. Table 1 summarises the chosen time steps and the number of samples per split for each dataset.
As this is a preliminary study to validate SPENSER, we settled on one-pass training of individuals as a trade-off between speed and accuracy. During the evolutionary search, individuals are trained on the EvoTrain split for 1 epoch and tested against the Fitness split for fitness assignment. After the evolutionary search is complete, the best individual is further trained for 50 epochs on the entire Train set, and tested against the Test set for accuracy assessment.
We used _snntorch_ (Krizhevsky et al., 2017) to assemble, train and evaluate SNNs based on rate coding. Individuals are trained using BPTT and the chosen loss function was the Mean Square Error Spike Count loss described in Section 2.2, with a target spiking proportion of 100% for the correct class and 0% for the incorrect classes. The predicted class for a given instance is calculated based on the highest spike count of the output neurons. Accuracy is used as the fitness metric during the evolutionary search and as the final performance assessment of the best found individuals.
The macro structure of individuals for the GA level was set as:
\[[(features,1,6),(classification,1,4),\] \[(output,1,1),(learning,1,1)]\]
Because we are dealing with an image recognition problem, we defined a grammar that contains primitives allowing for the construction of CSNNs, as shown in Grammar 2. Following is a brief description of the grammar.
\(features\) units can be expanded to either Convolutional + Activation, Convolutional + Pooling + Activation, or Dropout layers. Convolutional layers are defined by the number of filters, filter shape, stride, padding and bias. Pooling layers are defined by the pooling type (max or average) and the kernel size. \(classification\) units can be expanded to either Fully-Connected + Activation or Dropout layers. Fully-Connected layers are defined by the number of units. Dropout layers are defined by the dropout rate. The \(output\) unit is set as a Fully-Connected + Activation where the number of units is fixed to the number of classes. Activation layers are currently limited to LIF neurons. LIF neurons are defined by the decay rate \(\beta\), the activation threshold \(V_{\textit{thresh}}\), and the reset mechanism
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & & \multicolumn{2}{c|}{**Train**} & \\ \cline{3-4} & **Time Steps (T)** & EvoTrain & Fitness & **Test** \\ \hline
**MNIST** & 10 & \multirow{2}{*}{42000} & \multirow{2}{*}{18000} & \multirow{2}{*}{10000} \\
**F-MNIST** & 25 & & & \\ \hline \end{tabular}
\end{table}
Table 1. Time steps and number of samples per split for each dataset (MNIST and Fashion-MNIST).
Figure 1. Individual generation by SPENSER. The first line represents the GA level where the macrostructure of the network is defined (this individual has 2 _features_ units and 2 _classification_ units). The second line represents the specification of a _classification_ unit through DSGE. Each number in the DSGE level represents the index of the chosen expansion rule for the current non-terminal. The last line is the resulting phenotype of the layer in question (Branquinho et al., 2017).
(subtraction or zero). Furthermore, they are also defined by the surrogate gradient function, which in this case can be either the ATan or the Fast-Sigmoid functions described in Section 2.2. The _learning_ unit encodes the optimizer and can be expanded to either Stochastic Gradient Descent, Adam, or RMSProp. We increased the probability of choosing feature extraction layers over dropout for \(features\) units (Grammar 2, line 1).
Regarding SPENSER's main hyper-parameters, we followed the recommendations of (B
different have small variations, showcasing SPENSER's robustness in generating high-performing networks.
We compared the best attained test accuracy with other works that also trained hand-tailored networks through spike-based backpropagation. A comparison of test results is presented in Tab. 4. Albeit not surpassing the state of the art, networks generated by SPENSER are head-to-head with the best-performing networks in the literature.
In order to validate our choice of one epoch training for fitness assessment, we also trained the best networks found in the first generation of each run for another 50 epochs and tested their performance on the Test set. Fig. 5 displays violin plots for the test accuracy of the best individuals from generation 1 and generation 200. It is clear that the networks' performance is dependent on the architecture rather than training epochs and that the networks evolved by SPENSER perform better than random initialization.
We hypothesize that a big limitation in this experimental study was the choice of the loss function's parameters, as it does not follow the literature's recommendations (Kim et al., 2019). By setting the target firing rate of incorrect classes to 0%, we might be suppressing output activity which is important to distinguish between closely distanced inputs. Furthermore, this experimental setup is sluggish, as training with BPTT is slower than in traditional ANNs and highly memory intensive. Kim et al. (Kim et al., 2019) have achieved impressive results without training the generated networks during the search phase, by estimating their future performance based on spike activation patterns across different data samples, and we believe this might be an important improvement to our framework. With faster experiments, we can focus on increasing diversity and coverage of the search space, so that SPENSER can yield better individuals.
## 7. Final Remarks
In this paper we propose SPENSER, a NE framework to automatically design CSNNs. SPENSER is able to generate competitively performing networks for image classification at the level of the state of the art, without human parametrization of the network's architecture and parameters. SPENSER generated networks with competitive results, attaining 99.42% accuracy on the MNIST (Kim et al., 2019) and 91.65% accuracy on the Fashion-MNIST (Kim et al., 2019) datasets. Current limitations relate to execution time, due to the computationally intensive BPTT learning algorithm, and to memory requirements. Furthermore, we believe the configuration of the loss function played a role in suppressing output activity and potentially decreasing accuracy.
### Future Work
In the future, we plan on:
* Experiment with different loss functions / encode the loss function as an evolvable macro parameter;
* Perform a more in-depth study of the preferred choices during evolution and observable patterns in the best-performing individuals. This could be relevant in uncovering novel optimal architectures and parameters;
* Experiment with different learning algorithms.
* Implement skip connections and back connections.
* Apply regularisation methods to prevent vanishing and exploding gradients.
###### Acknowledgements.
This research was supported by the Portuguese Recovery and Resilience Plan (PRR) through project C645008882-00000055, Center for Responsible AI, by the FCT - Foundation for Science and Technology, IP./MCTES through national funds (PIDDAC), within the scope of CISUC R&D Unit - UIDB/00326/2020 or project code UIDP/00326/2020. The first author is partially funded by FCT - Foundation for Science and Technology, Portugal, under the grant 2022.11314.BD.
|
2310.14957 | XTSC-Bench: Quantitative Benchmarking for Explainers on Time Series
Classification | Despite the growing body of work on explainable machine learning in time
series classification (TSC), it remains unclear how to evaluate different
explainability methods. Resorting to qualitative assessment and user studies to
evaluate explainers for TSC is difficult since humans have difficulties
understanding the underlying information contained in time series data.
Therefore, a systematic review and quantitative comparison of explanation
methods to confirm their correctness becomes crucial. While steps to
standardized evaluations were taken for tabular, image, and textual data,
benchmarking explainability methods on time series is challenging due to a)
traditional metrics not being directly applicable, b) implementation and
adaption of traditional metrics for time series in the literature vary, and c)
varying baseline implementations. This paper proposes XTSC-Bench, a
benchmarking tool providing standardized datasets, models, and metrics for
evaluating explanation methods on TSC. We analyze 3 perturbation-, 6 gradient-
and 2 example-based explanation methods to TSC showing that improvements in the
explainers' robustness and reliability are necessary, especially for
multivariate data. | Jacqueline Höllig, Steffen Thoma, Florian Grimm | 2023-10-23T14:00:02Z | http://arxiv.org/abs/2310.14957v1 | # XTSC-Bench: Quantitative Benchmarking for Explainers on Time Series Classification
###### Abstract
Despite the growing body of work on explainable machine learning in time series classification (TSC), it remains unclear how to evaluate different explainability methods. Resorting to qualitative assessment and user studies to evaluate explainers for TSC is difficult since humans have difficulties understanding the underlying information contained in time series data. Therefore, a systematic review and quantitative comparison of explanation methods to confirm their correctness becomes crucial. While steps to standardized evaluations were taken for tabular, image, and textual data, benchmarking explainability methods on time series is challenging due to a) traditional metrics not being directly applicable, b) implementation and adaption of traditional metrics for time series in the literature vary, and c) varying baseline implementations. This paper proposes XTSC-Bench, a benchmarking tool providing standardized datasets, models, and metrics for evaluating explanation methods on TSC. We analyze 3 perturbation-, 6 gradient- and 2 example-based explanation methods to TSC showing that improvements in the explainers' robustness and reliability are necessary, especially for multivariate data.
Explainable AI, Time Series Classification, XAI Metrics
## I Introduction
As the use of machine learning models, especially deep learning, increases in various domains ranging from health care [1] to predictive maintenance [2], the need for reliable model explanations is also growing. An increasing number of methods providing a variety of explanation types (e.g., example-based methods like counterfactuals [3], or feature attribution methods like SHAP [4]) on different data types (e.g., images [5], tabular data [6]) are available. However, measuring the performance of such explanation methods is still a challenge. There are no generally agreed upon-metrics measuring the quality of explanations, and comparisons between different implementations and metrics are difficult (e.g., [7, 8, 9]). The first steps to standardize the metrics notion and implementation have been taken by different frameworks implementing explainability algorithms (e.g., Captum [10], AIX360 [11]) and Quantus [12], a framework dedicated to the evaluation of explanations. The main focus of those frameworks is to provide explainability to image, tabular, and textual classification tasks. Although time series classification (TSC) is a ubiquitous task, it has been neglected. Due to the different structure and properties of time-ordered data, the application of non-time-specific explanation algorithms is not advisable, leading to a new subfield in Explainable Artificial Intelligence (XAI) - Explainable Time Series Classification (XTSC) [13].
While the first step to standardize the explanation benchmarking process for the time series domain has been taken by TSInterpret [14] - a framework implementing explanation methods for time series classification in a unified interface - standardized metrics for evaluating the quality of explanation methods are still missing [15]. Similar to the explanation methods implemented in the different explainability frameworks, transferring metrics to the time series domain is complex. Using metrics from traditional frameworks (e.g., [12]) can lead to erroneous assumptions in the time series domain. This lack of specific and standardized metric and baseline implementations lead to a high variety of proposed metrics, metric implementations, proposed baselines, and baseline implementations.
In this paper, we propose XTSC-Bench, a benchmarking tool implementing a variety of metrics for a standardized and systematic evaluation of explainers for TSC. Its connection to TSInterpret [14] ensures a unified implementation of benchmarking algorithms. We utilize XTSC-Bench to evaluate 3 gradient- and 6 perturbation-based feature importance methods and 2 example-based approaches. Our contribution is twofold:
* A thorough investigation of existing approaches.
* An easy-to-use benchmarking tool compatible with TSInterpret [14].
## II Related Work
Several researchers stress the need for formal evaluation metrics and a more systematic evaluation of explainability methods [29][15]. For image, tabular, and textual data, a standard is slowly emerging [38][39] with easy-to-adopt frameworks and the inclusion of some general metrics into explanation frameworks (e.g., [11, 10]) as well as a framework dedicated to quantization [12]. Nonetheless, due to the relative newness of explainability to Deep Learning for TSC1, standardization for benchmarking explainability algorithms on time series is still missing. Table I shows the evaluation settings of various explanation algorithms for TSC. While the data basis is mostly standardized, i.e., most algorithms use a subset of data included in the UCR [17] or UEA
Archive [34], all of the algorithms included in Table I rely on comparing the newly developed algorithm with a time series unspecific algorithm. However, it has been shown that time-series unspecific explanation algorithms are not able to capture the time component sufficiently as they rely heavily on independent feature assumption and cannot uncouple the feature and time domain [22]. Although most metrics used in these evaluations have the same evaluation target, i.e., faithfulness, robustness, and reliability2, their definitions and implementations differ. Often, metrics are directly transferred from image classification [33]. However, many metrics rely on replacing input parts with uninformative information (e.g., to measure if the explanation method shows the same behavior as the classifier) or on comparisons to segmentation masks (e.g., to measure if the explanation method was able to localize relevant features). While providing uninformative features is trivial for e.g., images by replacing parts of an image with black or white pixels [38], replacing features with standard techniques (class means or zeros) might be relevant information in time series.
Footnote 2: Proximity, sparsity, diversity and plausibility are counterfactual-specific evaluation metrics and therefore not applicable to all explainer types.
## III Problem Definition
We study a supervised TSC problem. Let \(x=[x_{11},...,x_{NT}]\in\mbox{I$\!$R}^{N\times T}\) be a uni- or multivariate time series, where \(T\) is the number of time steps, and \(N\) is the number of features. Let \(x_{i,t}\) be the input feature \(i\) at time \(t\). Similarly, let \(X_{:,t}\in\mbox{I$\!$R}^{N}\) and \(X_{i,:}\in\mbox{I$\!$R}^{T}\) be the feature vector at time \(t\), and the time vector for feature \(i\), respectively. \(Y\) denotes the output, and \(f:x\to Y\) is a classification model returning a probability distribution vector over classes \(Y=[y_{1},...,y_{C}]\), where \(C\) is the total number of classes (i.e., outputs) and \(y_{i}\) the probability of \(x\) belonging to class \(i\). An explanation method \(E_{f}\) finds an explanation \(E_{f}(X)\in\mbox{I$\!$R}^{N\times T}\). In the case of feature attribution methods, the explainer \(E_{f}\) assigns an attribution \(a_{it}\) to explain the importance of a feature \(i\) a time step \(t\), resulting in \(E_{f}(X)=(a_{11},...,a_{NT})\). For example-based methods \(E_{f}\) provides an example with the same prediction or a counterexample resulting in \(E_{f}(X)=(x^{\prime}_{11},...,x^{\prime}_{NT})\).
For an Explainer \(E_{f}\) to provide good explanations, those explanations need to be:
* Reliable: An explanation should be centered around the region of interest, the ground truth \(GT\). \[E_{f}(x)\simeq GT\]
* Faithful: The explanation algorithm \(E_{f}\) should replicate the models \(f\) behavior. \[E_{f}(x)\sim f(x)\]
* Robust: Similar inputs should result in similar explanations. \[E_{f}(x)\approx E_{f}(x+\epsilon)\]
* Of low complexity: Explanations using a smaller number of features are preferred. It is assumed that explanations using a large number of features are difficult for the user to understand [26]; simple proxies for these notions are sketched below. \[\min\textstyle\sum\mathds{1}_{E_{f}(x)>0}\]
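As a rough illustration of how such scores can be computed from an attribution map and a known ground-truth mask, consider the sketch below; these are simplified proxies for the notions above, written for this illustration, and not the exact metric implementations used later.

```python
import numpy as np

def complexity(attributions):
    """Number of attributed feature/time-step pairs (fewer is better)."""
    return int(np.count_nonzero(attributions))

def reliability(attributions, gt_mask):
    """Share of absolute attribution mass that falls on the informative region GT."""
    weights = np.abs(attributions)
    return float(weights[gt_mask.astype(bool)].sum() / (weights.sum() + 1e-12))
```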
Figure 1 visualizes the implications of the requirements above on explanations obtained from a gradient-based explainer (an explanation based on the classifier's gradient estimates) and a perturbation-based explainer (an explanation based on observing the influence of input modifications). The top images show the original time series \(x\) with an explanation \(E(x)\) visualized as a heatmap. The middle images show the perturbed time series \(x+\epsilon\) with the explanation \(E(x+\epsilon)\). The bottom images show the known ground truth \(GT\). In the case of this specific time series, the complexity is high for Figure 1(a), resulting from the many attributions (highlights), whereas for Figure 1(b) the complexity is low. Although Figure 1(b) thus performs better on complexity, its attributions are inconsistent with the ground truth \(GT\). The explanation in Figure 1(b) is more robust than that in Figure 1(a), as the explanations \(E(x)\) and \(E(x+\epsilon)\) are identical. Faithfulness quantifies the consistency between the decision-making process of \(f\) and the explanations \(E\); the consistency of Figure 1(a) is higher than that of Figure 1(b), as Figure 1(a) relies on the gradients of \(f\) while Figure 1(b) fits a surrogate model. Overall, although Figure 1(b) performs better on complexity and robustness than Figure 1(a), due to its limited reliability (i.e., consistency with \(GT\)), Figure 1(a) should be the preferred explainer.
TABLE I: Evaluation settings of explainers for TSC, with columns Explainer, Dataset, Metrics, Target, Baseline, and TS-Baseline. Bold marks the explanation algorithms evaluated in Section V. If no metric source is provided, the paper authors did not specify which notation was used. The * indicates that only a subset of the dataset was used.
## IV XTSC-Bench: A Benchmarking Tool
The goal of XTSC-Bench is to provide a simple framework that allows users to apply and evaluate different state-of-the-art explanation methods in a standardized and replicable way on the notions of complexity, reliability, robustness, and faithfulness. Figure 2 visualizes the architecture. The benchmarking tool is split, according to Section III, into different classes for benchmarking reliability, faithfulness, robustness, and complexity. As some of the notions (e.g., reliability) rely on a fairly accurate definition of an explanation ground truth \(GT\), or on iteratively masking parts of the original input with known uninformative features, we include uni- and multivariate synthetic data and pre-trained models in the benchmarking tool (see Section IV-A). Each class follows the evaluation interface, providing a method _evaluate_ and a method _evaluate_synthetic_. The function _evaluate_ allows the usage of non-synthetic data and models as well as the evaluation of a single explanation on-the-fly. For all metrics we use a wrapper built around Quantus [12] with some time-series-specific tweaks.
### _Synthetic Data and Pretrained Models_
XTSC-Bench provides 60 uni- and 60 multivariate synthetic datasets with 50 time steps generated according to Ismail et al. [22]3. The 'base' dataset is generated based on various time series processes (Gaussian, Autoregressive, Continuous Autoregressive, Gaussian Process, Harmonic, NARMA and Pseudo Periodic). For each 'base' dataset obtained from the time series process, multiple synthetic datasets are obtained by adding various Informative Features ranging from Rare Features (less than 5% of features) and time steps (less than 5% of time steps) mimicking an anomaly detection task to boxes covering over 30% of features and time steps (see Figure 3). A binary label is added for each dataset (time process \(\times\) informative feature) by highlighting informative features with the addition of a constant for positive classes and subtraction for negative classes. For all synthetic uni- and multivariate datasets, we train a 1D-Convolutional Network with ResNet Architecture (CNN) and Long Short Term Memory (LSTM) with a hidden layer of size 10. We train the networks with a cross-entropy loss for 500 epochs with a patience of 20 and Adam with a learning rate of 0.001. The trained networks are also provided in XTSC-Bench.
Footnote 3: Find details on the Data Generation in [22] or in the Appendix B in our GitHub Repository: [https://github.com/JHoelli/XTSC-Bench/blob/main/Appendix.pdf](https://github.com/JHoelli/XTSC-Bench/blob/main/Appendix.pdf).
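As a concrete illustration of this construction, the following is a minimal sketch of such a generator for a univariate case: a Gaussian base process with a single informative box whose values are shifted by a constant according to the binary class label. The box length, shift magnitude, and sampling choices are assumptions of this sketch, not the exact procedure of Ismail et al. [22].

```python
import numpy as np

def make_synthetic_sample(t_steps=50, n_feats=1, box_len=5, shift=1.0, rng=None):
    """Generate one labeled series: Gaussian base process plus a shifted informative box."""
    rng = rng or np.random.default_rng()
    x = rng.normal(0.0, 1.0, size=(n_feats, t_steps))   # 'base' time series process
    label = int(rng.integers(0, 2))                      # binary class label
    start = int(rng.integers(0, t_steps - box_len))      # position of the informative box
    gt = np.zeros((n_feats, t_steps), dtype=bool)        # ground-truth mask GT(x)
    gt[:, start:start + box_len] = True
    x[gt] += shift if label == 1 else -shift             # add/subtract constant per class
    return x, label, gt
```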
### _Robustness_
Robustness measures the stability of an explanation method's output under a slight input perturbation \(\bar{x}=x+\epsilon\), assuming that the model's output stays approximately the same, \(f(x)\approx f(\bar{x})\). Small, meaningless changes around \(x\) should lead to a consistent explanation. XTSC-Bench employs two metrics measuring the robustness of an explanation algorithm \(E\) (a short sketch of both follows the list below):
* Max Sensitivity [38] measures the maximum change in the explanation under a small perturbation of the input \(x\), where \(r\) denotes the radius of the input neighborhood. \[Sens_{max}(E,f,x,r)=\max_{\|\bar{x}-x\|\leq r}||E_{f}(\bar{x})-E_{f}(x)||\] (1)
* Average Sensitivity [38] denotes the average sensitivity over perturbed inputs \(\bar{x}\) in the neighborhood \(\|\bar{x}-x\|\leq r\). \[Sens_{mean}(E,f,x,r)=\frac{1}{|\{\bar{x}\}|}\sum_{\bar{x}}||E_{f}(\bar{x})-E_{f}(x)||\] (2)
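As referenced above, a minimal numpy sketch of both sensitivity measures follows; it approximates the neighborhood by Monte-Carlo sampling of perturbations of radius at most \(r\), which is an assumption of this illustration rather than the exact Quantus implementation.

```python
import numpy as np

def sensitivity(explainer, x, radius=0.05, n_samples=20, rng=None):
    """Return (max, mean) change of the explanation E_f under perturbations of x (Eqs. 1-2)."""
    rng = rng or np.random.default_rng(0)
    base = explainer(x)                                   # E_f(x), shape (N, T)
    diffs = []
    for _ in range(n_samples):
        eps = rng.uniform(-radius, radius, size=x.shape)  # perturbation within the r-ball
        diffs.append(np.linalg.norm(explainer(x + eps) - base))
    return max(diffs), float(np.mean(diffs))
```

In practice, one would also verify that \(f(x)\approx f(\bar{x})\) for each sampled perturbation, as assumed in the definition above.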
### _Faithfulness_
Faithfulness quantifies the consistency between the prediction model \(f\) and the explanation model \(E\). Most faithfulness metrics rely on so-called reference baselines consisting of non-informative features. In the literature, those reference baselines are often training data means or zeros (e.g., [24]). However, for time series data those baselines might contain information (e.g., 0 might be an informative anomaly). Therefore, on the proposed synthetic data the reference baseline \(\tilde{x}\) is sampled from the generation process. XTSC-Bench employs faithfulness correlation [38] to measure the correlation between the sum of attributions \(\sum_{s\in S}E_{f}(x)_{s}\) and the difference in output \(f(x)-f(x_{x_{S}=\tilde{x}_{S}})\) when setting those features to the reference baseline, \(x_{x_{S}=\tilde{x}_{S}}\). Here, \(S\) is a subset of input features, \(\tilde{x}_{S}\) denotes the corresponding subset of the reference baseline \(\tilde{x}\), and \(x_{S}\) the corresponding subset of the original instance \(x\).4
Footnote 4: In case of using our benchmarking tool with non-synthetic data, we provide the possibility to supply a custom baseline. By default, baselining is done uniformly.
\[Faith(f,E,x)=corr\Big(\sum_{s\in S}E_{f}(x)_{s},\;f(x)-f(x_{x_{S}=\tilde{x}_{S}})\Big) \tag{3}\]
Fig. 1: Visualization of metric implications on a sample explanation \(E(x)\).
Fig. 2: Architecture of XTSC-Bench.
Fig. 3: Visualization of Informative Features types. The rectangle indicates the informative features.
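A compact sketch of the faithfulness-correlation measure in Eq. (3), sampling random feature subsets \(S\) and masking them with a caller-supplied baseline \(\tilde{x}\); the subset size, the number of draws, and the assumption that `model` returns a probability vector are illustrative choices.

```python
import numpy as np

def faithfulness_correlation(model, explainer, x, baseline, subset_size=10,
                             n_subsets=50, rng=None):
    """Correlate attribution mass of masked subsets with the resulting output drop (Eq. 3)."""
    rng = rng or np.random.default_rng(0)
    attr = explainer(x).ravel()                    # E_f(x), flattened over (feature, time)
    probs = model(x)
    cls = int(np.argmax(probs))                    # explain the predicted class
    attr_sums, output_drops = [], []
    for _ in range(n_subsets):
        idx = rng.choice(attr.size, size=subset_size, replace=False)   # subset S
        x_masked = x.ravel().copy()
        x_masked[idx] = baseline.ravel()[idx]      # set S to reference baseline values
        attr_sums.append(attr[idx].sum())
        output_drops.append(probs[cls] - model(x_masked.reshape(x.shape))[cls])
    return float(np.corrcoef(attr_sums, output_drops)[0, 1])
```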
### _Complexity_
Complexity [38] measures the number of features used in an explanation via a fractional contribution distribution \(\mathbb{P}_{g}\), where \(\mathbb{P}_{g}(i)\) is the fractional contribution of feature \(x_{i}\) to the total magnitude of the attribution: \(\mathbb{P}_{g}(i)=\frac{|E_{f}(x)_{i}|}{\sum_{j}|E_{f}(x)_{j}|}\), with \(\mathbb{P}_{g}=\{\mathbb{P}_{g}(1),\ldots,\mathbb{P}_{g}(d)\}\). The maximum value of complexity is \(\log(|E_{f}(x)|)\), where \(|.|\) denotes the vector length.
\[cpx(f;E,x)=-\sum_{i=1}^{d}\mathbb{P}_{g}(i)\ln(\mathbb{P}_{g}(i)) \tag{4}\]
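Eq. (4) translates directly into a few lines of numpy; the small constant guarding against zero attributions is an assumption of the sketch.

```python
import numpy as np

def complexity(attributions, eps=1e-12):
    """Entropy of the fractional-contribution distribution of an attribution map (Eq. 4)."""
    mag = np.abs(attributions).ravel()
    p = mag / (mag.sum() + eps)                    # fractional contribution P_g(i)
    return float(-np.sum(p * np.log(p + eps)))     # bounded above by log(d)
```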
### _Reliability_
Explanation methods should distinguish important from unimportant features at each time step and capture changes over time. The "major" parts of an explanation should lie inside the ground truth mask \(GT(x)\). XTSC-Bench includes the ground-truth-based measures relevance rank accuracy and relevance mass accuracy from [40] (a short sketch of both follows the list below).
* Relevance Rank Accuracy [40]: The relevance rank accuracy measures how much of the high-intensity relevance lies within the ground truth. We sort the top \(K\) values of \(E_{f}(x)\) in decreasing order, \(X_{topK}=\{x_{1},...,x_{K}\,|\,E_{f}(x)_{1}>...>E_{f}(x)_{K}\}\). \[RACC=\frac{|X_{topK}\cap GT(x)|}{|GT(x)|}\] (5)
* Relevance Mass Accuracy [40]: The relevance mass accuracy is computed as the ratio of the sum of the explanation values lying within the ground truth mask over the sum of all values. \[MACC=\frac{\sum_{i\in GT(x)}E_{f}(x)_{i}}{\sum_{i}E_{f}(x)_{i}}\] (6)
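As referenced above, both ground-truth-based measures can be sketched directly, assuming a boolean ground-truth mask and taking attribution magnitudes:

```python
import numpy as np

def relevance_rank_accuracy(attributions, gt_mask):
    """Fraction of the top-|GT| attributed positions that fall inside GT (Eq. 5)."""
    a, gt = np.abs(attributions).ravel(), gt_mask.ravel().astype(bool)
    k = int(gt.sum())
    top_k = np.argsort(a)[::-1][:k]                # indices of the K largest attributions
    return float(gt[top_k].sum() / k)

def relevance_mass_accuracy(attributions, gt_mask):
    """Share of total attribution mass lying inside the ground-truth mask (Eq. 6)."""
    a, gt = np.abs(attributions).ravel(), gt_mask.ravel().astype(bool)
    return float(a[gt].sum() / (a.sum() + 1e-12))
```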
## V Empirical Evaluation
This section compares the performance of 6 gradient-based and 3 perturbation-based feature attribution methods and 2 example-based methods across Recurrent Neural Networks and Temporal Convolutional Networks for both the multi- and univariate synthetic time series (Section IV-A). The results are reported on a previously unseen test set. As gradient-based methods, we include Saliency (GRAD) [41], Gradient Shap (GS) [4], and Smooth Gradient (SG) [5], each with and without Temporal Saliency Rescaling (TSR) [22]. As perturbation-based methods, we include Feature Occlusion (FO) [25] with and without Temporal Saliency Rescaling (TSR) [22], and LEFTIST [16], an approach based on LIME adapted to time series. TSEvo [20] and Native Guide (NG) [18] represent the example-based methods. For all methods, we use the implementation in TSInterpret [14]. By employing XTSC-Bench, we evaluate the explainers' capabilities on complexity, reliability, robustness, and faithfulness for all classifiers with an accuracy of over 90%5. Additional information regarding the setting and the results can be found in our GitHub6.
Footnote 6: [https://github.com/HJoelli/XTSC-Bench](https://github.com/HJoelli/XTSC-Bench)
Figure 4 summarizes the explainer-wise results on complexity, reliability, faithfulness, and robustness, averaged over all datasets and classifier models. On complexity (Figure 4a and Figure 4b), gradient- and perturbation-based methods provide less complex explanations than example-based methods. The results obtained by TSR contain slightly fewer attributions than the plain gradient- and perturbation-based methods, indicating that the explanations obtained after Temporal Saliency Rescaling are slightly easier to grasp. Averaging the obtained relevance scores on both the feature and time domain with TSR leads to a complexity decrease by eliminating areas with less relevance (e.g., single and small relevance scores on certain time steps).
The reliability (Figure 4c and Figure 4d) on univariate data is higher than on multivariate data, showing that all explainers are less able to center the explanation around the (here known) ground truth as data complexity increases. The relevance mass being, on average, lower than the relevance rank indicates that while relevant features are found, their contribution to the overall relevance is low. Interestingly, for both dataset types, the plain gradient- and perturbation-based methods (without TSR) perform slightly better on the Relevance Rank. On Relevance Mass, the effect of TSR relative to the plain approaches diverges (e.g., on univariate GRAD, TSR results in an improvement, while on univariate GS, TSR results in a deterioration).
The faithfulness (Figure 4e and Figure 4f) of the explainers to the classification models' behavior is similar for most explainers on the uni- and multivariate data. Least faithful is LEFTIST, as LEFTIST is the only approach relying on a local surrogate model instead of frequent classifier calls or the classifiers' inner workings (i.e., gradients).
The results on robustness (Figure 4g and Figure 4h) indicate that on univariate data, perturbation-based approaches are less sensitive to small changes than example-based approaches. This results from perturbation-based approaches relying only on the perturbation function (which is held fixed) and the classification model's output, while gradient-based approaches rely on the model's inner workings, which may change as the input varies.
Summarizing the results, no clear indication can be given as to which explanation approaches should be preferred. No approach was able to dominate the plain gradient- and perturbation-based methods, which are included as baselines. Both traditional and time-series-specific explainers show potential for improvement in all aspects. With increasing data complexity (univariate vs. multivariate), the metric performances diverge further, indicating a need for less complex, more reliable, and more robust explainers, especially for multivariate time series classification.
## VI Conclusion
In this work, we propose XTSC-Bench, a benchmarking tool for the standardized evaluation of explainers for time series classifiers. XTSC-Bench aims to resolve existing ambiguities and enable better comparability by providing synthetic datasets with informative features (ranging from anomaly-detection-like rare features to moving features), trained models for the synthetic data, and options to evaluate custom data. A first empirical evaluation of the explainers implemented in TSInterpret [14] showed that current time series explainers leave potential for improvement, especially in providing reliable explanations for multivariate TSC.
|
2306.09986 | Experimental storage of photonic polarization entanglement in a
broadband loop-based quantum memory | We describe an experiment in which one member of a polarization-entangled
photon pair is stored in an active "loop and switch" type quantum memory
device, while the other propagates through a passive optical delay line. A
comparison of Bell's inequality tests performed before and after the storage is
used to investigate the ability of the memory to maintain entanglement, and
demonstrate a rudimentary entanglement distribution protocol. The entangled
photons are produced by a conventional Spontaneous Parametric Down Conversion
source with center wavelengths at 780 nm and bandwidths of $\sim$10 THz, while
the memory has an even wider operational bandwidth that is enabled by the
weakly dispersive nature of the Pockels effect used for
polarization-insensitive switching in the loop-based quantum memory platform. | C. J. Evans, C. M. Nunn, S. W. L. Cheng, J. D. Franson, T. B. Pittman | 2023-06-16T17:33:35Z | http://arxiv.org/abs/2306.09986v2 | # Experimental storage of photonic polarization entanglement
###### Abstract
We describe an experiment in which one member of a polarization-entangled photon pair is stored in an active Cyclical Quantum Memory (CQM) device, while the other propagates through a passive optical delay line. A comparison of Bell's inequality tests performed before and after the storage is used to investigate the ability of the CQM to maintain entanglement, and demonstrate a rudimentary entanglement distribution protocol. The entangled photons are produced by a conventional Spontaneous Parametric Down Conversion source with center wavelengths at 780 nm and bandwidths of \(\sim\)10 THz, while the CQM has an even wider operational bandwidth that is enabled by the weakly dispersive nature of the Pockels effect used for active switching in a loop-based quantum memory platform.
A promising approach to long-distance quantum communication involves protocols in which two distant quantum memories become entangled by a central Bell state measurement (BSM) performed on two emitted photons [1; 2]. The reverse process, in which two entangled photons emitted from a central source are stored in two distant quantum memories, enables modified protocols that offer advantages in certain quantum network settings [3]. Recent experimental progress in this direction includes the storage of entangled photons with MHz bandwidths in atomic ensemble quantum memories [4; 5; 6; 7; 8; 9], MHz - GHz bandwidths in solid-state quantum memories [10; 11; 12; 13; 14; 15; 16; 17; 18], and THz bandwidths in a diamond quantum memory [19]. In each active approach, the coupling of a photon from a freely propagating mode to the storage mode (typically a collective matter excitation) is accomplished by an externally applied control field that is managed by the user for storage and release of the photons. In addition to bandwidth, key figures of merit include memory efficiency, storage time, accessibility, and output state fidelity. This leads to various trade-offs that can be optimized by different approaches for different applications, and motivates the need for investigations of entanglement storage in additional quantum memory platforms.
Here we investigate entanglement storage in a broadband "loop and switch"-type quantum memory with high output state fidelity, but relatively low efficiency and discrete time (rather than continuous) accessibility [20]. In this Cyclical Quantum Memory (CQM) platform, the coupling between the input/output mode and the storage mode (here, a free-space optical storage loop) is implemented by a user-applied DC control field via the Pockels effect. Because this Pockels effect is only weakly dispersive, the CQM platform possesses ultra-wide bandwidth that can be matched to that of the entangled photons produced by robust and practical conventional Spontaneous Parametric Down Conversion (SPDC) sources (typically \(\sim\) 10 THz). Indeed, broadband CQM-type platforms with SPDC sources have recently been used to demonstrate enhanced single-photon production [21], measurement-device-independent quantum key distribution [22], switching and storage of Fock states [23; 24], as well as the generation of multiphoton entangled states [25; 26]. Closely related work includes the manipulation of single-photon streams [27; 28; 29] and continuous-variable entanglement [30] in CQM-like systems.
In the present work on entangled photon storage with this platform, the primary experimental challenge is the non-deterministic nature of the SPDC process, which emits the entangled photons at random (and unknown) times. This precludes triggered switching of the photon pairs into the memories, necessitating random attempts at storage and, consequently, low overall data rates in the experiment. These data rates are further reduced by the intrinsic loss in the CQM's (\(\sim\)22% per cycle), which currently hinders long-term storage in our setup. In this proof-of-concept demonstration, we overcome these technical problems to some extent by (1) using pulsed SPDC to restrict the possible emission of a photon pair to well-known time intervals, (2) replacing one of the active CQM's with a much lower loss delay line (fiber spool) serving as a "passive quantum memory" with a fixed storage time, and (3) performing initial alignment and calibration of the system at higher data rates by triggering the active CQM upon detection of the passively stored photon.
Figure 1: Conceptual overview of two types of measurements using the broadband pulsed-SPDC and CQM platform: (a) Initial high data rate alignment and calibration measurements in which the CQM is triggered by the detection of photon 1, and (b) lower data rate entanglement storage measurements in which the CQM is periodically triggered. The red circles denote photons, while the dashed purple lines represent entanglement. \(D_{i}\) and \(\theta_{i}\) (\(i\)=1,2) are detectors and polarizers used for various Bell-test measurements.
Figure 1 provides a conceptual overview of these 3 ideas, as well as a summary of the relevant timing parameters and detection system used for Bell-test measurements to verify the stored entanglement. The SPDC source is pumped by a 100 MHz pulse-train, with pair production rates on the order of 10 kHz (i.e. an average of 1 pair every 100 \(\mu\)s), while the CQM has a round-trip cycle time of \(\Delta\tau\approx 27\) ns and the experiments involve active storage for up to \(n=20\) cycles (\(\sim\)0.5 \(\mu\)s). Two single photon detectors (\(D_{1}\) and \(D_{2}\)) preceded by polarizers are used to test Bell's inequalities in the system. For the initial testing step shown in Figure 1(a), the detection of photon 1 is used to trigger the CQM for storage of photon 2, which is then actively released after \(n\) cycles. This preliminary test does not represent the storage of entanglement, but allows us to calibrate the setup at higher data rates. Note in Figure 1(a) that photon 2 is delayed by \(\Delta T_{2}\sim 320\) ns to compensate for the latency in the detection and CQM switching process [20; 32].
Next, in Figure 1(b), the CQM is periodically triggered by a signal derived from the pump pulse train, which enables the full demonstration of entanglement storage for those cases in which a photon pair is randomly produced at the correct time. In our experiment, we balance the trade-off between a desire for long storage times (i.e. large \(n\)) and high data rates (i.e. higher frequency triggering) to demonstrate entanglement storage for up to \(n=6\) cycles (\(\sim\)162 ns). Note in Figure 1(b) that the "passive quantum memory" for photon 1 is fixed at a comparable storage time of \(\Delta T_{1}\sim 165\) ns. Note also that difficulties associated with failed storage attempts and loss in the CQM are largely overcome by the post-selective nature of the Bell-tests used to study the ability of the system to store entanglement; only attempts in which both \(D_{1}\) and \(D_{2}\) register a photon are recorded [33].
Figure 2 shows a schematic of the complete experimental setup. For convenience, the figure highlights five different shaded regions corresponding to the key aspects of the experiment. The SPDC source consists of a 0.7 mm thick BBO crystal pumped by a 100 MHz pulse train at 390 nm derived from the frequency-doubled output of a mode-locked fiber laser (Menlo Systems C-Fiber 780; pulse widths \(\sim\) 100 fs), and produces photon pairs with central wavelengths of 780 nm. Interference filters with a bandwidth of 25 nm are used to define the photon bandwidths (\(\sim\)10 THz). We use Type-I non-collinear SPDC and the Shih-Alley (SA) technique at a 50/50 beamsplitter [31] to post-select entangled states of the form \(|\psi^{-}\rangle=1/\sqrt{2}(|H_{1}V_{2}\rangle-|V_{1}H_{2}\rangle)\), where \(H\) and \(V\) denote horizontally and vertically polarized photons and the subscripts correspond to output channels 1 and 2. Photon 1 is detected in the direct output of the SA beam-splitter, while photon 2 is sent to the CQM which, in our laboratory, is located on a second optical table roughly 6 m from the source.
Figure 2: Schematic of the experimental apparatus. Translatable delay wedge prisms (DW) are used to adjust the timing of the down-converted photons, and a 50:50 fiber coupler with fiber polarization controllers (FPC) is used to realize entangled states of the form \(|\psi^{-}\rangle~{}=~{}1/\sqrt{2}(|H_{1}V_{2}\rangle-e^{i\phi}|V_{1}H_{2}\rangle)\) using the Shih-Alley technique [31]. Phase shifters \(\phi_{aux}\) and \(\phi_{1}\) are used to compensate for the combination of \(\phi\) and any net birefringent phase shifts in fiber spools \(\Delta T_{1}\), \(\Delta T_{2}\), and the CQM itself for the “before storage” and “after storage” Bell inequality tests. (SHG: second harmonic generation, FC: fiber coupling lenses, IF: 25 nm bandwidth interference filters).
Complete details of the operational technique of the CQM are provided in reference [20]. To summarize, it consists of a high-speed Pockels cell (PC) placed in a Sagnac-like interferometer formed by a polarizing beamsplitter (PBS), two broadband mirrors, and a lengthy "out and back" delay arm. Incident photons are delocalized into two counter propagating \(H\) and \(V\) polarization components that are repeatedly "flipped" (i.e. \(H\leftrightarrow V\)) each time they pass through the PC in the "on" state. This leads to a self-cancellation effect for phase shift errors due to birefringence in the CQM for photons stored for an even number \(n\) of cycles. In addition, bit flip errors (imperfect polarization rotations) are ejected from the CQM geometry at incorrect times, contributing only to overall loss.
We use a Lithium Tantalate (LTA) multi-crystal based PC operated in a transverse configuration (ConOptics model 360-80, with 25D driver), with a half-wave voltage of only 140 V at 780 nm, and rise-time and fall times (i.e. switching times) of \(\sim\)15 ns, which are safely shorter than the 27 ns cycle time of the CQM. The user actively stores the incident photon, and releases it after a chosen value of \(n\) cycles, by simply switching the PC between its "on" and "off" states at the appropriate times.
As shown in Figure 2, the PC driver is activated by a short-pulse signal generator that is triggered by either (1) the detection of a photon in \(D_{1}\) for the initial tests of Figure 1(a), or (2) a periodic signal derived by frequency division of a 100 MHz synchronization signal from the mode-locked laser for the main entanglement storage experiments of Figure 1(b). In addition, the delays \(\Delta T_{1}\) and \(\Delta T_{2}\) required for these two types of experiments are formed by fiber spools that can be inserted and removed as needed.
A key 50/50 beamsplitter is inserted in the CQM input channel to reflect the CQM output to the second Bell test detector, \(D_{2}\). While this reduces the CQM overall efficiency to a maximum value of 25% in this proof-of-concept experiment, it also provides a valuable auxiliary detection channel that can be used in-situ for comparative Bell tests: Bell tests "before storage" use correlations between detectors \(D_{1}\) and \(D_{aux}\), while Bell tests "after storage" are performed with \(D_{1}\) and \(D_{2}\). We note that this efficiency limitation can be overcome by replacing the 50:50 beamsplitter with a high-quality optical circulator [25].
Additional tests of the PC with various lasers confirmed that with the half-wave voltage set for perfect 90\({}^{o}\) polarization rotation ("flipping") of 780 nm light, a wavelength range greater than 25 nm (near 780 nm) would be "flipped" with greater than 95% fidelity due to the weakly dispersive birefringence of the Pockels effect in our system. Combined with very broadband CQM mirror reflectivities (\(R>\) 98 % for \(H\) and \(V\) components over 750 nm - 1100 nm), this enables the CQM to serve as a high-speed broadband optical quantum memory device.
Figure 3 shows a summary of calibration data using the arrangement of Figure 1(a). While this arrangement does not demonstrate the storage of entanglement, we perform various measurements using the same polarizer settings needed to characterize the expected performance of the system for subsequent Bell tests. Figures 3(a) and 3(b) show plots of the coincidence counting rates between \(D_{1}\) and \(D_{2}\) as a function of \(\theta_{1}\) after heralded storage of photon 2 for \(n=4\) and \(n=6\) cycles, respectively, for the cases of \(\theta_{2}\) fixed at 0\({}^{o}\) (blue data) and 45\({}^{o}\) (red data). The sinusoidal fits to the data are then used to extract the visibility (a measure of output state polarization fidelity) and the maximum counting rates (a measure of loss during storage). Analogous data sets (not shown) were taken for storage up to \(n=20\) cycles, as well as for the "before storage" case using coincidence counts between detectors \(D_{1}\) and \(D_{aux}\). Figure 3(c) summarizes this data and provides two main results: an exponential fit to the maximum count rate (blue data) shows a CQM loss of roughly 22% per cycle, while the "after storage" visibility (purple data) shows essentially no degradation with increasing storage time.
Figure 3: Summary of experimental results from the high data rate test and alignment measurements; (a) heralded output state polarization measurements for storage of \(n=4\) cycles. Blue data corresponds to \(\theta_{2}\) fixed at 0\({}^{o}\), while red data corresponds to \(\theta_{2}\)\(=\) 45\({}^{o}\); (b) analogous results for \(n=6\) cycles; (c) summary of extracted maximum coincidence rates (blue data) and visibilities in the \(-\)45\({}^{o}\)/45\({}^{o}\) basis from “before storage” runs (red data) and “after storage” runs (purple data) for various storage times up to \(n=\) 20 cycles (540 ns).
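For orientation, a fixed per-cycle transmission of \(1-0.22=0.78\) implies that the relative count rate after \(n\) cycles scales as \(0.78^{n}\); the short illustrative calculation below (not the actual fit used for Figure 3(c)) tabulates this scaling.

```python
# Illustrative scaling only: expected relative count rate for a 22% per-cycle loss.
transmission_per_cycle = 1.0 - 0.22
for n in (2, 4, 6, 20):
    print(f"n = {n:2d} cycles: relative rate ~ {transmission_per_cycle ** n:.3f}")
# Two extra cycles (n = 4 -> n = 6) multiply the rate by 0.78**2 ~ 0.61,
# i.e. the further reduction of roughly 39% quoted later in the text.
```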
These preliminary results in Figure 3 provide an expectation of being able to violate Bell's inequality after storing entanglement using the arrangement of Figure 1(b) for increasingly long storage times, until the overall loss in the CQM drives the signal-to-noise ratio in the system down to an unmanageable level. This leads to the main results of the paper illustrated in Figure 4, which show examples of this entanglement storage for the case of \(n=4\) and 6 cycles.
Figures 4(a) and 4(b) correspond to the "before storage" and "after storage" coincidence count data for the case of \(n=4\) cycles. The experimental data shows the expected \(\sin^{2}(\theta_{1}-\theta_{2})\) signature of the \(|\psi^{-}\rangle\) Bell state, with"before storage" measured visibilities of \((95~{}\pm~{}3)\%\) in the \(H/V\) basis and \((92~{}\pm~{}1)\%\) in the \(-45^{o}/45^{o}\) basis, and corresponding "after storage" visibilities of \((97~{}\pm~{}1)\%\) and \((91~{}\pm~{}4)\%\). As is well known, combined visibilities greater than 71% in these experimental situations are sufficient for a violation of the CHSH form of Bell's inequality subject to certain reasonable assumptions [33], and here correspond to Bell parameter values of \(S~{}=~{}2.64~{}\pm~{}0.04\) before storage, and \(S~{}=~{}2.66~{}\pm~{}0.06\) after storage [34]. In this proof-of-concept experiment, these \(S~{}>~{}2\) parameter values provide a demonstration of the ability to store and maintain polarization entanglement in the CQM platform.
Figures 4(c) and 4(d) show analogous results for the case of \(n=6\) cycles. Here the measured visibilities correspond to Bell parameter values of \(S=2.69\pm 0.02\) before storage and \(S=2.52\pm 0.11\) after storage, once again demonstrating successful entanglement storage. We suspect the slightly lower \(S\) value and increased uncertainty in the "after storage" case was primarily due to small misalignments that occurred in the CQM between data runs, as well as the reduction in overall data rates which necessitated longer collection times and thus increased experimental instabilities during the \(n=6\) cycle run. As a technical point of interest in the experiment, the longer storage time of the \(n=6\) vs. \(n=4\) run required a larger division of the 100 MHz periodic CQM triggering signal to prevent accidental "on/off" state overlap in the PC during storage. The less frequent attempts at storage corresponded to a data rate reduction of 25% between the \(n=6\) vs. \(n=4\) runs, while the additional 2 cycles of loss provided a further reduction of 39%. These two types of data rate reduction prevented realistic attempts at entanglement storage for, say, 20 cycles in our current proof-of-concept type setup.
In summary, we have demonstrated the storage of entangled photons with \(\sim\)10 THz bandwidths from a conventional SPDC source using one active broadband CQM device, and a second "passive quantum memory" formed by a simple delay line, in analogy with earlier entanglement storage demonstrations using other quantum memory platforms [13, 14, 15, 10, 5, 19, 5]. The experimental results represent a demonstration of a rudimentary entanglement distribution protocol, in which one member of an entangled pair is delivered to a location \(A\) at a fixed time, while the other is delivered to a second distant location \(B\) at an arbitrarily chosen time.
The broadband nature of the CQM platform helps overcome the notorious "bandwidth matching" problem associated with using traditional broadband SPDC entangled photon sources and narrowband atomic quantum memories [5]. This becomes particularly advantageous when scaling up to multi-memory applications, where spectral filtering of the SPDC photons to match narrower memory bandwidths can lead to prohibitively low data rates in practical settings. However, it is important to note that the lack of an intrinsic optical nonlinearity in the CQM platform represents a drawback for multi-node quantum repeater type applications in which a single node acts as both a memory and a quantum processor [35]. For these more challenging protocols, supplementing the CQM with additional probabilistic techniques from the linear optics quantum computing paradigm would be required [36].
The primary limitation in this proof-of-concept experiment was the use of randomly produced entangled photon pairs, which essentially necessitated random attempts at storage and thus low overall data rates. For future applications, these difficulties can be completely overcome by the use of heralded entangled pairs that can be produced by combining several random SPDC sources [37, 38, 39, 40]. We note that for more demanding applications, these same heralded SPDC sources can, in principle, be converted to "on-demand" entanglement sources by using two CQM's and some of the techniques demonstrated in the present work [38]. Consequently, near-term implementations of various multi-photon quantum networking and entanglement distribution protocols could benefit from the use of robust broadband SPDC sources and ultra-broadband CQM's, and the proof-of-concept experimental results presented here represent a tangible step in that direction.
Figure 4: Summary of experimental results demonstrating entanglement storage: (a) and (b) show “before storage” and “after storage” polarization correlations for the case of \(n=4\) cycles, with blue (red) data corresponding to \(\theta_{2}\) fixed at \(0^{\circ}\) (\(45^{\circ}\)). The sinusoidal data in (b) is shifted by \(90^{\circ}\) due to the intrinsic bit flipping in the CQM. (c) and (d) show analogous results for the case of \(n=6\) cycles. The high visibilities of the fits to the data violate the CHSH form of Bell’s inequality [34] and demonstrate the ability to store entanglement in the SPDC and CQM platform.
**Acknowledgements:** This work was supported by the National Science Foundation under Grant No. 2013464.
|
2305.01652 | Humans as Light Bulbs: 3D Human Reconstruction from Thermal Reflection | The relatively hot temperature of the human body causes people to turn into
long-wave infrared light sources. Since this emitted light has a larger
wavelength than visible light, many surfaces in typical scenes act as infrared
mirrors with strong specular reflections. We exploit the thermal reflections of
a person onto objects in order to locate their position and reconstruct their
pose, even if they are not visible to a normal camera. We propose an
analysis-by-synthesis framework that jointly models the objects, people, and
their thermal reflections, which allows us to combine generative models with
differentiable rendering of reflections. Quantitative and qualitative
experiments show our approach works in highly challenging cases, such as with
curved mirrors or when the person is completely unseen by a normal camera. | Ruoshi Liu, Carl Vondrick | 2023-05-02T17:59:55Z | http://arxiv.org/abs/2305.01652v1 | # Humans as Light Bulbs: 3D Human Reconstruction from Thermal Reflection
###### Abstract
The relatively hot temperature of the human body causes people to turn into long-wave infrared light sources. Since this emitted light has a larger wavelength than visible light, many surfaces in typical scenes act as infrared mirrors with strong specular reflections. We exploit the thermal reflections of a person onto objects in order to locate their position and reconstruct their pose, even if they are not visible to a normal camera. We propose an analysis-by-synthesis framework that jointly models the objects, people, and their thermal reflections, which combines generative models with differentiable rendering of reflections. Quantitative and qualitative experiments show our approach works in highly challenging cases, such as with curved mirrors or when the person is completely unseen by a normal camera.
## 1 Introduction
One of the major goals of the computer vision community is to locate people and reconstruct their poses in everyday environments. What makes thermal cameras particularly interesting for this task is the fact that humans are often the hottest objects in indoor environments, thus becoming infrared light sources. Humans have a relatively stable body temperature of 37 degrees Celcius, which according to the Stefan-Boltzmann law, turns people into a light source with constant brightness under long-wave infrared (LWIR). This makes LWIR images a robust source of signals of human activities under many different light and camera conditions.
Since infrared light on the LWIR spectrum has a wavelength that is much longer than visible light (8\(\mu\)m-14\(\mu\)m vs. \(0.38\mu\)m-\(0.7\mu\)m), the objects in typical scenes look qualitatively very different from human vision. Many surfaces of
objects in our daily life - such as a ceramic bowl, a stainless steel fridge, or a polished wooden table top - have stronger specular reflections than in the visible light spectrum [7, 58]. Figure 1 shows the reflection of a person on the surface of salad bowls, which is barely visible to the naked eye, if at all, but clearly salient in the LWIR spectrum.
In cluttered environments, a visible light camera may not always be able to capture the person, such as due to a limited field of view or occlusions. In such scenes, the ideal scene for locating and reconstructing a person would be an environment full of mirrors. This is what the world looks like under the LWIR spectrum. Infrared mirrors are abundant in the thermal modality, and reflections reveal significant non-line-of-sight information about the surrounding world.
In this paper, we introduce a method that uses the image of a thermal reflection in order to reconstruct the position and pose of a person in a scene. We develop an analysis-by-synthesis framework to model objects, people, and their thermal reflections in order to reconstruct people and objects. Our approach combines generative models with differentiable rendering to infer the possible 3D scenes that are compatible with the observations. Given a thermal image, our approach optimizes for the latent variables of generative models such that light emitting from the person will reflect off the object and arrive at the thermal camera plane.
Our approach works in highly challenging cases where the object acts as a curved mirror. Even when a person is completely unseen by a normal visible light camera, our approach is able to localize and reconstruct their 3D pose from just their thermal reflection. Traditionally, the increased specularity of surfaces has posed a challenge to thermography, making it extremely difficult to measure the surface temperature of a thermally specular surface; this has motivated a line of active research aiming to remove specular reflections for more accurate surface temperature measurement [4, 5, 80, 40]. We instead exploit these "difficulties" of LWIR to tackle the problem of 3D human reconstruction from a single-view thermal reflection image.
The primary contribution of the paper is a method to use the thermal reflection of the human body on everyday objects to infer their location in a scene and its 3D structure. The rest of the paper will analyze this approach in detail. Section 2 provides a brief overview of related work for 3D reconstruction and differentiable rendering. Section 3 formulates an integrated generative model of humans and objects in a scene, then discusses how to perform differentiable rendering of reflection, which we are able to invert to reconstruct the 3D scene. Section 4 analyzes the capabilities of this approach in the real world. We believe thermal cameras are powerful tools to study human activities in daily environments, extending computer vision systems' ability to function more robustly even under extreme light conditions.
## 2 Related Work
**Differentiable Rendering.** Differentiable rendering is the process of rendering 2D images from 3D scenes in a differentiable manner. The gradient of the image space w.r.t. the scene parameters can be calculated and used to perform optimization. Recent advances in implicit 3D representations, especially Neural Radiance Fields (NeRF) [2, 3, 54, 60, 69], have produced impressive results on rendering photo-realistic images for view-synthesis problems.
Another line of work focuses on differentiable rasterization [32, 42, 46, 63, 73, 47]. These works aim to replace the traditional rasterization process in computer graphics based on 2D projections of primitives such as polygons with z-buffering, with a differentiable rasterization process.
While differentiable, these methods are limited by the intrinsic difficulty of modeling single or multiple bounces of light in a scene, which can be modeled with physics-based differentiable ray tracing [25, 30, 41, 57, 73, 81]. In our problem, because humans are light sources and we need to perform differentiable rendering of one-bounce reflection, we extended Soft Rasterizer [46].
**Single-View 3D Reconstruction.** From a practical point of view, obtaining 3D ground truth for supervision is often difficult and expensive [28]. In terms of the quantity of data available, the unlabeled 3D data is not comparable to the 2D data on the internet. This spurs a long-standing interest from the general computer vision community to pursue 3D reconstruction with as little information as a single-view [19, 31, 45, 46, 75, 77].
In addition to general 3D object reconstruction, another line of research focuses on the 3D reconstruction of the human body from single-view images and videos [35, 36, 44, 53, 61, 64]. Representatively, SMPL-X [61] is an expressive whole-body model with details around hands and faces, represented as a triangle mesh with 10,475 vertices. In the same paper, SMPLify-X was proposed to estimate an SMPL-X model from just a single RGB image. This is done by first detecting human keypoints in the image with an off-the-shelf keypoint detector [6, 10, 14, 17, 10, 70, 14]. Then the parameters of an SMPL-X model are optimized to fit the keypoints, which serve as the observation of the human in the 2D image.
**3D Generative Model.** Our system utilizes generative models for both objects and humans. For 3D objects, generative models are usually trained with synthetic datasets composed of CAD models [11]. Different generative architectures, including VAEs [9, 20, 20, 76], GANs [62, 62, 74, 63], normalizing flows [33, 34], and diffusion models [48], have been proposed to generate objects as meshes, point clouds, or voxels. More recently, implicit 3D representations, or coordinate-based models, have become a popular choice of modality for generative tasks [16, 59, 51, 23, 27, 56].
For humans, [61, 71] proposed generative models for 3D humans, represented as SMPL-X models. In [61], a VAE is trained to generate human poses from 4 datasets including Human3.6M, LSP, CMU Panoptic, and PosePriors [1, 22, 55, 78]. The VAE samples a latent vector from a high-dimensional Gaussian distribution and generates a human pose vector. This pose vector is applied with a sparse linear regressor and a linear blend skinning function to generate a triangle mesh in a fully differentiable manner.
**Thermal Computer Vision.** Previous work has applied computer vision to thermal images for various problems [13, 15, 18, 24, 37, 65, 72]. ContactDB [8] used thermal imaging to obtain accurate human grasps of everyday objects for robotics applications. [49] studied the problem of thermal non-line-of-sight imaging. In comparison, this work focuses on the 3D reconstruction of people from their thermal reflections in non-planar objects. Other work pursues 3D reconstruction of objects from thermal images [12, 50, 66, 67]. To our knowledge, we are the first to perform 3D reconstruction of humans from their thermal reflection.
## 3 Methods
Our system takes an RGBD image and a thermal image of everyday objects with thermally reflective surfaces and performs a 2-stage optimization to estimate the 3D objects and a human who is out of sight of both cameras. In the first stage, a 6 DoF pose, a scale, and a neural signed distance function [59] are jointly estimated for each object present in the scene. In the second stage, the location, orientation, and pose of the human are jointly estimated to reconstruct the observed thermal reflection.
Section 3.1 formulates the problem we aim to solve. Section 3.2 gives an overview of the approach. Section 3.3 describes the generative models we used in our approach in detail. Section 3.4 lays out a differentiable rendering algorithm of human thermal reflection. Section 3.5 formulates the optimization process and the objective functions.
### Problem Formulation
We decompose a scene into 3 components: a human body, objects with specular surfaces in the LWIR spectrum, and environmental heat sources. We first obtain a segmentation mask for each object in the scene from the RGBD image. To obtain the thermal reflection image, we perform ray tracing starting from the camera sensor to the light source - the human body - under Helmholtz reciprocity. Assuming a pinhole camera model, let \(\mathbf{n}\) be the surface normal of the object at point \(\mathbf{p}\), \(\mathbf{r}\) be the vector from the camera sensor to \(\mathbf{p}\), and \(\mathbf{r^{\prime}}\) the reflected ray vector. We model the intensity of each pixel \(I_{\mathbf{x}}\) in the thermal camera as a binary value:
\[I_{\mathbf{x}}=\begin{cases}1,&\mathbf{r^{\prime}}\text{ intersects with }\mathcal{T}_{\phi,T}(M_{h})\\ 0,&\text{otherwise}\end{cases} \tag{1}\]
where \(M_{h}\) represents the human shape in the form of a triangle mesh, and \(\mathcal{T}_{\phi,T}\) represents an SE(3) transformation matrix parameterized by rotation, translation, and scale. With background subtraction, the noise coming from environmental heat sources can be mitigated.
As described in figure 1, the calibrated thermal camera and RGBD camera with known intrinsic matrix and unknown extrinsic matrix capture an RGB image, a depth map, and a thermal image. Given these images as our observation containing N objects, we solve for the following 7 variables via optimization: locations \(\{\mathbf{T}_{obj}\}_{i=0}^{N}\), rotations \(\{\phi_{obj}\}_{i=0}^{N}\), scales \(\{s_{obj}\}_{i=0}^{N}\), and the shape \(\{M_{obj}\}_{i=0}^{N}\) of the objects, location \(\mathbf{T}_{h}\), rotation \(\phi_{h}\), and the shape \(M_{h}\) of the human, all in camera's perspective.
Figure 2: High-level overview of our analysis-by-synthesis framework. We sample random initializations from the latent space of pretrained generative models of humans and objects in 3D. Through a differentiable rendering process, we synthesize a reflection image of a human body on object surfaces. This synthesized reflection is compared with the observed reflection with an \(L_{1}\) loss. Gradients are backpropagated through differentiable rendering and generative models to the latent variables.
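For concreteness, these variables can be collected in a small container such as the sketch below; the tensor shapes, latent dimensions, and the axis-angle rotation parameterization are assumptions of this illustration.

```python
import torch

class SceneParams:
    """Latent variables optimized in the two-stage procedure (shapes are illustrative)."""

    def __init__(self, n_objects, obj_latent_dim=256, human_latent_dim=32):
        req = dict(requires_grad=True)
        # Per-object location, rotation (axis-angle), scale, and DeepSDF shape code.
        self.T_obj = torch.zeros(n_objects, 3, **req)
        self.phi_obj = torch.zeros(n_objects, 3, **req)
        self.s_obj = torch.ones(n_objects, 1, **req)
        self.z_obj = torch.zeros(n_objects, obj_latent_dim, **req)
        # Human location, rotation, and SMPL-X pose latent.
        self.T_h = torch.zeros(3, **req)
        self.phi_h = torch.zeros(3, **req)
        self.z_h = torch.zeros(human_latent_dim, **req)
```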
### Overview of Approach
The optimization problem we are solving is severely under-constrained, so we choose to leverage the priors provided by pretrained generative models. As described in figure 2, we first randomly sample the aforementioned 7 variables as initial input to the generative models to generate a 3D human and objects in the scene. Then we perform a differentiable rendering of human thermal reflection. For every ray from the camera sensor that intersects with an object, we can analytically calculate the reflected ray vector, given that the surface normals of the objects are defined by the output of the object generative model. With these reflected ray vectors, we can render a binary reflection image based on whether the reflected ray vectors intersect with humans, whose exact 3D shape and location are defined by the output of the human generative model. The optimization objective is to maximize the similarity between the rendered reflection image and the observed image captured by the thermal camera.
In order for such a pipeline to be differentiable, we need both the generative models of humans and objects, as well as the rendering algorithm, to be differentiable. In the following sections, we will describe how we achieve this.
### Generative Models
**Object: DeepSDF.** We decided to use DeepSDF [59] as our generative model for objects. An SDF, or signed distance function, maps a point in space to its orthogonal distance to the closest surface. In essence, DeepSDF is an SDF parameterized by a neural network \(G_{obj}\) whose input is a 3D coordinate \(\mathbf{p}\) and whose output is a signed distance \(s\). Following [59], we condition the DeepSDF model on a latent vector \(\mathbf{z}_{obj}\) from a probabilistic latent space to make it a generative model:
\[G_{obj}(\mathbf{p},\mathbf{z}_{obj})=s:\mathbf{p}\in\mathbb{R}^{3},s\in \mathbb{R} \tag{2}\]
**Human: SMPL-X.** We adopted SMPL-X [61] as our generative model of 3D humans. Broadly, SMPL-X is composed of 2 components. The first is a variational autoencoder (VAE) that projects a latent vector \(\mathbf{z}_{h}\), sampled from a probabilistic latent space with a Gaussian prior, to the human pose space, in the form of rotations of human body joints. The generated human body pose is then passed through a differentiable sparse linear regressor to generate vertices and triangle meshes representing the surface skin of a human body. Because both the VAE and the linear vertex regressor are differentiable, the location of each vertex is differentiable w.r.t. the latent vector \(\mathbf{z}_{h}\).
### Differentiable Rendering of Reflection
The information we have from the thermal image of objects is a reflected human silhouette. Soft rasterizer (SoftRas) [46] is a method of choice to perform differentiable rendering from 2D silhouette images. However, SoftRas is a differentiable rasterization algorithm, which does not directly apply to reflection, especially when the reflective surface is a curved surface defined by a DeepSDF. To overcome this limitation, we extended SoftRas to ray tracing under non-planar reflection off the zero-isosurface of a DeepSDF. This process is visualized in figure 3.
**DeepSDF Depth Estimation.** The complex geometry of an everyday object prevents us from projecting all triangles to the 2D image plane as in [46]. Thus, we need to march rays \(\{\mathbf{r}_{i}\}\) from camera sensor \(\mathbf{c}\), through the reflection point on the surface \(\{\mathbf{p}_{i}\}\) with a surface normal \(\{\mathbf{n}_{i}\}\), to the reflected rays \(\{\mathbf{r}^{\prime}_{i}\}\). To obtain the intersection point with the surface \(\{\mathbf{p}_{i}\}\) given an SDF representation of an object, we need a differentiable method to extract the zero-isosurface and calculate the depth of the surface along the incoming ray \(\mathbf{r}_{i}\). Previously, [79] proposed to perform surface projection by first grid-searching for a point close to the zero-isosurface, then projecting along gradient direction \(\frac{\partial G}{\partial\mathbf{p}}\) with the predicted distance. However, because the gradient direction is not in the same direction as the incoming ray, performing such an operation could yield a point far from the intersection point between the incoming ray and zero-isosurface, especially when the attack angle is small. To mitigate this error, we perform finite steps of sphere tracing along the ray to estimate the intersection point \(\{\mathbf{p}_{i}\}\) as shown in figure 3.
Figure 3: Differentiable Rendering of Reflection. Ray direction shown reverses the physical propagation direction of light by Helmholtz reciprocity.
Figure 4: 3D Object Reconstruction from RGBD
**DeepSDF Surface Normal.** With the estimated intersection point \(\{\mathbf{p}_{i}\}\) between \(\{\mathbf{r}_{i}\}\) and the surface of the object, we calculate the surface normal of the object at \(\{\mathbf{p}_{i}\}\):
\[\mathbf{n}_{i}=\frac{\partial G_{obj}(\mathbf{p}_{i},\mathbf{z}_{obj})}{ \partial\mathbf{p}_{i}} \tag{3}\]
We can then calculate the reflected ray vector as:
\[\mathbf{r}^{\prime}_{i}=\mathbf{r}_{i}-2\left(\mathbf{r}_{i}\cdot\frac{\mathbf{n}_{i}}{\|\mathbf{n}_{i}\|_{2}}\right)\frac{\mathbf{n}_{i}}{\|\mathbf{n}_{i}\|_{2}} \tag{4}\]
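A minimal PyTorch-style sketch of this step, assuming `sdf(p, z)` is a differentiable signed-distance network returning one value per point: sphere-trace each camera ray toward the zero-isosurface, take the SDF gradient as the surface normal (Eq. 3), and reflect the ray about it (Eq. 4). The step count and the detachment of the traced point are simplifying assumptions of this illustration.

```python
import torch

def trace_and_reflect(sdf, z, origin, dirs, n_steps=32):
    """Sphere-trace rays to the SDF zero-isosurface, then reflect them about the normal."""
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)          # unit ray directions (B, 3)
    t = torch.zeros(dirs.shape[0], 1)
    for _ in range(n_steps):                                # finite sphere-tracing steps
        p = origin + t * dirs
        t = t + sdf(p, z)                                   # advance by predicted distance
    p = (origin + t * dirs).detach().requires_grad_(True)
    n = torch.autograd.grad(sdf(p, z).sum(), p, create_graph=True)[0]   # Eq. (3)
    n = n / n.norm(dim=-1, keepdim=True)
    reflected = dirs - 2.0 * (dirs * n).sum(-1, keepdim=True) * n       # Eq. (4)
    return p, n, reflected
```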
**3D Ray-Triangle Distance.** We can then calculate the pairwise distance matrix, denoted as \(\mathcal{D}\), between each reflected ray \(\mathbf{r}^{\prime}_{i}\) and each triangle \(t_{j}\in\{\mathbf{M}_{h}\}\), where \(\mathbf{M}_{h}\) represents the human body mesh. Each element of the distance matrix, \(d_{i,j}\in\mathcal{D}\), can be expressed as a differentiable function of the vertices of \(t_{j}\) and the reflected ray vector \(\mathbf{r}^{\prime}_{i}\). We can also obtain a ray-triangle intersection matrix \(\Lambda\) with the same dimensions as the distance matrix. Since the value of the ray-triangle intersection is binary, this calculation is not required to be differentiable.
**Differentiable Ray Occupancy.** Following Soft-Ras [46], we define the influence of each triangle \(t_{j}\) on each ray \(r^{\prime}_{i}\) where the influence is expressed as a function of distance \(d_{i,j}\):
\[d^{\prime}_{i,j}=\text{sigmoid}\left(\lambda_{i,j}\frac{d^{2}_{i,j}}{\sigma} \right),\ \ \lambda_{i,j}\in\Lambda,\ \ d_{i,j}\in\mathcal{D} \tag{5}\]
where \(\lambda_{i,j}=1\) if reflected ray \(\mathbf{r}^{\prime}_{i}\) intersects with triangle \(\mathbf{t}_{j}\), and \(-1\) otherwise. \(d_{i,j}\) denotes the distance between ray \(\mathbf{r}^{\prime}_{i}\) and triangle \(\mathbf{t}_{j}\), and \(\sigma\) is a hyperparameter that controls the "softness" of the influence. We then aggregate the influence of each triangle on a reflected ray \(\mathbf{r}_{i}^{\prime}\) to obtain the estimated binary occupancy of the ray by the human body mesh \(\mathbf{M}_{h}\):
\[\hat{I}_{i}=\mathcal{A}\big(\{d^{\prime}_{i,j}\}_{j}\big)=1-\Pi_{j}(1-d^{\prime}_{i,j}) \tag{6}\]
The estimated binary occupancy of ray \(\hat{I}_{i}\) is a value between 0 and 1 and is compared with the ground truth binary thermal image defined in Eq. 1.
Figure 5: 3D Human Reconstruction (visualized from another camera view). From an RGBD image, we recover the 3D location \(\mathbf{T}_{obj}\), pose \(\phi_{obj}\), and shape \(\mathbf{z}_{obj}\) of each object. A marching cube visualization of the reconstructed 3D objects is shown in pink. With reconstructed objects, we recover 3D location \(\mathbf{T}_{h}\), pose \(\phi_{h}\), and shape \(\mathbf{z}_{h}\) of the human from a denoised thermal input showing reflections of the human on object surfaces, which we visualize in blue. We also include the original scene and our reconstruction from a calibrated third-camera view for comparison. This image is **not seen** by our system during reconstruction. The black mesh where the objects are located is the depth point cloud captured by the RGBD camera.
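The aggregation in Eqs. (5)-(6) can be sketched compactly, given a precomputed ray-triangle distance matrix and a boolean intersection indicator (both assumed here):

```python
import torch

def soft_ray_occupancy(dist, hits, sigma=1e-4):
    """Soft occupancy per reflected ray from per-triangle influences (Eqs. 5-6).

    dist: (R, T) ray-to-triangle distances; hits: (R, T) boolean intersection flags.
    """
    sign = hits.float() * 2.0 - 1.0                     # lambda_{ij} in Eq. (5): +1 or -1
    influence = torch.sigmoid(sign * dist.pow(2) / sigma)
    return 1.0 - torch.prod(1.0 - influence, dim=1)     # Eq. (6), one value in [0, 1] per ray
```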
### Optimization for Inference
**3D Object Reconstruction.** We first estimate the 6 DoF pose, scale, and shape of the objects present in the scene following a similar method as in [29]. We optimize the locations \(\{\mathbf{T}_{obj}\}_{i=0}^{N}\), rotations \(\{\phi_{obj}\}_{i=0}^{N}\), scale \(\{\mathbf{s}_{obj}\}_{i=0}^{N}\), and the shape of the objects \(\{\mathbf{z}_{obj}\}_{i=0}^{N}\), where \(\{\mathbf{z}_{obj}\}_{i=0}^{N}\) are latent variables sampled from the probabilistic latent space of DeepSDF \(\mathbf{G}_{obj}\) s.t. \(\mathbf{M}_{obj}=G_{obj}(\mathbf{z}_{obj})\). For each object, we minimize the objective:
\[\mathcal{L}_{obj}=\mathcal{L}_{depth}+\mathcal{L}_{mask}+\mathcal{L}_{prior} \tag{7}\]
where \(\mathcal{L}_{depth}\) is the \(L_{1}\) loss between the estimated depth map and the measured depth map, \(\mathcal{L}_{mask}\) denotes a pixel-wise \(L_{2}\) loss between the estimated segmentation mask and the observed segmentation mask obtained from RGB observation, and \(\mathcal{L}_{prior}\) is a shape prior regularization term.
**3D Human Reconstruction** Given the estimated translations, rotations, scales, and shape latent vectors from 3D object reconstruction, we optimize translation \(\mathbf{T}_{h}\), rotation \(\phi_{h}\), and shape \(\mathbf{z}_{h}\) of the human where \(\mathbf{z}_{h}\) is the latent vector sampled from the pose VAE in SMPL-X s.t. \(\mathbf{M}_{h}=G_{h}(\mathbf{z}_{h})\). Upon obtaining the estimated reflection image \(\hat{\mathbf{I}}\) and observed thermal silhouette image \(\mathbf{I}\), we minimize the objective:
\[\mathcal{L}_{human}=\mathcal{L}_{silhouette}+\mathcal{L}_{prior} \tag{8}\]
where
\[\mathcal{L}_{silhouette}=1-\frac{\|\hat{\mathbf{I}}\otimes\mathbf{I}\|_{1}}{ \|\hat{\mathbf{I}}\oplus\mathbf{I}-\hat{\mathbf{I}}\otimes\mathbf{I}\|_{1}} \tag{9}\]
and \(\mathcal{L}_{prior}\) is an \(L_{2}\) regularization term on the human latent vector \(\mathbf{z}_{h}\).
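The silhouette term in Eq. (9) is one minus a soft intersection-over-union between the rendered occupancy and the observed binary reflection mask; a minimal sketch follows, where the prior weight is an assumption of the illustration.

```python
import torch

def silhouette_loss(pred, target, eps=1e-8):
    """1 - soft IoU between rendered occupancy (values in [0,1]) and the observed mask (Eq. 9)."""
    inter = (pred * target).sum()
    union = (pred + target - pred * target).sum()
    return 1.0 - inter / (union + eps)

def human_objective(pred, target, z_h, w_prior=1e-3):
    """L_human = L_silhouette + L_prior (Eq. 8), with an L2 prior on the human latent z_h."""
    return silhouette_loss(pred, target) + w_prior * z_h.pow(2).sum()
```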
## 4 Experiments
The goal of our experiments is to validate our hypothesis that LWIR thermal reflection on everyday objects provides sufficient information to perform accurate 3D human reconstruction in the real world. In section 4.1, we first demonstrate the accurate 3D reconstruction of objects from a single RGBD image, which serves as a foundation for 3D human reconstruction from reflection. We showcase our results on 3D human reconstruction with different poses and object types with everyday objects (section 4.2) and cars (section 4.3). Lastly, in section 4.4 we perform quantitative and qualitative ablation studies to evaluate the effectiveness of our technical approach.
### 3D Object Reconstruction
Real-world depth sensors are subject to often significant measurement errors and are sensitive to lighting conditions (assuming an active stereo sensor). The surface depth estimated is often noisy, non-smooth, and full of "holes", as shown in figure 4. Performing differentiable rendering of reflection using the direct output of the depth sensor will necessarily introduce an excessive amount of noise, given the reflected ray direction is calculated from the surface normal. Therefore, we opted to perform 3D object reconstruction from RGBD input first, then use the reconstructed surfaces for differentiable ray tracing.
Figure 6: Real-world 3D human reconstruction from thermal reflections of cars. A diverse set of human poses can be reconstructed by using the surfaces of different types of cars as infrared mirrors. RGB input (1st row) and thermal input (2nd row) captured by a depth camera and a thermal camera are used as input to our method. Our reconstruction (3rd row) is compared with the original scene (4th row), both rendered/captured from another camera viewpoint.
In figure 4, we visualize the reconstructed objects from the RGBD input. Because our 3D representation of objects is an implicit function - DeepSDF, we perform marching cubes to extract the zero-isosurface of each object generated from the latent vector \(\mathbf{z}_{obj}\). We then apply the SE(3) transformation matrix calculated from the estimated location \(T_{obj}\) and pose \(\phi_{obj}\).
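A rough illustration of this extraction step is sketched below: marching cubes is run on an SDF volume and a similarity transform is applied to the resulting mesh. Here `decode_sdf` is only a placeholder for sampling the DeepSDF decoder on a grid, and the scale, rotation, and translation values are illustrative.

```python
import numpy as np
from skimage import measure
from scipy.spatial.transform import Rotation

def decode_sdf(resolution: int = 64) -> np.ndarray:
    """Placeholder for sampling the DeepSDF decoder G_obj(z_obj) on a grid.
    The analytic SDF of a sphere is used purely for demonstration."""
    lin = np.linspace(-1.0, 1.0, resolution)
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
    return np.sqrt(xs**2 + ys**2 + zs**2) - 0.5

sdf_volume = decode_sdf()
# Extract the zero-isosurface of the signed distance field.
verts, faces, normals, _ = measure.marching_cubes(sdf_volume, level=0.0)

# Apply the estimated scale s_obj, rotation phi_obj, and translation T_obj
# (illustrative values; in practice these come from the optimization).
s_obj = 0.12
R_obj = Rotation.from_euler("z", 30, degrees=True).as_matrix()
T_obj = np.array([0.4, 0.0, 0.8])
verts_world = (s_obj * verts) @ R_obj.T + T_obj
print(verts_world.shape, faces.shape)
```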
As shown in figure 4, the location, pose, and shape of 3D objects can be faithfully reconstructed. Most importantly, we are able to obtain a high-fidelity, smooth, and accurate object surface without an explicit regularization on surface smoothness, which sets the foundation for the differentiable rendering of reflection. The successful reconstruction even when the depth input is noisy can be largely attributed to the object priors provided by searching in the latent space of a pretrained generative model. Generative priors, such as the fact that a bowl is usually symmetric and that the outer surface of a mug is often smooth, are enforced during the optimization process.
### 3D Human Reconstruction
Given the reconstructed objects represented as individual DeepSDF models and their locations, we perform joint optimization of human location \(\mathbf{T}_{h}\), orientation \(\phi_{h}\), and shape \(\mathbf{z}_{h}\). The input to the differentiable rendering algorithm is a single binary thermal reflection image, representing a mask of human silhouette on each reflective object, as shown in figure 5. The binary mask of reflection is obtained from the thermal camera pointing towards the reflective objects, with simple denoising and thresholding. In addition to RGBD and thermal cameras, we put a third calibrated camera in the scene to capture the scene from another angle for evaluation and visualization. Note that any images from this camera are not used as input to our system.
We render the reconstruction from the third camera's perspective for comparison with the original scene at the exact time input data was captured. As shown in figure 5, the output of our method very accurately reconstructs the original scene. Note that the subject in the original scene is wearing normal clothing and the data is collected in a normal office environment without special lab environmental control. Besides, the objects used to reflect human thermal radiation are everyday objects with a variety of textures and materials that we purchased from supermarkets. This indicates the robustness of our system and its practical applicability to various settings.
### Cars as Infrared Mirrors
Non-line-of-sight information of human activity plays a crucial role in the safe deployment of autonomous driving systems. Therefore, we showcase an experiment where we use cars as infrared mirrors to reconstruct the 3D location, orientation, and shape of a pedestrian that is not in the line of sight of a camera system. In figure 6, we show the results in a similar fashion to figure 5. 3D reconstruction from thermal imaging could allow new opportunities for autonomous vehicles to sense and safely avoid occluded pedestrians.
### Ablation Studies
To solve this extremely under-constrained and challenging problem, we made several design decisions that proved crucial to the quality of reconstruction. To evaluate the effectiveness of our technical approach, we perform ablation studies and compare our reconstruction with a baseline. We include both quantitative evaluations and qualitative visualizations. Here we describe some representative design decisions in detail.
**Edge Sampling.** As pointed out by [41], edge sampling plays an important role in differentiable ray tracing. This is even more significant for human reflection silhouettes. In addition, unless a person is standing right in front of the reflector, the reflection silhouette usually occupies a small
Figure 7: Visualization of reconstruction obtained from ablated variations of our full model. While all variations can still find the 3D location of humans relatively accurately, the fine-grained details of human poses are significantly improved in our full model.
region of the thermal image. We therefore perform edge detection on the reflection image to extract the edges of the human silhouette and sample rays with a probability distribution concentrated in the vicinity of these edges, increasing the concentration as training progresses as a form of curriculum training.
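A minimal sketch of such edge-concentrated sampling is shown below; it uses Canny edge detection and a blurred edge map as a sampling distribution. The sharpening schedule and all constants are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import cv2

def sample_edge_pixels(reflection_mask: np.ndarray, n_rays: int,
                       sharpness: float = 1.0) -> np.ndarray:
    """Sample pixel coordinates concentrated near silhouette edges.
    `sharpness` can be increased during training to tighten the distribution."""
    edges = cv2.Canny(reflection_mask, 50, 150).astype(np.float32)
    # Blur the edge map so pixels near (not only on) edges receive probability mass.
    vicinity = cv2.GaussianBlur(edges, (21, 21), sigmaX=5.0)
    prob = vicinity ** sharpness + 1e-6          # raising to a power sharpens the distribution
    prob = prob.ravel() / prob.sum()
    idx = np.random.choice(prob.size, size=n_rays, p=prob)
    return np.stack(np.unravel_index(idx, reflection_mask.shape), axis=1)

# Example: a synthetic 128x128 binary silhouette
mask = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(mask, (64, 64), 30, 255, -1)
pixels = sample_edge_pixels(mask, n_rays=256, sharpness=2.0)
print(pixels.shape)  # (256, 2) row/col coordinates of the sampled rays
```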
**Sphere Tracing.** As described in Section 3.4, direct surface projection from the vicinity of an SDF will yield a point far from the real intersection between the incoming ray and the zero-isosurface of the SDF. Therefore, we perform 3 steps of sphere tracing to estimate the intersection point on the object.
**Surface Smoothing.** From experiments, we discovered that even after we perform sphere tracing, the reflection surface normals are still noisy, causing the differentiable rendering algorithm to produce a noisy reflection. This effectively injects noise into the gradients, making the optimization more challenging. We discovered that this is caused by the reconstructed DeepSDF having a locally non-smooth zero iso-surface. In figure 8, we visualize the surface normals calculated from a small region of zero-isosurface which shows the non-smoothness. To mitigate this error, we perform surface smoothing during differentiable rendering by sampling 8 neighboring rays surrounding the main ray and averaging all estimated surface normals for reflection calculation.
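The snippet below sketches these two steps on a toy SDF: a few sphere-tracing iterations to land on the zero-isosurface, followed by averaging finite-difference normals over a small bundle of neighboring rays. The sphere SDF, step counts, and offsets are illustrative assumptions rather than the actual DeepSDF pipeline.

```python
import numpy as np

def sdf(p: np.ndarray) -> np.ndarray:
    """Toy signed distance field of a unit sphere (stand-in for DeepSDF)."""
    return np.linalg.norm(p, axis=-1) - 1.0

def sphere_trace(origin, direction, n_steps=3):
    """March a single ray along its direction by the SDF value at each step."""
    p = origin.copy()
    for _ in range(n_steps):
        p = p + sdf(p) * direction
    return p

def finite_difference_normal(p, eps=1e-4):
    offsets = eps * np.eye(3)
    grad = np.stack([(sdf(p + o) - sdf(p - o)) / (2 * eps) for o in offsets], axis=-1)
    return grad / np.linalg.norm(grad, axis=-1, keepdims=True)

def smoothed_normal(p, radius=5e-3, n_neighbors=8):
    """Average normals of neighboring surface points to suppress local noise."""
    angles = np.linspace(0.0, 2 * np.pi, n_neighbors, endpoint=False)
    offsets = radius * np.stack([np.cos(angles), np.sin(angles), np.zeros(n_neighbors)], axis=1)
    normals = finite_difference_normal(p[None, :] + offsets)
    normals = np.vstack([normals, finite_difference_normal(p[None, :])])
    mean = normals.mean(axis=0)
    return mean / np.linalg.norm(mean)

origin = np.array([0.0, 0.0, 3.0])
direction = np.array([0.0, 0.0, -1.0])
hit = sphere_trace(origin, direction)
print(hit, smoothed_normal(hit))
```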
**Evaluation.** We evaluate our reconstruction as well as the three aforementioned ablated methods by comparing the 2D keypoints and 3D skeleton estimated from synchronized images captured by a calibrated third camera. We used [17] for 2D keypoint detection and [39] for 3D skeleton estimation. As a random baseline, we also compared the reconstruction to 200 randomly sampled 2D human keypoints and 3D skeletons from the HumanEva dataset [68].
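Table 1 reports the average normalized Euclidean distance between reconstructed and ground-truth keypoints. One plausible reading of that metric is sketched below, normalizing each per-joint error by the spatial extent of the ground-truth skeleton; the exact normalization used in the paper is not specified, so this is an assumption.

```python
import numpy as np

def avg_normalized_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average Euclidean joint error, normalized by the extent of the
    ground-truth keypoints (an assumed normalization, for illustration)."""
    scale = np.linalg.norm(gt.max(axis=0) - gt.min(axis=0))  # bounding-box diagonal
    per_joint = np.linalg.norm(pred - gt, axis=1)
    return float(per_joint.mean() / scale)

# Example with 17 2D keypoints
rng = np.random.default_rng(0)
gt = rng.random((17, 2)) * 100
pred = gt + rng.normal(scale=3.0, size=gt.shape)
print(avg_normalized_distance(pred, gt))
```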
Both the quantitative experiments and qualitative visualizations have shown the effectiveness of our technical approach as well as the design decisions. Particularly, we believe our findings regarding differentiable rendering of reflections on implicit surfaces will provide insights to other computer vision researchers working with reflections.
## 5 Conclusion
This paper shows that the 3D position and pose of a human can be reconstructed from a single thermal image of everyday objects reflecting human thermal radiation. We approach this problem by combining the priors learned by pre-trained 3D generative models and differentiable rendering of reflections. By formulating the problem as an optimization problem, we perform analysis by synthesis to explain the observations. We believe thermal cameras are powerful tools to study human activities in daily environments, and integrating them with modern computer vision models will enable many downstream applications in robotics, graphics, and 3D perception.
**Acknowledgements:** This research is based on work partially supported by the Toyota Research Institute, the NSF NRI Award #1925157, and the NSF CAREER Award #2046910. We acknowledge Shree Nayar, Shuran Song, Runlin Xu, James Tompkin, Mark Sheinin, Mia Chiquier, Jeremy Klotz for helpful feedback, and Su Li, Dylan Chen, Sophia Su for helping with data collection.
| Evaluation Method | Object Type | w/o Edge Sampling | w/o Sphere Tracing | w/o Surface Smoothing | Full Model | Random |
| --- | --- | --- | --- | --- | --- | --- |
| 2D Keypoints [17] | Bowl | 0.231 | 0.224 | 0.145 | **0.116** | 0.346 |
| 2D Keypoints [17] | Mug | 0.101 | 0.209 | 0.109 | **0.094** | 0.371 |
| 3D Skeleton [39] | Bowl | 0.309 | 0.272 | 0.212 | **0.152** | 0.322 |
| 3D Skeleton [39] | Mug | 0.223 | 0.215 | 0.202 | **0.126** | 0.317 |
Table 1: Quantitative evaluation of our reconstructed 3D human. We used two evaluation methods by comparing the extracted 2D keypoints and 3D skeleton with a calibrated 3rd camera view. Object type indicates the type of objects serving as reflectors. Columns 3-5 are three variations of our full model with some parts ablated. Random shows the corresponding metric if a random sample were to be drawn from the HumanEva [68] dataset, which includes diverse poses in daily human activities. Numbers show the average normalized Euclidean distance between reconstruction and ground truth.
Figure 8: Visualization of DeepSDF Surface Normals within a 1.6 cm\(\times\)1.6 cm area. From the visualization, we can clearly see an improvement in surface smoothness at a small scale, which is beneficial to the differentiable rendering process. The X-Y plane (horizontal) indicates the location on a surface with a step size of 0.2mm. Given the unit surface normal vector at a point on the grid \((x,y)\), we compute its dot product with the unit surface normal vector at \((0,0)\), and plot this value on the \(Z\)-axis. This shows the curvature of the surface as well as its level of smoothness. |
2310.05682 | Analysis of Rainfall Variability and Water Extent of Selected Hydropower
Reservoir Using Google Earth Engine (GEE): A Case Study from Two Tropical
Countries, Sri Lanka and Vietnam | This study presents a comprehensive remote sensing analysis of rainfall
patterns and selected hydropower reservoir water extent in two tropical monsoon
countries, Vietnam and Sri Lanka. The aim is to understand the relationship
between remotely sensed rainfall data and the dynamic changes (monthly) in
reservoir water extent. The analysis utilizes high-resolution optical imagery
and Sentinel-1 Synthetic Aperture Radar (SAR) data to observe and monitor water
bodies during different weather conditions, especially during the monsoon
season. The average annual rainfall for both countries is determined, and
spatiotemporal variations in monthly average rainfall are examined at regional
and reservoir basin levels using the Climate Hazards Group InfraRed
Precipitation with Station (CHIRPS) dataset from 1981 to 2022. Water extents
are derived for selected reservoirs using Sentinel-1 SAR Ground Range Detected
(GRD) images in Vietnam and Sri Lanka from 2017 to 2022. The images are
pre-processed and corrected using terrain correction and refined Lee filter. An
automated thresholding algorithm, OTSU, distinguishes water and land, taking
advantage of both VV and VH polarization data. The connected pixel count
threshold is applied to enhance result accuracy. The results indicate a clear
relationship between rainfall patterns and reservoir water extent, with
increased precipitation during the monsoon season leading to higher water
extents in the later months. This study contributes to understanding how
rainfall variability impacts reservoir water resources in tropical monsoon
regions. The preliminary findings can inform water resource management
strategies and support these countries' decision-making processes related to
hydropower generation, flood management, and irrigation. | Punsisi Rajakaruna, Surajit Ghosh, Bunyod Holmatov | 2023-10-09T12:51:46Z | http://arxiv.org/abs/2310.05682v2 | **Analysis of Rainfall Variability and Water Extent of Selected Hydropower Reservoir Using Google Earth Engine (GEE): A Case Study from Two Tropical Countries, Sri Lanka and Vietnam**
## Abstract
This study presents a comprehensive remote sensing analysis of rainfall patterns and selected hydropower reservoir water extent in two tropical monsoon countries, Vietnam and Sri Lanka. The aim is to understand the relationship between remotely sensed rainfall data and the dynamic changes (monthly) in reservoir water extent. The analysis utilizes high-resolution optical imagery and Sentinel-1 Synthetic Aperture Radar (SAR) data to observe and monitor water bodies during different weather conditions, especially during the monsoon season. The average annual rainfall for both countries is determined, and spatiotemporal variations in monthly average rainfall are examined at regional and reservoir basin levels using the Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) dataset from 1981 to 2022. Water extents are derived for selected reservoirs using Sentinel-1 SAR Ground Range Detected (GRD) images in Vietnam and Sri Lanka from 2017 to 2022. The images are pre-processed and corrected using terrain correction and refined Lee filter. An automated thresholding algorithm, OTSU, distinguishes water and land, taking advantage of both VV and VH polarization data. The connected pixel count threshold is applied to enhance result accuracy. The results indicate a clear relationship between rainfall patterns and reservoir water extent, with increased precipitation during the monsoon season leading to higher water extents in the later months. This study contributes to understanding how rainfall variability impacts reservoir water resources in tropical monsoon regions. The preliminary findings can inform water resource management strategies and support these countries' decision-making processes related to hydropower generation, flood management, and irrigation.
_Keywords - Reservoir Extents, Hydroelectric reservoir, Rainfall, Sentinel-1, GEE_
## 1 Introduction
Tropical monsoon countries like Vietnam and Sri Lanka, which have substantial weather exposure, could be ideal for monitoring the variation of water bodies with respect to changing weather [1]. Global climate change influences the long-term trend of climate variables such as rainfall, temperature, and wind. It is essential to analyze the long-term trend in rainfall patterns when a study of change in reservoir water extent is conducted for a tropical region subject to periodic monsoons [2].
Vietnam uses hydroelectricity to fulfil a significant part of its electric power demand since the country has many reservoirs built for hydropower generation, flood management and irrigation [3]. Among them, 2900 reservoirs contribute to hydropower and irrigation, while the total reservoir capacity of the country is 28 billion m\({}^{3}\). According to the Department of Water Resource Management in Vietnam, the availability of total per capita renewable water resources is gradually declining and is predicted to be only 3100 m\({}^{3}\) by 2025, as it depends on the upstream countries [4]. Identifying water storage capacity is therefore a noteworthy problem in the country that needs to be addressed.
Sri Lanka is influenced by two monsoons, the southwest and the northeast, and its seasons consist of these two monsoons and two inter-monsoon periods. The direction of both monsoons shapes the rainfall pattern of the country. The southwest monsoon occurs from May to September and is most impactful for the country's southwest region. The northeast monsoon usually starts in December and continues till February, exerting more influence over the northeast region of the country [5]. Victoria is the largest power station contributing to electricity generation, with a capacity of 210 MW, while Kotmale provides a capacity of 201 MW; both reservoirs are in the Mahaweli cascade. Samanalawewa is also a leading hydropower plant contributing to the country's hydroelectricity, with a capacity of 124 MW, located in the Walawe cascade [6, 7, 8].
EO datasets are widely used to monitor the reservoir extent. Researchers can use high-resolution or moderate-resolution multi-spectral optical imagery from satellites with high revisit capacity for continuous monitoring, depending on their needs. Radar imagery can be used to overcome multiple problems with the use of optical imagery to map water bodies, especially during the monsoon season. Since the radar sensors are active sensors which can monitor the earth during day and night and have the capability of cloud penetration, they are highly recommended to be used in change detection during the rainy season [9].
The technological development of remote sensing instruments and advanced algorithms has led to precipitation products such as the CHIRPS (Climate Hazards Group InfraRed Precipitation with Station) data [10]. Google Earth Engine (GEE) is a cloud computing platform introduced by Google in 2010 to conduct geospatial analysis with big data. Before GEE, Amazon Web Services and Microsoft Azure were introduced for working with geospatial data; however, the benefit of GEE is that it supports more data types and is freely available. GEE is now widely used for spatial analyses in research on climate change and its impacts, agriculture, land use, disaster management, etc. The built-in functions, the ability to export the desired outputs for use in other applications like ArcGIS, global datasets created for specific uses, and the interactive programming environment enhance the usability of GEE for geospatial analyses [11]. Combining remotely sensed precipitation and Sentinel-1 data to observe the change in the extent of water bodies is the main objective of this research. We used CHIRPS daily precipitation data and Sentinel 1 GRD satellite imagery to detect the monthly changes in the extent of three hydroelectric reservoirs in Vietnam and three in Sri Lanka from 2017 to 2022. The present study aims to identify the relationship between the reservoir water extent derived from remotely sensed data and the rainfall trend of the relevant areas over 1981 - 2022.
## 2 Study Area
The study focuses on two tropical countries, Vietnam and Sri Lanka. Vietnam experiences high temperatures and humidity throughout the year as it lies in the tropical and temperate zones. The region is affected by the Southwest Monsoon from May to November, while the annual rainfall lies between 700 - 5000 mm. The flood season in the country lasts about six months, from July to December, and extreme flood events occur between late September and mid-October [12]. The country has 2900 reservoirs with a total capacity of 28 billion m\({}^{3}\) contributing to its hydropower and irrigation requirements [4]. We selected the Tri An, Yali and Thac Ba reservoirs, which contribute to the hydroelectricity generation in the country (**Fig 1**).
Sri Lanka lies in the tropical monsoon climate zone and has a land area of 65,610 km\({}^{2}\). The country experiences two monsoon periods: the southwest monsoon (May-September) and the northeast monsoon (December-February). The average annual rainfall varies with spatial location; the highest rainfall occurs over the central highlands, exceeding 5000 mm, while the southeast lowlands receive around 1000 mm [5]. Most of the hydropower stations within the country are operated by the CEB (Ceylon Electricity Board), and the output of these power plants has been heavily impacted by recent climate change and the resulting deviations in water use. The Mahaweli Cascade contains
the three largest hydropower plants in the country and contributes to the total hydropower by generating over 800 MW [13]. In the present study, we selected the Victoria, Kotmale and Samanalawewa reservoirs from Sri Lanka (**Fig 1**).
## 3 Methodology
The methodology (**Fig 2**) contains the comprehensive workflow of rainfall analysis and the reservoir extent calculation. The first phase describes the rainfall analysis, while the second phase explains the monthly water extent of the hydroelectric reservoirs.
### _Remote Sensing Datasets_
**Sentinel 1 SAR GRD Images.** The Sentinel 1 Ground Range Detected (GRD) dataset consists of a daily updated image collection available since 2014. Each Sentinel 1 scene is available in three resolutions, four band combinations and three instrument modes, and is pre-processed through thermal noise removal, radiometric calibration, and terrain correction. The study used imagery captured at 10 m resolution in interferometric wide swath (IW) mode to derive the water extent of the existing reservoirs. The pre-processed data were acquired using an analysis-ready data (ARD) cube with all the corrections applied to the Sentinel 1 SAR data. The ARD concept allows for rapid and easy use of complex datasets like SAR data, helping the user avoid the pre-processing complexities and concentrate on the use and analysis of the data.
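As an illustration, a Sentinel-1 GRD collection with the settings described above can be assembled in the GEE Python API roughly as follows; the region geometry and date range are placeholders.

```python
import ee
ee.Initialize()

# Placeholder region of interest (e.g., a point inside a reservoir basin).
roi = ee.Geometry.Point([105.45, 21.80])

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(roi)
      .filterDate('2017-01-01', '2022-12-31')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))   # interferometric wide swath
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select(['VV', 'VH']))

print('Number of scenes:', s1.size().getInfo())
```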
**CHIRPS Satellite Precipitation Dataset.** The Climate Hazards Group and the U.S. Geological Survey developed the Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) data, an IR-based precipitation dataset covering 1981 to the present. The CHIRPS data have a high spatial resolution of 0.05 degrees. The gridded rainfall time-series dataset was formed by integrating satellite estimates and gauge
Figure 1: Hydrobasins of selected reservoirs
observations with global climatology data. Here, we used the CHIRPS Daily (Version 2.0 Final) dataset to determine the rainfall trend of the study area.
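A minimal GEE Python sketch of the monthly average rainfall computation over a basin might look as follows; the basin geometry is a placeholder and the reducer settings are illustrative.

```python
import ee
ee.Initialize()

# Placeholder basin geometry (replace with the actual reservoir hydrobasin).
basin = ee.Geometry.Rectangle([80.4, 7.1, 80.9, 7.5])

chirps = (ee.ImageCollection('UCSB-CHG/CHIRPS/DAILY')
          .select('precipitation')
          .filterDate('1981-01-01', '2022-12-31'))

def monthly_mean(month):
    """Mean of the monthly rainfall totals (1981-2022) for one calendar month."""
    totals = []
    for year in range(1981, 2023):
        start = ee.Date.fromYMD(year, month, 1)
        totals.append(chirps.filterDate(start, start.advance(1, 'month')).sum())
    mean_img = ee.ImageCollection.fromImages(totals).mean()
    stats = mean_img.reduceRegion(reducer=ee.Reducer.mean(),
                                  geometry=basin, scale=5566)
    return stats.get('precipitation').getInfo()

print('Average October rainfall (mm):', monthly_mean(10))
```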
### Rainfall Analysis and Water Extent Mapping
The CHIRPS daily precipitation data were imported from the GEE data catalogue for 1981 - 2022. Average annual rainfall maps for the two countries were generated. Then, the regional/provincial-level monthly average rainfall was calculated in GEE to understand the spatiotemporal variation, and the monthly average rainfall values for each reservoir basin were calculated for the same period. The reservoir water extents were then mapped from SAR data using the OTSU algorithm. Sentinel 1 images in IW mode at 10 m resolution were loaded and corrected using terrain correction and a refined Lee filter, and the corrected imagery was uploaded to assets for further use. Both VV and VH polarizations were considered, and the VV and VH threshold values were obtained using the OTSU algorithm, which automatically detects an optimal threshold from the grey-level histogram based on the distribution of pixel values. The equation of the OTSU thresholding algorithm is,
\[\sigma^{2}(t)=P_{w}(t)\times\sigma_{w}^{2}(t)+P_{nw}(t)\times\sigma_{nw}^{2}(t) \tag{1}\]
Here, \(\sigma^{2}(t)\) is the weighted sum of the variances of the water and non-water classes for a candidate threshold \(t\). Moreover, \(P_{w}\), \(\sigma_{w}^{2}\) and \(P_{nw}\), \(\sigma_{nw}^{2}\) are the probabilities and variances of the water (w) and non-water (nw) classes, respectively.
Since the backscatter intensity of the features differs under different conditions, the automatic OTSU algorithm is used to determine dynamic threshold values for both the VV and VH bands that separate water and land [14]. A connected pixel count threshold was also used to increase the accuracy of the result by removing isolated pixels spuriously detected as water. These two steps were implemented for each corrected image to derive the monthly water extents for every reservoir from 2017 to 2022. Then, the water extent was calculated in square kilometres.
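For reference, a minimal NumPy implementation of the OTSU threshold in (1), applied to a backscatter array, could look like this; the synthetic data and 256-bin histogram are illustrative.

```python
import numpy as np

def otsu_threshold(values: np.ndarray, n_bins: int = 256) -> float:
    """Return the threshold minimizing the weighted within-class variance (Eq. 1)."""
    hist, edges = np.histogram(values, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()
    best_t, best_var = centers[0], np.inf
    for k in range(1, n_bins):
        p_w, p_nw = p[:k].sum(), p[k:].sum()           # class probabilities
        if p_w == 0 or p_nw == 0:
            continue
        mu_w = (p[:k] * centers[:k]).sum() / p_w
        mu_nw = (p[k:] * centers[k:]).sum() / p_nw
        var_w = (p[:k] * (centers[:k] - mu_w) ** 2).sum() / p_w
        var_nw = (p[k:] * (centers[k:] - mu_nw) ** 2).sum() / p_nw
        within = p_w * var_w + p_nw * var_nw           # weighted within-class variance
        if within < best_var:
            best_t, best_var = centers[k], within
    return best_t

# Example: bimodal VV backscatter in dB (water near -22 dB, land near -10 dB)
rng = np.random.default_rng(1)
vv = np.concatenate([rng.normal(-22, 1.5, 5000), rng.normal(-10, 2.0, 5000)])
t = otsu_threshold(vv)
water_mask = vv < t
print(f"threshold = {t:.2f} dB, water fraction = {water_mask.mean():.2f}")
```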
Figure 2: Methodology of the study
## 4 Results and Discussion
The line graph in **Fig 3** represents the annual average rainfall of the two countries; both have an annual average rainfall of over 1500 mm.
Several climate and environmental changes influence the change in reservoir water extent; the rainfall trend is one of the key influencing factors. Analyzing the rainfall pattern of the corresponding region or hydro basin is therefore important for understanding the variability of the reservoir water extent.
A box-and-whisker plot is a statistical visualization method that provides a synopsis of the distribution of a continuous variable. It is widely used to compare the distribution of variables across several groups and to determine the data spread, and a key advantage is that it indicates data outliers. Figure 4 presents the box-and-whisker plots of rainfall in Vietnam and Sri Lanka; according to these plots, the precipitation range and values during the monsoon period are higher than in the dry season. The box-and-whisker plots of the water extents of the three reservoirs in Vietnam and Sri Lanka are presented in **Figure 5** and **Figure 6**, respectively. The three reservoirs show a rise in reservoir extent after July; after October, the water extent shows only small deviations for those three reservoirs.
Figure 4: Box-and-Whisker Diagrams of Rainfall (top: Vietnam, bottom: Sri Lanka)
Figure 5: Box-and-Whisker Diagrams of Water Extents (Vietnam)
Monsoon precipitation is considered a primary source of water accumulation in the reservoirs (**Fig 7**), which is further used for irrigation and other services. The minimum and maximum water extents of the three selected reservoirs in Sri Lanka for 2022 were mapped. For the Victoria Reservoir, the minimum extent of 10.85 km\({}^{2}\) was estimated in April, while the maximum of 21.23 km\({}^{2}\) was reported in November. The rainfall over the hydro basin is at its minimum in January and at its maximum in October; the water extent of the Victoria Reservoir is 17.97 km\({}^{2}\) in January and 19.84 km\({}^{2}\) in October.
Kotmale Reservoir is also located in the Mahaweli Basin; therefore, it also receives its minimum rainfall in January and its maximum in October. The maximum water extent of the Kotmale reservoir in 2022 was reported in August, and the minimum in March; the respective minimum and maximum extents are 3.63 km\({}^{2}\) and 5.41 km\({}^{2}\). The water extent of the reservoir in January 2022 was 5.171 km\({}^{2}\), while it was 5.28 km\({}^{2}\) in October, the months when the minimum and maximum rainfall were reported (**Fig 8**).
The Samanalawewa reservoir in the Walawe basin also received its maximum rainfall for 2022 in October, while the minimum rainfall was received in January. The maximum water extent of 6.00 km\({}^{2}\) for Samanalawewa was found in June 2022, and the minimum water extent of 4.55 km\({}^{2}\) was estimated in March. The water extent in the month with the lowest rainfall is 5.74 km\({}^{2}\), and 4.92 km\({}^{2}\) in October, the month with the highest rainfall for 2022.
Figure 6: Box-and-Whisker Diagrams of Water Extents (Sri Lanka)
The maximum rainfall in 2022 for the Chay hydro basin, where Thac Ba is located, was received in August, and the minimum rainfall was reported in December. The minimum water extent of Thac Ba was reported in April at a value of 132.86 km\({}^{2}\), and the maximum water extent of 187.44 km\({}^{2}\) was identified in November. The water extent of the Thac Ba reservoir in August, the month with maximum rainfall, was 155.25 km\({}^{2}\), while it was 186.37 km\({}^{2}\) in December, the month with minimum rainfall. The Yaly Reservoir, located in the Se San basin, received its maximum rainfall in August 2022, while the minimum occurred in February.
The maximum water extent of Yaly occurred in October, while April reported the minimum water extent in 2022. The minimum and maximum water extent values are 22.40 km\({}^{2}\) and 36.82 km\({}^{2}\), respectively. The water extent of Yaly is 24.40 km\({}^{2}\) in August and 31.28 km\({}^{2}\) in February. The maximum water extent of Tri An was determined in January, while the minimum value was in June 2022; the minimum extent is 199.40 km\({}^{2}\), while the maximum is 286.01 km\({}^{2}\). The Dong Nai hydro basin, where Tri An is situated, received its maximum rainfall in September and its minimum in January 2022. The reservoir extent of Tri An in September 2022 is 255 km\({}^{2}\) (**Fig 9**).
According to the graphical representations of rainfall and water extents, the reservoir extents increase gradually some time after the rainfall increases. Similarly, during the dry seasons, the reservoir extents gradually decrease; these deviations in the reservoir extents are not instantaneous. Human interventions for reservoir management and changes in inflow and dam operations also affect the variation of reservoir water extents [15, 16]. The excess water of these reservoirs is released to the nearby lowlands to maintain adequate water volume and pressure in the reservoirs. Therefore, the water extents monitored using remotely sensed data could show only minimal change with respect to time.
Figure 7: Water Extent and Rainfall Graph (first row – Sri Lanka, second row- Vietnam)
The dynamic monitoring of reservoir water extent can support researchers and decision-makers in protecting reservoir boundaries without harming the associated ecosystems, as well as in protecting water quality, conserving biodiversity, regulating runoff, etc. [17]. Sustainability goals can be achieved with effective and robust water management policies, which could be developed after periodic monitoring of existing hydroelectric reservoirs.
Figure 8: Maximum and Minimum Reservoir Extent Maps for Sri Lanka
## 5 Conclusion
The study focused on analyzing the rainfall patterns and reservoir water extents using cloud platforms for the two tropical monsoon countries, Vietnam and Sri Lanka. The study utilized remote sensing data, including Sentinel 1 SAR imagery, to assess the monthly water extent of selected hydroelectric reservoirs from 2017 to 2022. The results indicated that both Vietnam and Sri Lanka experience high annual average rainfall, with variations depending on the monsoon seasons. The analysis of rainfall patterns revealed higher precipitation during the monsoon period compared to the dry season. The box and whisker plots demonstrated the relationship between rainfall and reservoir water extents, showcasing the rise in water levels during and after the monsoon season.
The study provided valuable insights into the dynamics of reservoir water extents and their correlation with rainfall trends. The change of reservoir water extent follows the monsoon precipitation pattern.
Figure 9: Maximum and Minimum Reservoir Extent Maps of Vietnam
Understanding these relationships is crucial for effective water resource management, particularly in the context of hydropower generation, flood management, and irrigation in both countries.
## Acknowledgement
This publication has been prepared as an output of the CGIAR Research Initiative on Low Emission Food Systems (Mitigate+). We would like to thank all funders who supported this research through their contributions to the CGIAR Trust Fund: [https://www.cgiar.org/funders/](https://www.cgiar.org/funders/)
|
2308.15803 | Funnel-based Control for Reach-Avoid-Stay Specifications | The paper addresses the problem of controller synthesis for control-affine
nonlinear systems to meet reach-avoid-stay specifications. Specifically, the
goal of the research is to obtain a closed-form control law ensuring that the
trajectories of the nonlinear system, reach a target set while avoiding all
unsafe regions and adhering to the state-space constraints. To tackle this
problem, we leverage the concept of the funnel-based control approach. Given an
arbitrary unsafe region, we introduce a circumvent function that guarantees the
system trajectory to steer clear of that region. Subsequently, an adaptive
funnel framework is proposed based on the target, followed by the construction
of a closed-form controller using the established funnel function, enforcing
the reach-avoid-stay specifications. To demonstrate the efficacy of the
proposed funnel-based control approach, a series of simulation experiments have
been carried out. | Ratnangshu Das, Pushpak Jagtap | 2023-08-30T07:16:25Z | http://arxiv.org/abs/2308.15803v1 | # Funnel-based Control for Reach-Avoid-Stay Specifications
###### Abstract
The paper addresses the problem of controller synthesis for control-affine nonlinear systems to meet reach-avoid-stay specifications. Specifically, the goal of the research is to obtain a closed-form control law ensuring that the trajectories of the nonlinear system, reach a target set while avoiding all unsafe regions and adhering to the state-space constraints. To tackle this problem, we leverage the concept of the funnel-based control approach. Given an arbitrary unsafe region, we introduce a circumvent function that guarantees the system trajectory to steer clear of that region. Subsequently, an adaptive funnel framework is proposed based on the target, followed by the construction of a closed-form controller using the established funnel function, enforcing the reach-avoid-stay specifications. To demonstrate the efficacy of the proposed funnel-based control approach, a series of simulation experiments have been carried out.
## I Introduction
In recent years, there has been significant interest in the study of reach-avoid-stay (RAS) specifications for the safe and reliable operation of autonomous systems. Essentially the system state trajectory should eventually reach a target set while avoiding any unsafe set and respecting state space constraints. Synthesizing controllers for these RAS specifications is an important class of control problem as they serve as building blocks for complex task specifications [1] and enable the design of robust control strategies in safety-critical control problems such as trajectory regulation, motion planning, and obstacle avoidance.
With the onset of the usage of formal languages for specifying complex tasks, symbolic control [2, 3] has emerged as a powerful tool. [4] proposed a fixed-point algorithm as a computational improvement over the abstraction-based methods for control synthesis in a reach-stay scenario. The authors in [5] presented a scalable controller synthesis technique by leveraging the concept of barrier functions in symbolic control. In spite of all these attempts at improving computational efficiency, these approaches still face challenges related to the so-called curse of dimensionality.
In contrast to formal methods, nonlinear control approaches like barrier-based control [6] ensure formal guarantees of safety and stability without the need for state-space discretization. Authors in [7] proposed implementing control Lyapunov-barrier functions to establish sufficient conditions for reach-avoid-stay specifications, specifically in the context of a system experiencing a Hopf-bifurcation. In [8], researchers present a stochastic analog of Lyapunov-barrier functions to characterize probabilistic reach-avoid-stay specifications, taking robustness into account. However, although these methods provide more efficient control synthesis, the reliance on optimization techniques can still lead to increased computational complexity, making barrier function-based methods computationally demanding, especially for large and high-dimensional systems.
On the other hand, the funnel-based control approach [9] offers the distinct advantage of designing a closed-loop control scheme satisfying a required tracking performance. Owing to the computationally tractable nature of funnel-based control, numerous successful applications have been reported in the literature [10]. From solving tracking control problems for unknown nonlinear systems [11] to handling multi-agent systems subjected to complex task specifications [12], researchers have demonstrated its efficacy in a wide range of control problems. Moreover, as the feedback control algorithm actively adjusts the system's trajectory to guide it towards the target, it has been effective in enforcing reachability specifications, i.e., reaching a target while respecting state space constraints [13, 14].
However, active obstacle avoidance using funnel-based control can be a challenging problem. One of the main difficulties lies in designing accurate and efficient funnel representations to ensure safe navigation around obstacles while maintaining reach-avoid-stay specifications. In [15], authors consider a pre-established trajectory around the obstacles and redefine the problem as implementing control funnel functions for path following. Although this approach ensures that the system remains in a safe region around the reference trajectory, it fails to utilize the inherent ability of funnel constraints to avoid obstacles.
This paper puts forward, for the very first time, a novel approach to integrate avoid-specifications within the funnel-based control framework. By adapting the funnel constraints, the closed-form control law dynamically adjusts the robot's trajectory to avoid any general unsafe set while maintaining the desired performance criteria. The effectiveness of this approach in satisfying reach-avoid specifications is further demonstrated through simulation studies, highlighting its potential to enhance the capabilities of robotic systems in navigating complex environments.
## II Preliminaries and Problem Formulation
### _Notations_
The symbols \(\mathbb{N}\), \(\mathbb{R}\), \(\mathbb{R}^{+}\), and \(\mathbb{R}^{+}_{0}\) denote the set of natural, real, positive real, and nonnegative real numbers, respectively. We use \(\mathbb{R}^{n\times m}\) to denote a vector space of real matrices with \(n\) rows and \(m\) columns. To represent a column vector with \(n\) rows, we use \(\mathbb{R}^{n}\). We represent the Euclidean
norm using \(\|\cdot\|\). For \(a,b\in\mathbb{R}\) and \(a<b\), we use \((a,b)\) to represent an open interval in \(\mathbb{R}\). For \(a,b\in\mathbb{N}\) and \(a\leq b\), we use \([a;b]\) to denote a closed interval in \(\mathbb{N}\). To denote a vector \(x\in\mathbb{R}^{n}\) with entries \(x_{1},\ldots,x_{n}\), we use \(\mathsf{col}(x_{1},\ldots,x_{n})\), where \(x_{i}\in\mathbb{R},i\in[1;n]\) denotes the \(i\)-th element of the vector \(x\in\mathbb{R}^{n}\). A diagonal matrix in \(\mathbb{R}^{n\times n}\) with diagonal entries \(d_{1},\ldots,d_{n}\) is denoted by \(\mathsf{diag}(d_{1},\ldots,d_{n})\). Given \(N\in\mathbb{N}\) sets \(\mathbf{X}_{i}\), \(i\in[1;N]\), the Cartesian product of the sets is given by \(\mathbf{X}=\prod_{i\in[1;N]}\mathbf{X}_{i}:=\{(x_{1},\ldots,x_{N})|x_{i}\in\mathbf{X}_{i},i\in[1;N]\}\). Consider a set \(\mathbf{X}_{a}\subset\mathbb{R}^{n}\); its projection on the \(i\)th dimension, where \(i\in[1;n]\), is given by the interval \([\underline{\mathbf{X}}_{ai},\overline{\mathbf{X}}_{ai}]\subset\mathbb{R}\), where \(\underline{\mathbf{X}}_{ai}:=\min\{x_{i}\in\mathbb{R}\mid[x_{1},\ldots,x_{n}]\in\mathbf{X}_{a}\}\) and \(\overline{\mathbf{X}}_{ai}:=\max\{x_{i}\in\mathbb{R}\mid[x_{1},\ldots,x_{n}]\in\mathbf{X}_{a}\}\). We further define the hyper-rectangle \([\![\mathbf{X}_{a}]\!]=\prod_{i\in[1;n]}\left[\underline{\mathbf{X}}_{ai},\overline{\mathbf{X}}_{ai}\right]\). We denote the empty set by \(\emptyset\). The space of bounded continuous functions is denoted by \(\mathcal{C}\). Given a compact set \(\mathbf{X}\), \(int(\mathbf{X})\) represents the interior of the set and \(\partial\mathbf{X}=\mathbf{X}\setminus int(\mathbf{X})\) represents the boundary of \(\mathbf{X}\). \(\overline{\max}\) and \(\overline{\min}\) are smooth approximations of the non-smooth \(\max\) and \(\min\) functions, defined as \(\overline{\max}(a,b)\approx\frac{1}{\nu}\ln(\mathsf{e}^{\nu a}+\mathsf{e}^{\nu b})\) and \(\overline{\min}(a,b)\approx-\frac{1}{\nu}\ln(\mathsf{e}^{-\nu a}+\mathsf{e}^{-\nu b})\), respectively. The sign function is defined as \(\mathsf{sign}(x):=\begin{cases}-1&\text{if }x<0\\ 1&\text{if }x\geq 0\end{cases}\).
### _System Definition_
Consider the following control-affine nonlinear system:
\[\mathcal{S}:\dot{x}=f(x)+g(x)u, \tag{1}\]
where \(x(t)=\mathsf{col}(x_{1}(t),\ldots,x_{n}(t))\in\mathbf{X}\subset\mathbb{R}^{n}\) and \(u(t)\in\mathbb{R}^{m}\) are the state and control input vectors, respectively. The state space of the system is defined by the closed and connected set \(\mathbf{X}\). The functions \(f:\mathbf{X}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbf{X}\rightarrow\mathbb{R}^{n\times m}\) satisfy Assumption 1.
**Assumption 1**: \(f\) _and \(g\) are locally Lipschitz, and \(g(x)g^{T}(x)\) is positive definite for all \(x\in\mathbb{R}^{n}\)._
### _Problem Formulation_
The paper considers the desired behavior of the system \(\mathcal{S}\), in (1), defined in the form of reach-avoid-stay specifications.
Let the compact and connected set \(\mathbf{T}\subset\mathbf{X}\) be the target set, the set \(\mathbf{U}\subset\mathbf{X}\) be an unsafe region containing \(n_{u}\in\mathbb{N}\) unsafe sets defined as, \(\mathbf{U}=\bigcup_{j\in[1;n_{u}]}\mathcal{U}^{j}\), where \(\mathcal{U}^{j}\subset\mathbf{X}\) is a convex, compact, and connected set, representing, the \(j\)th unsafe set. Thus, in general, the unsafe region \(\mathbf{U}\), although necessarily compact, can be disconnected and nonconvex.
Now, we will formally define the main controller synthesis problem considered in this work.
**Problem II.1**: _Given a control-affine system \(\mathcal{S}\) in (1) with Assumption 1, target set \(\mathbf{T}\subset\mathbf{X}\), and unsafe region \(\mathbf{U}\), as defined above, design a closed-form controller to ensure the satisfaction of the reach-avoid-stay specification, i.e., for a given initial position \(x(0)\in\mathbf{X}\setminus\mathbf{U}\), there exists \(t\in\mathbb{R}_{0}^{+}\), such that, \(x(t)\in\mathbf{T}\) and for all \(t\in\mathbb{R}_{0}^{+}:x(t)\in\mathbf{X}\setminus\mathbf{U}\)._
We approach this problem using a funnel-based control strategy to enforce the reachability specification (Section III) and then dynamically modify the funnel around the unsafe region to ensure that the system trajectory avoids it while respecting the state constraints (Sections IV-V).
**Remark II.2**: _If \(\mathbf{X}\) is of any arbitrary shape, we redefine the state space as the hyper-rectangle \(\hat{\mathbf{X}}:=[\![\mathbf{X}]\!]=\prod_{i\in[1;n]}[\underline{\mathbf{X}}_{i},\overline{\mathbf{X}}_{i}]\) and expand the unsafe region to \(\hat{\mathbf{U}}=\mathbf{U}\cup([\![\mathbf{X}]\!]\setminus\mathbf{X})\). Here, \([\underline{\mathbf{X}}_{i},\overline{\mathbf{X}}_{i}]\) represents the projection of the set \(\mathbf{X}\) on the \(i\)th dimension. Note that adding \([\![\mathbf{X}]\!]\setminus\mathbf{X}\) to the unsafe set \(\mathbf{U}\) and following Algorithm 1 in Section V enforces the stay specification for an arbitrary state space \(\mathbf{X}\)._
## III Controller for Reachability Specification
In this section, we formulate a funnel-based control strategy aimed at guaranteeing that the system's trajectory adheres to the reachability specifications, i.e., given a target set \(\mathbf{T}\subset\mathbf{X}\) and a given initial position \(x(0)\in\mathbf{X}\), the controlled trajectory will eventually reach the target set in finite time. To solve the reachability problem, we leverage the funnel-based control approach [9]. We first define the funnel constraints over the trajectory as follows:
\[\underbrace{-\underline{c}_{i}\rho_{i}(t)+\eta_{i}}_{\rho_{i,L}(t)}<x_{i}(t)<\underbrace{\overline{c}_{i}\rho_{i}(t)+\eta_{i}}_{\rho_{i,U}(t)},\quad\forall i\in[1;n], \tag{2}\]
where \(\eta=\mathsf{col}(\eta_{1},\ldots,\eta_{n})\in int(\mathbf{T})\), \(\underline{c}_{i}=\eta_{i}-\underline{\mathbf{X}}_{i}\) and \(\overline{c}_{i}=\overline{\mathbf{X}}_{i}-\eta_{i}\). \(\rho_{i}(t)\) is the continuously differentiable, positive and non-increasing funnel function defined as:
\[\rho_{i}(t)=(\rho_{i,0}-\rho_{i,\infty})e^{-l_{i}t}+\rho_{i,\infty} \tag{3}\]
with \(\rho_{i,0}=1\), \(\rho_{i,\infty}\in\left(0,\min\left(\rho_{i,0},\frac{|\mathbf{T}_{i}-\eta_{i}|}{ \max\{\underline{c}_{i},\overline{c}_{i}\}}\right)\right)\) and \(l_{i}\in\mathbb{R}_{0}^{+}\) governs the lower bound of convergence rate.
The above choice of \(\rho_{i,0}\), \(\underline{c}_{i}\), and \(\overline{c}_{i}\) ensures that the initial state of the system \(x_{i}(0)\) is within \([\underline{\mathbf{X}}_{i},\overline{\mathbf{X}}_{i}],\forall i\in[1;n]\) and by the aforementioned choice of \(\rho_{i,\infty}\), as \(t\rightarrow\infty,x(t)\in\prod_{i\in[1;n]}\left(\eta_{i}+[-\underline{c}_{i} \rho_{i,\infty},\overline{c}_{i}\rho_{i,\infty}]\right)\subset\mathbf{T}\). Thus, enforcing system state inside funnel constraints (2) ensures reachability. An example of a funnel designed for enforcing reachability specification is shown in Figure 1 (a).
To design a controller enforcing condition (2), we first define the normalized error \(e(x,t)=\mathsf{col}(e_{1}(x_{1},t),\ldots e_{n}(x_{n},t))\), as
\[e_{i}(x_{i},t)=\frac{x_{i}(t)-\frac{1}{2}(\rho_{i,U}(t)+\rho_{i,L}(t))}{\frac{ 1}{2}(\rho_{i,U}(t)-\rho_{i,L}(t))},\forall i\in[1;n]. \tag{4}\]
Now the corresponding constrained region \(\mathbb{D}\) can be represented by \(\mathbb{D}:=\{e(x,t):e_{i}(x_{i},t)\in(-1,1),\forall i\in[1;n]\}\). Next the normalized error is transformed through a smooth and strictly increasing transformation function \(y:\mathbb{D}\rightarrow\mathbb{R}^{n}\) with \(y(0)=0\). The transformed error is then defined as \(\varepsilon=\mathsf{col}(\varepsilon_{1},\ldots,\varepsilon_{n})\), where
\[\varepsilon_{i}(x,t)=y(e_{i}(x,t))=\ln\left(\frac{1+e_{i}(x,t)}{1-e_{i}(x,t)} \right),\forall i\in[1;n]. \tag{5}\]
By this definition, if the transformed error \(\varepsilon(x,t)\) is bounded, then the normalized error \(e(x,t)\) remains within the constrained region \(\mathbb{D}\) and the state \(x(t)\) adheres to (2). We also define \(\xi(x,t)=\mathsf{diag}(\xi_{1}(x,t),\ldots,\xi_{n}(x,t))\) with
\[\xi_{i}(x,t)=\frac{4}{\rho_{i,d}(t)(1-e_{i}(x,t)^{2})},\forall i\in[1;n] \tag{6}\]
where \(\rho_{i,d}=\rho_{i,U}-\rho_{i,L}\).
Now, in Theorem III.1, we propose a control strategy \(u(x,t)\) such that the state trajectory is constrained within the funnel.
**Theorem III.1**: _Consider the control-affine system \(\mathcal{S}\) given in (1) with Assumptions 1. Given a target set \(\mathbf{T}\), the funnel constraints \(\rho_{i,U}(t)\) and \(\rho_{i,L}(t)\) (2), the control strategy_
\[u(x,t)=-g(x)^{T}(g(x)g(x)^{T})^{-1}\\ \left(k\xi(x,t)\varepsilon(x,t)-\frac{1}{2}\dot{\rho}_{d}(t)e(x, t)\right) \tag{7}\]
_will drive the state trajectory \(x(t)\), to the target set \(\mathbf{T}\) in finite time, i.e., \(\exists t\in\mathbb{R}_{0}^{+}:x(t)\in\mathbf{T}\). Here, \(k\) is any positive constant, \(\rho_{d}:=\mathsf{diag}(\rho_{1,d},\ldots,\rho_{n,d})\), with \(\rho_{i,d}=\rho_{i,U}-\rho_{i,L}\), \(e(x,t)\), \(\varepsilon(x,t)\), and \(\xi(x,t)\) are defined in (4), (5), and (6), respectively._
The proof follows on similar grounds as that of Theorem IV.3 and is omitted here due to space constraints.
Thus, given a system \(\mathcal{S}\) in (1) and a target set \(\mathbf{T}\) in the state space \(\mathbf{X}\), we can define a funnel and the well-defined closed-form control law (7) that will guide the system trajectory to the target, enforcing the reachability specification.
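To illustrate how (2), (3), and (7) fit together, the sketch below simulates a scalar single-integrator (\(f=0\), \(g=1\)) under the funnel controller; the gains, funnel parameters, and target are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative scalar setup: state space X = [-2, 2], target point eta = 1.0
eta, X_lo, X_hi = 1.0, -2.0, 2.0
c_lo, c_hi = eta - X_lo, X_hi - eta                 # \underline{c}, \overline{c}
rho_inf, l, k, dt = 0.05, 1.0, 2.0, 1e-4

def rho(t):                                         # funnel function, Eq. (3)
    return (1.0 - rho_inf) * np.exp(-l * t) + rho_inf

def rho_bounds(t):                                  # funnel constraints, Eq. (2)
    return eta - c_lo * rho(t), eta + c_hi * rho(t)

x, t = -1.5, 0.0                                    # initial state inside the funnel
for _ in range(50_000):
    lo, hi = rho_bounds(t)
    rho_d, rho_s = hi - lo, hi + lo
    e = (x - 0.5 * rho_s) / (0.5 * rho_d)           # normalized error, Eq. (4)
    eps = np.log((1 + e) / (1 - e))                 # transformed error, Eq. (5)
    xi = 4.0 / (rho_d * (1 - e**2))                 # Eq. (6)
    rho_d_dot = -(c_lo + c_hi) * l * (1.0 - rho_inf) * np.exp(-l * t)
    u = -(k * xi * eps - 0.5 * rho_d_dot * e)       # control law, Eq. (7) with g = 1
    x, t = x + dt * u, t + dt                       # single-integrator dynamics

print(f"final state x(T) = {x:.3f} (target eta = {eta})")
```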
## IV Extension to Reach-Avoid-Stay Specification
In this section, we begin by exploring the integration of avoidance of unsafe regions within the funnel-based control framework. Subsequently, we present an adaptive funnel design strategy that enables the successful accomplishment of reach-avoid-stay tasks.
### _Design of Circumvent Function_
Consider an unsafe region \(\mathbf{U}\) with \(n_{u}\) compact, connected and convex sets \(\mathcal{U}^{j}\), for \(j\in[1;n_{u}]\). We propose to introduce the avoid specifications through a circumvent function \(\beta^{j}(t)\), \(j\in[1;n_{u}]\).
**Remark IV.1**: _Note that although we are putting an assumption on \(\mathcal{U}^{j}\) to be convex and connected, the general unsafe zone \(\mathbf{U}\) can be concave and disconnected. This will further be elaborated upon in Section V._
First, given an initial state \(x(0)\in\mathbf{X}\setminus\mathbf{U}\), we obtain the time range \([\underline{t}^{j},\overline{t}^{j}]\) over which the system trajectory \(x(t)\), on application of the control law \(u(x,t)\) (7) to satisfy the reachability specification, intersects with the \(j\)th unsafe set \(\mathcal{U}^{j}\); it is given by \(\underline{t}^{j}=\inf\{t\in\mathbb{R}^{+}:x(t)\cap\mathcal{U}^{j}\neq\emptyset\}\) and \(\overline{t}^{j}=\sup\{t\in\mathbb{R}^{+}:x(t)\cap\mathcal{U}^{j}\neq\emptyset\}\).
Let the first unsafe set that the system trajectory intersects be \(\mathcal{U}^{\hat{j}}\), where
\[\hat{j}=\arg\min_{j\in[1;n_{u}]}\underline{t}^{j}. \tag{8}\]
Following this, we will discuss the introduction of the circumvent function and adaptive funnel design to steer clear of \(\mathcal{U}^{j}\). The subsequent extension to deal with the entire unsafe region \(\mathbf{U}\) with multiple disconnected concave unsafe sets is presented in Section V.
Further, note that the system's trajectory enters the unsafe zone \(\mathcal{U}^{\hat{j}}\) if and only if \(\exists t\in\mathbb{R}^{+}\) such that \(x_{i}(t)\cap[\underline{\mathcal{U}}^{\hat{j}}_{i},\overline{\mathcal{U}}^{\hat{j}}_{i}]\neq\emptyset,\forall i\in[1;n]\). Hence, to satisfy the avoid specification, it is sufficient to introduce the circumvent function only in one dimension \(i^{\hat{j}}\), given by
\[i^{\hat{j}}=\arg\min_{i\in[1;n]}\hat{t}^{\hat{j}}_{i}, \tag{9}\]
where \(\hat{t}^{\hat{j}}_{i}=\inf\{t\in\mathbb{R}^{+}:x(t)\cap[\underline{\mathcal{ U}}^{\hat{j}}_{i},\overline{\mathcal{U}}^{\hat{j}}_{i}]\neq\emptyset\}\) and \([\underline{\mathcal{U}}^{\hat{j}}_{i},\overline{\mathcal{U}}^{\hat{j}}_{i}]\) is the projection of \(\mathcal{U}^{\hat{j}}\) in the \(i\)th dimension. Note that, \(i^{\hat{j}}\) may not be unique and in the case, the trajectory enters the projections of \(\mathcal{U}^{\hat{j}}\) in multiple dimensions at the same time, the \(\arg\min\) function returns \(i^{\hat{j}}\) randomly from those multiple alternatives. The advantage of this random selection will be discussed in Section V.
We also have the liberty to choose between modifying either the upper or the lower constraint boundary of the funnel. Except in the scenarios where \(\underline{\mathcal{U}}^{\hat{j}}_{i}=\underline{\mathbf{X}}_{i}\) or \(\overline{\mathcal{U}}^{\hat{j}}_{i}=\overline{\mathbf{X}}_{i}\), in which the circumvent function should necessarily be introduced in the lower and upper constraint boundary, respectively (this can be visualized as a wall-shaped obstacle, where there is no space between the state-space boundary and the obstacle at one end), we randomly choose between the two options. Although an optimal alternative can easily be chosen, the advantage of random selection is discussed in Section V.
We define the circumvent function \(\beta\) on the lower constraint boundary for \(i=i^{\hat{j}}\) as:
\[\beta^{\hat{j}}_{i}(t)=\begin{cases}B^{\hat{j}}\mathbf{e}^{\frac{-k^{\hat{j}}\left(t-m^{\hat{j}}\right)^{2}}{\left(r^{\hat{j}}\right)^{2}-\left(t-m^{\hat{j}}\right)^{2}}}+\underline{\mathbf{X}}_{i},&\forall t\in T_{act}\\ \underline{\mathbf{X}}_{i},&\forall t\in\mathbb{R}^{+}\setminus T_{act}\end{cases} \tag{10}\]
where \(B^{\hat{j}}=\overline{\mathcal{U}}^{\hat{j}}_{i}-\underline{\mathbf{X}}_{i}+\delta B\), \(m^{\hat{j}}:=\frac{\underline{t}^{\hat{j}}+\overline{t}^{\hat{j}}}{2}\), \(r^{\hat{j}}:=\frac{\overline{t}^{\hat{j}}-\underline{t}^{\hat{j}}}{2}+\delta t\), and \(\delta t\in\mathbb{R}^{+}\) is a tolerance factor. The function is active in the time range \(T_{act}=[\underline{t}^{\hat{j}}-\delta t,\overline{t}^{\hat{j}}+\delta t]\), during which the system trajectory must avoid \(\mathcal{U}^{\hat{j}}\). The parameter \(\delta B\) governs how far the trajectory should stay clear of \(\mathcal{U}^{\hat{j}}\). \(k^{\hat{j}}\in\mathbb{R}^{+}\) is a small positive constant that determines the smoothness of the circumvent function.
Similarly, we define a circumvent function on the upper constraint boundary as
\[\beta^{\hat{j}}_{i}(t)=\begin{cases}-B^{\hat{j}}\mathbf{e}^{\frac{-k^{\hat{j}}\left(t-m^{\hat{j}}\right)^{2}}{\left(r^{\hat{j}}\right)^{2}-\left(t-m^{\hat{j}}\right)^{2}}}+\overline{\mathbf{X}}_{i},&\forall t\in T_{act}\\ \overline{\mathbf{X}}_{i},&\forall t\in\mathbb{R}^{+}\setminus T_{act}\end{cases} \tag{11}\]
with \(B^{\hat{j}}=\overline{\mathbf{X}}_{i}-\underline{\mathcal{U}}^{\hat{j}}_{i}+\delta B\), and the rest of the parameters are the same as above.
An example of the introduction of the circumvent function on the lower constraint of a funnel is shown in Figure 1 (b).
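A small numerical sketch of the bump-shaped circumvent function in (10) is given below; the obstacle bounds, entry/exit times, and tolerances are illustrative values.

```python
import numpy as np

# Illustrative parameters: state-space lower bound, obstacle projection, timing
X_lo, U_lo, U_hi = -2.0, -0.5, 0.5        # \underline{X}_i and obstacle bounds in dimension i
t_in, t_out, dt_tol, dB, k_j = 1.0, 2.0, 0.2, 0.1, 0.05

B = U_hi - X_lo + dB                       # bump height clears the obstacle by dB
m = 0.5 * (t_in + t_out)                   # center of the activation window
r = 0.5 * (t_out - t_in) + dt_tol          # half-width of the activation window

def circumvent(t: np.ndarray) -> np.ndarray:
    """Circumvent function of Eq. (10) on the lower constraint boundary."""
    beta = np.full_like(t, X_lo, dtype=float)
    active = np.abs(t - m) < r             # T_act = [t_in - dt_tol, t_out + dt_tol]
    s = t[active] - m
    beta[active] = B * np.exp(-k_j * s**2 / (r**2 - s**2)) + X_lo
    return beta

t = np.linspace(0.0, 3.0, 301)
print(circumvent(t).max())                 # peaks near U_hi + dB at t = m
```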
### _Adaptive Funnel Design_
Given a target set \(\mathbf{T}\) and obstacle \(\mathcal{U}^{\hat{j}}\), choose a point \(\eta\in int(\mathbf{T}\setminus\mathbf{U})\). Now, according to (2), construct the funnel constraints \(\rho_{L}\) and \(\rho_{U}\) to satisfy the reachability specification. As defined in the previous subsection, we characterize the obstacle using the circumvent function \(\beta^{\hat{j}}(t)\), and we now incorporate it into the funnel design. To solve Problem II.1, we propose the following adaptive funnel constraints.
\[\text{If }\beta_{i}^{\hat{j}}\text{ is introduced on }\rho_{i,L}:\begin{cases}\gamma_{i,L}(t):=\overline{\max}(\rho_{i,L}(t),\beta_{i}^{\hat{j}}(t)),\\ \gamma_{i,U}(t):=\rho_{i,U}(t)+\alpha_{i}(t),\end{cases} \tag{12}\]
\[\text{If }\beta_{i}^{\hat{j}}\text{ is introduced on }\rho_{i,U}:\begin{cases}\gamma_{i,L}(t):=\rho_{i,L}(t)-\alpha_{i}(t),\\ \gamma_{i,U}(t):=\overline{\min}(\rho_{i,U}(t),\beta_{i}^{\hat{j}}(t)),\end{cases} \tag{13}\]
The modifications in the constraints of the funnel are captured by a continuously differentiable update function, \(\alpha(t)=\mathsf{col}(\alpha_{1}(t),\ldots,\alpha_{n}(t))\). The adaptive law governing the dynamics of the update function is defined as:
\[\dot{\alpha}_{i}(t)=\frac{\theta_{i}(t)}{\psi_{i}(t)+\alpha_{i}(t)}-\kappa \alpha_{i}(t),\ \alpha_{i}(0)=0, \tag{14}\]
where \(\psi_{i}(t)=\rho_{i,U}(t)-\beta_{i}^{\hat{j}}(t)-\mu\) if \(\beta_{i}^{\hat{j}}\) is introduced on the lower constraint \(\rho_{i,L}\), and \(\psi_{i}(t)=\beta_{i}^{\hat{j}}(t)-\rho_{i,L}(t)-\mu\) if \(\beta_{i}^{\hat{j}}\) is introduced on the upper constraint \(\rho_{i,U}\), with \(\mu\in\mathbb{R}^{+}\) as a tolerance factor. \(\theta_{i}(t)\) acts as a trigger, activating the first part of the update function only when the reach and avoid specifications are in conflict within a tolerance of \(\mu\), and is given by:
\[\theta_{i}(t)=\theta_{o}(1-\mathsf{sign}(\psi_{i}(t))),\]
where \(\theta_{o}\in\mathbb{R}^{+}\) controls the deviation of the funnel around the circumvent function.
Further, the non-smooth sign function is approximated by the smooth function \(\tanh\). When the conflict is resolved, \(\theta_{i}(t)\) becomes 0 and the second part decays \(\alpha_{i}(t)\) exponentially back to zero with a rate of decay governed by constant \(\kappa\). An example of how the circumvent function modifies the funnel is shown in Figure 1 (c).
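The interplay between \(\psi\), \(\theta\), and \(\alpha\) in (14) can be illustrated with a simple Euler integration; the tolerance, gains, and the shape of \(\psi\) below are illustrative stand-ins for the quantities defined above.

```python
import numpy as np

theta_o, kappa, mu, nu, dt = 1.0, 2.0, 0.05, 50.0, 1e-3

def psi(t):
    """Illustrative clearance between the reach funnel and the circumvent
    function: it dips below zero around t = 1.5 (reach and avoid conflict)."""
    return 0.4 - 0.6 * np.exp(-8.0 * (t - 1.5) ** 2) - mu

alpha, t, history = 0.0, 0.0, []
for _ in range(3000):
    theta = theta_o * (1.0 - np.tanh(nu * psi(t)))       # smooth trigger (sign ~ tanh)
    alpha_dot = theta / (psi(t) + alpha) - kappa * alpha  # update law, Eq. (14)
    alpha, t = alpha + dt * alpha_dot, t + dt
    history.append(alpha)

print(f"peak alpha = {max(history):.3f}, final alpha = {history[-1]:.4f}")
```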
Let us now define \(\gamma_{L}=\mathsf{col}(\gamma_{1,L},\ldots,\gamma_{n,L})\), \(\gamma_{U}=\mathsf{col}(\gamma_{1,U},\ldots,\gamma_{n,U})\), \(\gamma_{d}=\mathsf{diag}(\gamma_{1,U}-\gamma_{1,L},\ldots,\gamma_{n,U}-\gamma _{n,L})\), and \(\gamma_{s}=\mathsf{col}(\gamma_{1,U}+\gamma_{1,L},\ldots,\gamma_{n,U}+\gamma_ {n,L})\).
**Lemma IV.2**: _\(\gamma_{s}(t),\dot{\gamma}_{s}(t),\gamma_{d}(t),\dot{\gamma}_{d}(t)\in\mathcal{C}\)._
From definitions (3),(10), and (11), one has \(\rho(t)\in\mathcal{C}\) and \(\dot{\beta}^{j}(t)\in\mathcal{C}\). Thus, to show that \(\gamma_{s}(t),\gamma_{d}(t),\dot{\gamma}_{s}(t)\) and \(\dot{\gamma}_{d}(t)\in\mathcal{C}\), it is sufficient to show that \(\alpha(t),\dot{\alpha}(t)\in\mathcal{C}\).
Since \(\eta(t),\rho(t),\beta^{j}(t)\in\mathcal{C}\) and \(\mu>0\) is a bounded tolerance, \(\psi(t)=\mathsf{col}(\psi_{1}(t),\ldots,\psi_{n}(t))\) is also continuous and bounded. Further, \(\dot{\psi}(t)=\mathsf{col}(\psi_{1}(t),\ldots,\dot{\psi}_{n}(t))\in\mathcal{C}\). Hence, \(\psi(t),\dot{\psi}(t)\in\mathcal{C}\).
Now, depending on the sign of \(\psi(t)\), consider the two cases and look at \(\alpha(t)\) and \(\dot{\alpha}(t)\) elementwise:
**Case I.** [\(\psi_{i}(t)\geq 0\)] This implies that \(\mathsf{sign}(\psi_{i}(t))=1\) and \(\theta_{i}(t)=0\). Thus, \(\dot{\alpha}_{i}(t)=-\kappa\alpha_{i}(t)\in\mathcal{C}\) which implies \(\alpha_{i}(t)=\alpha_{i}(0)e^{-\kappa t}\in\mathcal{C}\) for \(i\in[1;n]\).
**Case II.** [\(\psi_{i}(t)<0\)] This implies that \(\mathsf{sign}(\psi_{i}(t))=-1\) and \(\theta_{i}(t)=2\theta_{o}\).
\[\dot{\alpha}_{i}(t)=\frac{2\theta_{o}}{\psi_{i}(t)+\alpha_{i}(t)}-\kappa \alpha_{i}(t). \tag{15}\]
We will prove the boundedness of \(\dot{\alpha}_{i}(t)\) by contradiction. Let \(\psi_{i}(t)+\alpha_{i}(t)\to 0\) (converges to zero). Then taking its derivative w.r.t time \(t\), we can say \(\dot{\psi}_{i}(t)+\dot{\alpha}_{i}(t)\in\mathcal{C}\). From the fact that \(\dot{\psi}_{i}(t)\in\mathcal{C}\), we have \(\dot{\alpha}_{i}(t)\in\mathcal{C}\). However, from (15) we can observe that if \(\psi_{i}(t)+\alpha_{i}(t)\to 0\) then \(\dot{\alpha}_{i}(t)\to\infty\). This leads to a contradiction. Therefore, \(\psi_{i}(t)+\alpha_{i}(t)\nrightarrow 0\) (does not converge to zero) and consequently, \(\dot{\alpha}_{i}(t)\in\mathcal{C}\) for \(i\in[1;n]\).
To further prove the boundedness of \(\alpha_{i}(t)\), we will again use contradiction. Let \(\alpha_{i}(t)\to\infty\). Now, since \(\psi_{i}(t)\) is bounded and \(2\theta_{o}\) is a finite constant, \(\frac{2\theta_{o}}{\psi_{i}(t)+\alpha_{i}(t)}\to 0\implies\dot{\alpha}_{i}(t)=- \kappa\alpha_{i}(t)\to-\infty\). But \(\alpha_{i}(t)\to\infty\) and \(\dot{\alpha}_{i}(t)\to-\infty\) are contradictory. Hence, \(\alpha_{i}(t)\nrightarrow\infty\) for \(i\in[1;n]\).
Let \(\alpha_{i}(t)\to-\infty\). Similarly, since \(\psi_{i}(t)\) is bounded and \(2\theta_{o}\) is a finite constant, \(\frac{2\theta_{o}}{\psi_{i}(t)+\alpha_{i}(t)}\to 0\implies\dot{\alpha}_{i}(t)=- \kappa\alpha_{i}(t)\to\infty\). But \(\alpha_{i}(t)\to-\infty\) and \(\dot{\alpha}_{i}(t)\to\infty\) are contradictory. Hence, \(\alpha_{i}(t)\nrightarrow-\infty\) for \(i\in[1;n]\). Therefore, in both cases we reach the same conclusion, \(\alpha(t),\dot{\alpha}(t)\in\mathcal{C}\).
### _Controller Design_
In this section, utilizing the adaptive funnel, discussed in the previous section, we derive the funnel control law to
Fig. 1: Funnel Design. (a) Reachability funnel to obtain \(\underline{t}\) and \(\overline{t}\). (b) Introduction of circumvent function. (c) Funnel adapted around circumvent function.
solve Problem 2.1. The controller design is done in three stages.
**Stage I.** Given an initial state \(x(0)\) and target state \(\mathbf{T}\), construct the funnel constraints (2) that guide the system trajectory to the target \(\mathbf{T}\), as discussed in Section III.
**Stage II.** Given the unsafe region \(\mathbf{U}\), obtain \(\mathcal{U}^{j}\) as shown in (8) and compute the circumvent function according to (10) or (11). Now modify the funnel around the circumvent function, as discussed in Section IV-B, and determine the adaptive funnel framework, defined by \(\gamma_{L}\) and \(\gamma_{U}\).
**Stage III.** For the modified funnel, we define the normalized error as
\[\hat{e}(x,t)=2\gamma_{d}(t)^{-1}\left(x-\frac{1}{2}\gamma_{s}(t)\right). \tag{16}\]
The corresponding constrained region \(\hat{\mathbb{D}}\) can be represented by: \(\hat{\mathbb{D}}:=\{\hat{e}(x,t):\hat{e}(x,t)\in(-1,1)^{n}\}\). The transformed error is defined as:
\[\hat{\varepsilon}(x,t)=y(\hat{e}(x,t))=\mathsf{col}\!\left(\!\ln\left(\frac{1\!+\!\hat{e}_{1}(x_{1},t)}{1\!-\!\hat{e}_{1}(x_{1},t)}\right),\ldots,\ln\left(\frac{1\!+\!\hat{e}_{n}(x_{n},t)}{1\!-\!\hat{e}_{n}(x_{n},t)}\!\right)\!\right). \tag{17}\]
We also define a diagonal matrix, \(\hat{\xi}(x,t)\), as
\[\hat{\xi}(x,t)=\frac{4\gamma_{d}^{-1}}{(1-\hat{e}^{T}(x,t)\hat{e}(x,t))}. \tag{18}\]
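For reference, a small Python sketch of the maps (16)–(18) is given below; the array-based interface is an assumption made for illustration, and \(\hat{\xi}\) is implemented with the scalar denominator \(1-\hat{e}^{T}\hat{e}\) exactly as printed in (18).

```python
import numpy as np

def error_maps(x, gamma_L, gamma_U):
    """Normalized error (16), transformed error (17), and the matrix xi_hat (18) at one time instant."""
    gamma_d = gamma_U - gamma_L                          # diagonal entries of gamma_d(t)
    gamma_s = gamma_U + gamma_L                          # gamma_s(t)
    e_hat = 2.0 / gamma_d * (x - 0.5 * gamma_s)          # must lie in (-1, 1)^n
    eps_hat = np.log((1.0 + e_hat) / (1.0 - e_hat))      # log-barrier transformation (17)
    xi_hat = np.diag(4.0 / gamma_d) / (1.0 - e_hat @ e_hat)   # literal form of (18); needs e_hat^T e_hat < 1
    return e_hat, eps_hat, xi_hat
```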
Now, in Theorem IV.3, we propose a control strategy \(\hat{u}(x,t)\) such that the state trajectory satisfies reach-avoid-stay specifications.
**Theorem IV.3**: _Consider a nonlinear control-affine system \(\mathcal{S}\) given in (1), assigned a reach-avoid task expressed mathematically through (3) and (10, 11) respectively. If the initial state \(x(0)\) is within the modified funnel (Section IV-B), then the control strategy_
\[\hat{u}(x,t)=-g(x)^{T}(g(x)g(x)^{T})^{-1}\\ \left(k\hat{\xi}(x,t)\hat{\varepsilon}(x,t)-\frac{1}{2}\dot{\gamma}_{d}(t)\hat{e}(x,t)\right). \tag{19}\]
_will drive the state trajectory \(x(t)\) to the target \(\mathbf{T}\) while avoiding the unsafe set \(\mathcal{U}^{j}\) (8) and adhering to state constraints, i.e., \(\exists t\in\mathbb{R}_{0}^{+}:x(t)\in\mathbf{T}\) and \(\forall t\in\mathbb{R}_{0}^{+},x(t)\notin\mathcal{U}^{j}\) and \(x(t)\in\mathbf{X}\). Here, \(k\) is any positive constant, and \(\hat{e}(x,t)\), \(\hat{\varepsilon}(x,t)\), and \(\hat{\xi}(x,t)\) are defined in (16), (17), and (18), respectively._
The proof comprises three steps. First, we show that there exists a maximal solution for the normalized error \(\hat{e}(x,t)\), which implies that \(\hat{e}(x,t)\) remains within \(\hat{\mathbb{D}}\) on the maximal time interval \([0,\tau_{\max})\). Next, we show that the proposed control law (19) constrains \(\hat{e}(x,t)\) to a compact subset of \(\hat{\mathbb{D}}\). Finally, we prove that \(\tau_{\max}\) can be extended to \(\infty\).
Before proceeding let us introduce two lemmas:
**Lemma IV.4**: _[_16_, Theorem 54]_ _Consider the IVP \(\dot{y}=H(y,t),y(0)\in\mathbb{D}_{y}\). Assume \(H:\mathbb{D}_{y}\times\mathbb{R}_{>0}\rightarrow\mathbb{R}\) is_
1. _locally Lipschitz on_ \(y\)_, for each_ \(t\in\mathbb{R}_{>0}\)__
2. _piecewise continuous on_ \(t\) _for each fixed_ \(y\in\mathbb{D}_{y}\)__
_Then there exists a unique and maximal solution \(y:[0,\tau_{\max})\rightarrow\mathbb{D}_{y}\), where \(\tau_{\max}\in\mathbb{R}_{>0}\cup\infty\)._
**Lemma IV.5**: _[_16_, Proposition C.3.6]_ _Consider all the assumptions of Lemma IV.4 to hold true. For a maximal solution \(y\) on \([0,\tau_{\max})\) with \(\tau_{\max}<\infty\) and for any compact set \(\mathbb{D}_{y}^{\prime}\subset\mathbb{D}_{y}\), \(\exists t^{\prime}\in[0,\tau_{\max})\) such that \(y(t^{\prime})\notin\mathbb{D}_{y}^{\prime}\)._
Continuing with the proof.
**Step 1.** Taking derivatives of (16) and (17), we have:
\[\dot{\hat{e}}=2\gamma_{d}^{-1}\left(\dot{x}-\frac{1}{2}\dot{\gamma}_{s}-\frac{1}{2}\dot{\gamma}_{d}\hat{e}\right),\text{ and }\dot{\hat{\varepsilon}}=\frac{2}{1-\hat{e}^{T}\hat{e}}\dot{\hat{e}}. \tag{20}\]
Substituting the controller (19) in the system dynamics (1), we obtain the closed-loop dynamics:
\[\dot{x}=H_{1}(x,\hat{e},t):=f(x)+\left(-k\hat{\xi}\hat{\varepsilon}+\frac{1}{2}\dot{\gamma}_{d}\hat{e}\right)\]
and substituting the above equation in \(\dot{\hat{e}}\), we obtain
\[\dot{\hat{e}}=H_{2}(x,\hat{e},t):=2\gamma_{d}^{-1}\left(H_{1}(x,\hat{e},t)- \frac{1}{2}\dot{\gamma}_{s}-\frac{1}{2}\dot{\gamma}_{d}\hat{e}\right).\]
Consider the augmented state \(y\) and its derivative \(\dot{y}\) as
\[y=\begin{bmatrix}x\\ \hat{e}\end{bmatrix},\,\dot{y}=H(y,t):=\begin{bmatrix}H_{1}(x,\hat{e},t)\\ H_{2}(x,\hat{e},t)\end{bmatrix}.\]
Since the initial state \(x(0)\) is within the updated funnel, the initial normalized error \(\hat{e}(x(0),0)\) is within the constrained region \(\hat{\mathbb{D}}\). Note, that \(\hat{\mathbb{D}}\) is an open and bounded set. Further, define \(\hat{\mathbb{D}}_{x}:=\{x\in\mathbb{R}^{n}|\hat{e}(x(0),0)\in\hat{\mathbb{D}}\}\), which is a non-empty open and bounded set. Thus, \(\hat{\mathbb{D}}_{y}:=\hat{\mathbb{D}}_{x}\times\hat{\mathbb{D}}\) is also a non-empty open and bounded set and the initial condition of the augmented state satisfy \(y(0)=\begin{bmatrix}x(0)\\ \hat{e}(x(0),0)\end{bmatrix}\in\hat{\mathbb{D}}_{y}\). Therefore, we have the following initial value problem at hand: \(\dot{y}=H(y,t),y(0)\in\hat{\mathbb{D}}_{y}\).
We can see that \(\hat{e}\) (17), \(\hat{\xi}\) (18) and \(\dot{\gamma}_{d}\hat{e}\), defined on \(\hat{\mathbb{D}}_{y}\), are locally Lipschitz continuous in \(\hat{e}\). Further, according to Assumption 1, \(f(x)\) is also Lipschitz continuous on \(\hat{\mathbb{D}}_{y}\) in \(x\). Therefore, we can conclude that \(H(y,t)\) is locally Lipschitz continuous on \(\hat{\mathbb{D}}_{y}\) in \(y\).
Hence, according to Lemma IV.4, there exists a maximal solution of the IVP \(\dot{y}=H(y,t),y(0)\in\hat{\mathbb{D}}_{y}\) in the time interval \([0,\tau_{\max})\): \(y(t)\in\hat{\mathbb{D}}_{y}\forall t\in[0,\tau_{\max})\).
**Step 2.** Based on Step 1, we know
\[y(t)\in\hat{\mathbb{D}}_{y},\forall t\in[0,\tau_{\max})\] \[\implies \hat{e}(t)\in\hat{\mathbb{D}},\forall t\in[0,\tau_{\max})\] \[\implies \gamma_{L}(t)<x(t)<\gamma_{U}(t),\forall t\in[0,\tau_{\max}).\]
Consider the following positive definite and radially unbounded Lyapunov function candidate: \(V=\frac{1}{2}\hat{\varepsilon}^{T}\hat{\varepsilon}\).
Differentiating \(V\) with respect to time \(t\) and substituting \(\dot{\hat{e}}\), \(\dot{\hat{\varepsilon}}\) and the system dynamics (1), we obtain:
\[\dot{V} =\hat{\varepsilon}^{T}\dot{\hat{\varepsilon}}=\hat{\varepsilon}^{T}\frac{2}{1-\hat{e}^{T}\hat{e}}\dot{\hat{e}}=\hat{\varepsilon}^{T}\hat{\xi}\left(\dot{x}-\frac{1}{2}(\dot{\gamma}_{s}+\dot{\gamma}_{d}\hat{e})\right)\] \[=\hat{\varepsilon}^{T}\hat{\xi}\left(f(x)+g(x)u-\frac{1}{2}(\dot{\gamma}_{s}+\dot{\gamma}_{d}\hat{e})\right).\]
Now employ the control strategy (19), we get
\[\dot{V} =\hat{\varepsilon}^{T}\hat{\xi}\left(f(x)+\left(-k\hat{\xi}\hat{\varepsilon}+\frac{1}{2}\dot{\gamma}_{d}\hat{e}\right)-\frac{1}{2}(\dot{\gamma}_{s}+\dot{\gamma}_{d}\hat{e})\right)\] \[=\hat{\varepsilon}^{T}\hat{\xi}\left(-k\hat{\xi}\hat{\varepsilon}+\left(f(x)-\frac{1}{2}\dot{\gamma}_{s}\right)\right)\] \[\leq\left\|\hat{\varepsilon}^{T}\hat{\xi}\left(-k\hat{\xi}\hat{\varepsilon}+\left(f(x)-\frac{1}{2}\dot{\gamma}_{s}\right)\right)\right\|\] \[\leq-k\|\hat{\varepsilon}\|^{2}\|\hat{\xi}\|^{2}+\|\hat{\varepsilon}\|\|\hat{\xi}\|\|\hat{\Phi}\|,\]
where \(\hat{\Phi}:=f(x)-\frac{1}{2}\dot{\gamma}_{s}\). We will look at the boundedness of the two terms in \(\hat{\Phi}\) separately. First, we know \(f(x)\) is a continuous function of \(x\) and \(x\in\hat{\mathbb{D}}_{x},\forall t\in[0,\tau_{\max})\), an open and bounded set. Thus, by applying the extreme value theorem, we can infer \(\|f(x)\|<\infty\). Second, from Lemma IV.2 we know that \(\dot{\gamma}_{s}\) is also bounded. Hence, \(\hat{\Phi}\in\mathcal{C},\forall t\in[0,\tau_{\max})\).
Now add and subtract \(k\hat{\theta}\left\|\hat{\varepsilon}\right\|^{2}\|\hat{\xi}\|^{2}\), where \(0<\hat{\theta}<1\):
\[\dot{V} \leq-k(1-\hat{\theta})\left\|\hat{\varepsilon}\right\|^{2}\|\hat {\xi}\|^{2}-\|\hat{\varepsilon}\|\left\|\hat{\xi}\right\|\left(k\hat{\theta} \left\|\hat{\varepsilon}\right\|\|\hat{\xi}\right\|-\|\hat{\Phi}\|\right)\] \[\leq-k(1-\hat{\theta})\left\|\hat{\varepsilon}\right\|^{2}\left\| \hat{\xi}\right\|^{2},\forall k\hat{\theta}\left\|\hat{\varepsilon}\right\| \|\hat{\xi}\|-\|\hat{\Phi}\|\geq 0\] \[\leq-k(1-\hat{\theta})\left\|\hat{\varepsilon}\right\|^{2}\|\hat{ \xi}\|^{2},\forall\|\hat{\varepsilon}\|\geq\frac{\|\hat{\Phi}\|}{k\hat{\theta} \|\hat{\xi}\|},\forall t\in[0,\tau_{\max}).\]
Therefore, we can conclude that there exists a time-independent upper bound \(\hat{\varepsilon}^{*}\in\mathbb{R}_{0}^{+}\) to the transformed error \(\hat{\varepsilon}\), i.e., \(\|\hat{\varepsilon}\|\leq\hat{\varepsilon}^{*}\forall t\in[0,\tau_{\max})\).
Further, we know from (17) that \(\hat{\varepsilon}_{i}=\ln\left(\frac{1+\hat{e}_{i}}{1-\hat{e}_{i}}\right)\). Taking the inverse, we can bound the normalized error \(\hat{e}(x,t)=\mathsf{col}(\hat{e}_{1},\ldots,\hat{e}_{n})\) as:
\[-1<\frac{e^{-\hat{\varepsilon}^{*}}-1}{e^{-\hat{\varepsilon}^{*}}+1}=:\hat{e}_{i,L}\leq\hat{e}_{i}\leq\hat{e}_{i,U}:=\frac{e^{\hat{\varepsilon}^{*}}-1}{e^{\hat{\varepsilon}^{*}}+1}<1,\qquad\forall t\in[0,\tau_{\max}),\ \text{for}\ i\in[1;n].\]
Therefore, by employing the control law (19), we can constrain \(\hat{e}\) to a compact subset of \(\hat{\mathbb{D}}\) as:
\[\hat{e}(x,t)\in[\hat{e}_{L},\hat{e}_{U}]=:\hat{\mathbb{D}}^{\prime}\subset \hat{\mathbb{D}},\forall t\in[0,\tau_{\max}), \tag{21}\]
where, \(\hat{e}_{L}=\mathsf{col}(\hat{e}_{1,L},\ldots,\hat{e}_{n,L})\) and \(\hat{e}_{U}=\mathsf{col}(\hat{e}_{1,U},\ldots,\hat{e}_{n,U})\)
**Step 3.** Finally, we prove that \(\tau_{\max}\) can be extended to \(\infty\).
We know that \(\hat{e}(x,t)\in\hat{\mathbb{D}}^{\prime},\forall t\in[0,\tau_{\max})\), where \(\hat{\mathbb{D}}^{\prime}\) is a non-empty compact subset of \(\hat{\mathbb{D}}\).
Consequently, we can conclude that \(x(t)=\frac{1}{2}\left(\gamma_{d}\hat{e}+\gamma_{s}\right)\) also evolves in a compact set:
\[x(t)\in\hat{\mathbb{D}}_{x}^{\prime}\subset\hat{\mathbb{D}}_{x},\forall t\in[ 0,\tau_{\max}). \tag{22}\]
Define the compact set \(\hat{\mathbb{D}}_{y}^{\prime}:=\hat{\mathbb{D}}_{x}^{\prime}\times\hat{\mathbb{D}}^{\prime}\) and note that \(\hat{\mathbb{D}}_{y}^{\prime}\subset\hat{\mathbb{D}}_{y}\). Therefore, there is no \(t\in[0,\tau_{\max})\) such that \(y(t)\notin\hat{\mathbb{D}}_{y}^{\prime}\).
However, if \(\tau_{\max}<\infty\), then according to Lemma IV.5, \(\exists t^{\prime}\in[0,\tau_{\max})\) such that \(y(t^{\prime})\notin\hat{\mathbb{D}}_{y}^{\prime}\). This leads to a contradiction. Hence, we conclude that \(\tau_{\max}\) can be extended to \(\infty\), i.e., \(x(t)\) satisfies the funnel constraints in (2) \(\forall t\geq 0\).
In conclusion, the satisfaction of (2) is guaranteed for all time when we employ the control strategy (19).
**Remark IV.6**: _From Assumption 1, we know \(g(x)g^{T}(x)\) is invertible. (21) entails that \(\hat{e}\) is bounded, and by definitions (17) and (18), \(\hat{\varepsilon}\) and \(\hat{\xi}\) are also bounded. Further, from Lemma IV.2, \(\dot{\gamma}_{d}\in\mathcal{C}\). Finally, all the non-smooth functions in the revamped funnel design in Section IV-B are replaced by their smooth approximations. Hence, the control law \(\hat{u}(x,t)\) (19) is well-defined, i.e., continuous, smooth, and bounded._
**Remark IV.7**: _The structure of the controller defined in Theorem III.1 (7) is the same as that in (19), only with the modified funnel constraints._
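As a complement to Remarks IV.6 and IV.7, a minimal sketch of evaluating the control law (19) is given below, reusing the `error_maps` helper from the sketch after (18); the callable interfaces for \(g(x)\), the funnel bounds, and \(\dot{\gamma}_{d}(t)\) are illustrative assumptions.

```python
import numpy as np

def funnel_controller(x, t, g_func, gamma_L_func, gamma_U_func, gamma_d_dot_func, k=1.0):
    """Closed-form control law (19); the *_func arguments are assumed user-supplied callables."""
    e_hat, eps_hat, xi_hat = error_maps(x, gamma_L_func(t), gamma_U_func(t))
    g = g_func(x)                                   # input matrix g(x), shape (n, m)
    right_pinv = g.T @ np.linalg.inv(g @ g.T)       # g^T (g g^T)^{-1}; invertible by Assumption 1
    return -right_pinv @ (k * xi_hat @ eps_hat - 0.5 * gamma_d_dot_func(t) * e_hat)
```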
In Figure 2, we present a 3D visualization of a scenario where \(\mathbf{X}\subset\mathbb{R}^{2}\) and the modified funnel circumvents around the unsafe set \(\mathbf{U}\), providing a safe path for the trajectory to reach the target \(\mathbf{T}\subset\mathbf{X}\) while staying clear of \(\mathbf{U}\).
## V Extension to Tackle General Unsafe sets
Given the unsafe region \(\mathbf{U}\) with \(n_{u}\) connected convex sets, we first choose \(\mathcal{U}^{j}\) (8). The control law \(\hat{u}(x,t)\) (19) ensures that the controlled state trajectory reaches the target while avoiding this \(\mathcal{U}^{j}\). Now, to address \(\mathcal{U}^{j}\) for \(j=1,2,\ldots,n_{u}\), we iterate through this procedure until the controlled system trajectory stays entirely clear of the unsafe region \(\mathbf{U}\).
Further, in each iteration, for defining the \(\beta\) function, we have a certain degree of randomness. We randomly select \(i^{j}\) from all the possible alternatives (9). Moreover, we also randomly choose whether to introduce \(\beta_{i}^{j}\) in the lower constraint (10) or upper constraint (11), as discussed in Section IV-A. This randomness allows exploration of all the paths around unsafe regions, thus resulting in a higher probability of obtaining a closed-form controller satisfying reach-avoid-stay specifications for complex environments. The algorithm is presented in Algorithm 1.
Fig. 2: 3D visualization.
**Input:**\(\mathbf{X},\mathbf{T},\mathbf{U}=\{\mathcal{U}_{1},\mathcal{U}_{2},\ldots, \mathcal{U}_{n_{u}}\},x(0)\)
**Output:**\(\hat{u}(x(0),\mathbf{X},\mathbf{T},\mathbf{U},x,t):\{\exists\tau\in\mathbb{R}_{0}^{ +}:x(\tau)\in\mathbf{T}\text{ and }\forall t\in\mathbb{R}_{0}^{+}:x(t)\in\mathbf{X},x(t) \cap\mathbf{U}=\emptyset\}\)
1. Given \(\mathbf{T}\), choose \(\eta\in int(\mathbf{T})\) and construct funnel constraints to enforce reachability (2)
2. Apply control law \(u(x,t)\) (7) to drive the controlled trajectory \(x(t)\) to the target while remaining within the state limits.
3. **while** true **do**
4. Obtain obstacle \(\mathcal{U}_{j}\in\mathbf{U}\) (8) and introduce the circumvent function \(\beta\) (10) or (11) to modify the funnel around the obstacle as discussed in Section IV-B.
5. Apply control law \(\hat{u}(x,t)\) (19) and obtain the controlled trajectory \(x_{u}(t)\).
6. **if** (\(x_{u}(t)\cap\mathcal{U}_{j}=\emptyset,\forall j\in[1;n_{u}]\))
7. **return**\(\hat{u}(x,t)\)
8. **end**
9. **end**
**Algorithm 1** Extension for general unsafe region
**Corollary V.1**: _Thus, given a system \(\mathcal{S}\) in (1), target set \(\mathbf{T}\) in the state space \(\mathbf{X}\) and unsafe region \(\mathbf{U}\), termination of the Algorithm 1 defines an adaptive funnel framework and provides us a well-defined closed-form control law (19) that will guide the system trajectory to the target while avoiding the unsafe region, enforcing reach-avoid-stay specifications._
A simulation study illustrating the efficacy of the algorithm in solving reach-avoid-stay specifications in a multi-obstacle environment is presented in the next section.
## VI Simulation Results
Consider a three-wheeled omnidirectional robot operating on a 2-D plane. The kinematic model of the mobile robot is expressed as:
\[\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{bmatrix}=\begin{bmatrix}\cos\theta&\sin\theta&0\\ \sin\theta&-\cos\theta&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}u\\ v\\ \omega\end{bmatrix}, \tag{23}\]
where \((x,y)\) and \(\theta\) capture the robot's position and orientation, respectively. The control inputs \(u\), \(v\), and \(\omega\) are the linear velocities along the \(x\) and \(y\) directions of the robot frame and the angular velocity, respectively. Note that the robot dynamics satisfy Assumption 1.
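A minimal sketch of integrating the kinematics (23) with forward Euler is shown below; the constant body-frame input merely stands in for the funnel controller and is an illustrative assumption.

```python
import numpy as np

def omni_robot_step(state, u_in, dt=0.01):
    """One forward-Euler step of the kinematic model (23); state = (x, y, theta), u_in = (u, v, omega)."""
    x, y, theta = state
    rot = np.array([[np.cos(theta),  np.sin(theta), 0.0],
                    [np.sin(theta), -np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return state + dt * (rot @ np.asarray(u_in))

state = np.array([0.0, 0.0, 0.0])
for _ in range(100):                      # one second of motion with a fixed body-frame velocity
    state = omni_robot_step(state, [0.2, 0.0, 0.1])
```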
We ran the tests for a 2D arena with two wall obstacles and a circular obstacle. The funnel for guiding the robot towards the target is shaped according to (3) with the following parameters: \(\rho_{i,0}=1,\rho_{i,\infty}=0.05\) and \(l_{i}=0.7\) for \(i\in\{1,2\}\), where \(i=1\) and \(i=2\) represent the \(x_{1}\) and \(x_{2}\) coordinates, respectively. For the circumvent function (10) or (11), we define \(k=0.001\), \(\delta B=0\) and \(\delta t=0.1\) for all the obstacles. Finally, the adaptive law (14) is established with \(\mu=10\), \(\kappa=0.3\) and \(\theta_{o}=0.1\).
The simulation results with three different initial states are depicted in Figure 3.
## VII Conclusion
In this work, we considered the controller synthesis problem for reach-avoid-stay specifications. Given state-space constraints, obstacles, and targets, we first proposed the introduction of a circumvent function and the construction of an adaptive funnel framework. We then derived a closed-form control law ensuring that the trajectories of a nonlinear system reach the target while avoiding all the unsafe regions and respecting state-space constraints, thus enforcing reach-avoid-stay specifications. Finally, the efficacy of the proposed approach is demonstrated through simulation studies.
|
2304.02072 | Atom-photon dressed states in a waveguide-QED system with multiple giant
atoms | We study the properties of bound states in waveguide-QED systems consisting
of multiple giant atoms coupled to a coupled-resonator waveguide. Based on the
general analytical expressions for these states and the corresponding energy
spectra, we analyze in detail the threshold conditions for the appearance of
bound states and the photon-mediated interactions between dressed atoms for
different configurations. In addition, when multiple giant atoms are coupled to
the waveguide, different types of interacting atomic chain can be obtained by
manipulating the coupling configurations. Accordingly, the energy spectra of
the bound states form metaband structures in the photonic band gaps. This makes
the system a useful platform for quantum simulation and quantum information
processing. | W. Z. Jia, M. T. Yu | 2023-04-04T18:44:41Z | http://arxiv.org/abs/2304.02072v2 | Atom-photon dressed states in a waveguide-QED system with multiple giant atoms coupled to a resonator-array waveguide
###### Abstract
We study the properties of bound and scattering states in the single-excitation subspace in waveguide-QED systems consisting of multiple giant atoms coupled to a coupled-resonator waveguide. Based on the most general analytical expressions possible for these states and the corresponding energy spectra, we analyze in detail relevant phenomena due to the influence of a structured environment combined with the non-dipole effects of giant atoms. We analyze the threshold conditions for the appearance of bound states and the photon-mediated interactions between dressed atoms for different configurations. In addition, when multiple giant atoms are coupled to the waveguide, the bound states in the photonic band gaps can form different types of metaband structures, depending on coupling configurations. This makes the system a useful platform for quantum simulations. Finally, the influence of the structured bath on the scattering spectra of multiple atoms also becomes remarkable in the strong coupling regime, leading to unconventional spectral structures.
## I Introduction
Waveguide quantum electrodynamics (wQED) systems [1; 2], realized by coupling a single atom or multiple atoms to a one-dimensional (1D) waveguide, have attracted widespread attention in recent years. Such systems are excellent platforms for investigating strong light-matter interactions at the single-photon level and may have potential applications in modern quantum technologies [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. The physics of light-matter interactions in 1D becomes even more involved when the waveguide is engineered to have finite-bandwidth and nontrivial dispersion relations, e.g., an array of linear or nonlinear optical resonators described by tight-binding model [20; 21; 22] and a 1D topological photonic bath described by the Su-Schrieffer-Heeger (SSH) model [23; 24; 25]. When quantum emitters are coupled to a finite-bandwidth waveguide, there exist two types of atom-photon dressed states. The scattering states, with energies in the photonic band, are spatially extended over the whole waveguide [20; 26; 27]. In addition, there exist atom-photon bound states (BSs) with energies outside the continuum of propagating modes and exponentially localized photonic components [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], which can be looked on as continuum generalizations of the dressed states in cavity-QED structures [30].
With the development of modern nanotechnology, artificial atoms (e.g., transmon qubits [40]) can couple to the bosonic modes (phonons or microwave photons) in a 1D waveguide at multiple points separated by distances comparable to or larger than the relevant wavelength; such emitters are called giant atoms [41]. Unlike point-like small atoms, a single giant atom can be connected to a waveguide through multiple coupling points, leading to some novel phenomena relevant to interference and time-delay effects, such as frequency-dependent decay rate and Lamb shift of a single atom [42; 43; 44], decoherence-free interaction between two braided giant atoms [45; 46; 43], non-Markovian dynamics with polynomial spontaneous decay [47; 48], and unconventional scattering spectra due to effects beyond the dipole approximation [49; 50; 51; 52; 53]. The effects of chiral atom-waveguide coupling [54] and ultrastrong coupling [55; 56] in wQED systems with giant atoms have also been investigated. The giant-atom structure can also be implemented with cold atoms in optical lattices [57], with a giant spin ensemble [58], or even in a synthetic dimension [59; 60]. Recently, the light-matter interactions for giant atoms coupled to a finite-bandwidth waveguide, especially the properties of the BSs, have received much attention. For example, when a single giant atom is coupled to a coupled-resonator waveguide [61] or a topological waveguide [62], the BSs are strongly influenced by the coupling configuration and the structured bath. The interaction between giant atoms due to the overlap of single-atom BSs can be stronger than that between small atoms [63]. When a Josephson-chain waveguide with periodic impedance modulation interacts with giant atoms, chiral BSs can be generated due to interference effects [64].
Motivated by the above discussion, in this paper we analyze the properties of BSs and scattering states in a system of multiple giant atoms coupled to a common photonic bath realized by a 1D array of coupled resonators. We focus on the single-excitation manifold and derive the most general analytical expressions possible for these states and their corresponding energy spectra, from which we can obtain a variety of properties of them. For the case of single-giant-atom BSs, one can recover the numerical results in previous literature [61], as well as several properties of the BSs not discussed before, including the threshold conditions and the localization lengths of the BSs. For the case of double-giant-atom BSs, we find that the threshold conditions of the BSs, the photon-mediated dipole-dipole-like interactions between the atoms, and the shapes of the photonic wave functions are strongly influenced by the layout of the connection points. For the case of multi-giant-atom BSs,
we observe the formation of metabands in the photonic band gaps, and part or all of the bands can merge with the continuum under certain threshold conditions. The structures of the bands can be explained in terms of a 1D chain of interacting dressed atoms. By manipulating the distribution of coupling points as well as the interaction strengths at these points, one can construct various spin chain models, e.g., the SSH chain. Furthermore, we investigate the single-photon scattering spectra. We find that in the weak coupling regime, both the linear and the Markovian approximations are available (i.e., both the group velocity and the phase decay can be regarded as detuning independent), and thus the line shapes are characterized by some characteristic quantities, including the Lamb shift, the individual decay, the exchange interaction, and the collective decay [45; 51]. While in the strong coupling regime, the influence of the nonlinear dispersion and the detuning-dependent phase-accumulation effect must be taken into account, thus the spectra show more abundant structures with unconventional line shapes, which are very different from those for weak coupling.
The remainder of this paper is organized as follows. In Sec. II we introduce the basic model of the wQED structure containing multiple giant atoms. In Sec. III we provide general analytical expressions for the single-excitation atom-photon BSs. In Secs. IV, V, and VI, we discuss the properties of single-, double-, and multi-giant-atom BSs, respectively. In Sec. VII, we analyze the properties of single-photon scattering states. Finally, further discussions and conclusions are given in Sec. VIII.
## II Model
We consider a system where a set of \(N_{\rm a}\) two-level giant atoms is coupled to a finite-bandwidth optical waveguide. The waveguide is modeled as an array of \(N\rightarrow\infty\) optical resonators with center frequency \(\omega_{c}\) and a nearest-neighbor tunnel coupling \(J\). The \(m\)th (\(m=1,2,\cdots,N_{\rm a}\)) atom couples to \(M_{m}\) resonators simultaneously, as shown in Fig. 1 (it should be emphasized that, for simplicity, Fig. 1 presents the case of separate atoms as an example, but the following analyses are applicable to any configuration). The total Hamiltonian for this system is (\(\hbar=1\))
\[H = \omega_{c}\sum_{j}c_{j}^{\dagger}c_{j}-J\sum_{j}\left(c_{j}^{\dagger}c_{j+1}+{\rm H.c.}\right) \tag{1}\] \[+\sum_{m=1}^{N_{\rm a}}\Omega_{m}\left|e\right>_{m}\left<e\right|+\sum_{m=1}^{N_{\rm a}}\sum_{l=1}^{M_{m}}g_{ml}\left(c_{n_{ml}}^{\dagger}\sigma_{m}^{-}+{\rm H.c.}\right),\]
where \(c_{j}\) (\(c_{j}^{\dagger}\)) is the annihilation (creation) operator of the photonic mode at the \(j\)th site. \(\sigma_{m}^{+}=|e\rangle_{m}\langle g|\) (\(\sigma_{m}^{-}=|g\rangle_{m}\langle e|\)) is the raising (lowering) operator of the \(m\)th atom. \(\Omega_{m}\) is the transition frequency between the ground state \(|g\rangle_{m}\) and the excited state \(|e\rangle_{m}\). \(n_{ml}\) labels the site connecting to the \(l\)th coupling point of the \(m\)th atom, and the corresponding atom-photon coupling strength is \(g_{ml}\). Note that in Eq. (1) we have performed the rotating-wave approximation by assuming that \(\omega_{c}\simeq\Omega_{m}\) and that \(\omega_{c},\Omega_{m}\gg J,g_{ml}\).
By introducing momentum operators
\[c_{k}=\frac{1}{\sqrt{N}}\sum_{j}e^{-ikj}c_{j}, \tag{2}\]
with \(k\in[-\pi,\pi]\), we can obtain the Hamiltonian in momentum space
\[H = \sum_{k}\omega_{k}c_{k}^{\dagger}c_{k}+\sum_{m=1}^{N_{\rm a}}\Omega_{m}\left|e\right>_{m}\left<e\right| \tag{3}\] \[+\frac{1}{\sqrt{N}}\sum_{k}\sum_{m=1}^{N_{\rm a}}\sum_{l=1}^{M_{m}}g_{ml}\left(e^{-ikn_{ml}}c_{k}^{\dagger}\sigma_{m}^{-}+{\rm H.c.}\right),\]
where the tight-binding Hamiltonian of the resonator array [the first line of Eq. (1)] is rewritten in the diagonal form, with mode frequencies
\[\omega_{k}=\omega_{c}-2J\cos k \tag{4}\]
lying inside a band with central frequency \(\omega_{c}\) and total width \(4J\).
## III General analytical expressions for the single-excitation atom-photon bound states
In the single-excitation subspace, an eigenstate of Hamiltonian (3) has the form
\[\left|\psi\right>=\left(\cos\theta\sum_{m=1}^{N_{\rm a}}u_{m}\sigma_{m}^{+}+ \sin\theta\sum_{k}f_{k}c_{k}^{\dagger}\right)\left|G\right>, \tag{5}\]
where \(u_{m}\) is the excitation amplitude of the \(m\)th atom and \(f_{k}\) is the excitation amplitude of a photon with wave vector \(k\), respectively, satisfying the normalization conditions \(\sum_{m=1}^{N_{\rm a}}\left|u_{m}\right|^{2}=1\) and \(\sum_{k}\left|f_{k}\right|^{2}=1\). \(\theta\) is the mixed angle of the atomic and the photonic components. \(\left|G\right>\) is the ground state of system, with all the atoms and resonators being unexcited. Plugging this ansatz into the
Figure 1: A schematic of multiple giant atoms coupled to an array of coupled optical resonators.
eigenvalue equation \(H\left|\psi\right\rangle=E\left|\psi\right\rangle\) and changing into a rotating frame with respect to \(\omega_{c}\) (i.e., making the substitution \(E-\omega_{c}\to E\)) yields the coupled equations
\[\left[E+2J\cos k\right]f_{k}=\frac{\cot\theta}{\sqrt{N}}\sum_{m=1}^{N_{\text{a} }}\sum_{l=1}^{M_{m}}g_{ml}e^{-ikn_{ml}}u_{m}, \tag{6a}\] \[\left(E-\delta_{m}\right)u_{m}=\frac{\tan\theta}{\sqrt{N}}\sum_{k}\sum_{l=1}^{M _{m}}g_{ml}e^{ikn_{ml}}f_{k}, \tag{6b}\]
where the detuning between the transition frequency of the \(m\)th atom and the central frequency of the waveguide is defined as \(\delta_{m}=\Omega_{m}-\omega_{c}\). By plugging Eq. (6a) into Eq. (6b) to eliminate \(f_{k}\) and using the substitution \(\frac{1}{N}\sum_{k}\rightarrow\frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{d}k\), we end up with
\[\mathbf{H}\mathbf{u}=E\mathbf{u}, \tag{7}\]
where \(\mathbf{u}=(u_{1},u_{2},\cdots,u_{N_{\text{a}}})^{\top}\) and
\[\mathbf{H}=\mathrm{diag}(\delta_{1},\delta_{2},\cdots,\delta_{N_{\text{a}}}) +\mathbf{\Sigma}(E). \tag{8}\]
The elements of the energy correction matrix \(\mathbf{\Sigma}(E)\) are defined as
\[\Sigma_{mm^{\prime}}\left(E\right)=\sum_{l=1}^{M_{m}}\sum_{l^{\prime}=1}^{M_{ m^{\prime}}}\frac{g_{ml}g_{ml^{\prime}}}{2\pi}\int_{-\pi}^{\pi}\mathrm{d}k\, \frac{e^{ik\left(n_{ml}-n_{m^{\prime}}\right)}}{E+2J\cos k}. \tag{9}\]
The diagonal element \(\Sigma_{mm}(E)\) represents the self-energy correction of the atom \(m\) due to emission and reabsorption of photons by the same atom. The off-diagonal term \(\Sigma_{mm^{\prime}}(E)\) (\(m\neq m^{\prime}\)) represents the mutual-energy correction due to photon exchange between atoms \(m\) and \(m^{\prime}\) through the optical waveguide, which can lead photon-mediate interactions between atoms. By using residue theorem, the integral in Eq. (9) is calculated explicitly [65] under condition \(|E|>2J\), thus we have
\[\Sigma_{mm^{\prime}}^{(\beta)}\left(E\right)=\frac{\sum_{l=1}^{M_{m}}\sum_{l^{\prime}=1}^{M_{m^{\prime}}}g_{ml}g_{m^{\prime}l^{\prime}}(-\beta)^{\left|n_{ml}-n_{m^{\prime}l^{\prime}}\right|}e^{-\frac{\left|n_{ml}-n_{m^{\prime}l^{\prime}}\right|}{\lambda(E)}}}{E\sqrt{1-\frac{4J^{2}}{E^{2}}}}, \tag{10}\]
with \(\lambda(E)\) being defined as
\[\lambda(E)=\left[\mathrm{arccosh}\left(\frac{|E|}{2J}\right)\right]^{-1}. \tag{11}\]
When \(\beta=1\) (\(\beta=-1\)), the expression (10) represents the energy-correction function for \(E>2J\) (\(E<-2J\)), which can be used to determine the energy of the BS above (below) the scattering continuum.
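For later numerical checks, the following Python sketch is a direct transcription of Eqs. (10) and (11); the interface, with one list of coupling strengths and one list of connection sites per atom, is an illustrative assumption rather than part of the formalism.

```python
import numpy as np

def lam(E, J):
    """Localization length lambda(E), Eq. (11); valid only for |E| > 2J."""
    return 1.0 / np.arccosh(np.abs(E) / (2.0 * J))

def sigma(E, beta, g_m, n_m, g_mp, n_mp, J):
    """Energy-correction matrix element Sigma^(beta)_{mm'}(E), Eq. (10).
    g_m, n_m: coupling strengths and connection sites of atom m (likewise g_mp, n_mp for atom m')."""
    s = 0.0
    for gl, nl in zip(g_m, n_m):
        for glp, nlp in zip(g_mp, n_mp):
            d = abs(nl - nlp)
            s += gl * glp * (-beta) ** d * np.exp(-d / lam(E, J))
    return s / (E * np.sqrt(1.0 - (2.0 * J / E) ** 2))
```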
One can see from Eq. (7) that the energies of the BSs are the real solutions of the following equation
\[\det[\mathbf{H}-E\mathbf{I}]=0, \tag{12}\]
with \(\mathbf{I}\) being the identity matrix. Note that the above equation is a transcendental equation because \(\mathbf{H}\) is a function of \(E\). For fixed \(\beta\), there are at most \(N_{\text{a}}\) real solutions, labeled as \(E_{\beta s}\) [\(s=1,2,\cdots s_{\text{m}}\) (\(s_{\text{m}}\leq N_{\text{a}}\))]. And when \(\beta=1\) (\(\beta=-1\)), the solutions satisfy \(E_{1s}>2J\) (\(E_{-1s}<-2J\)), corresponding to the energies of the upper (lower) BSs.
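In practice, the real solutions of Eq. (12) on either side of the band can be located by scanning the characteristic determinant for sign changes, as in the sketch below (which reuses the `sigma` helper above); the \(20J\) search window, the grid density, and the reliance on simple sign changes (which can miss degenerate roots) are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

def bs_energies(deltas, couplings, sites, J, beta, n_grid=4000):
    """Real roots of det[H(E) - E*I] = 0, Eq. (12), on one side of the band (beta = +1 above, -1 below).
    couplings[m] and sites[m] list the coupling strengths and connection sites of atom m."""
    Na = len(deltas)

    def char(E):
        H = np.diag(deltas).astype(float)
        for m in range(Na):
            for mp in range(Na):
                H[m, mp] += sigma(E, beta, couplings[m], sites[m], couplings[mp], sites[mp], J)
        return np.linalg.det(H - E * np.eye(Na))

    grid = beta * np.linspace(2.001 * J, 20.0 * J, n_grid)   # scan just outside the band edge
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        if char(a) * char(b) < 0:
            roots.append(brentq(char, min(a, b), max(a, b)))
    return sorted(roots)
```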
After fixing the energy \(E_{\beta s}\) of a BS, one can further obtain the corresponding excitation amplitudes \(u_{m}^{(\beta s)}\) of the atoms from Eq. (7). We further plug these values into Eq. (6a), and then substitute Eqs. (6a) and (2) into Eq. (5). Finally, after integrating over \(k\) and fixing the mixing angle using the normalization conditions, we can obtain the expression of BS in real space
\[\left|\psi_{\beta s}\right\rangle=\left(\cos\theta_{\beta s}\sum_{m=1}^{N_{ \text{a}}}u_{m}^{(\beta s)}\sigma_{m}^{+}+\beta\sin\theta_{\beta s}\sum_{j}f_{ j}^{(\beta s)}c_{j}^{\dagger}\right)\left|G\right\rangle, \tag{13}\]
corresponding to the energy \(E_{\beta s}\). The mixing angle \(\theta_{\beta s}\) satisfies the relation
\[\tan\theta_{\beta s}=\frac{\mathcal{N}_{\beta s}}{2\sinh\frac{1}{\lambda_{ \beta s}}}. \tag{14}\]
The photonic excitation amplitude takes the form
\[f_{j}^{(\beta s)}=\frac{1}{\mathcal{N}_{\beta s}}\sum_{m=1}^{N_{\text{a}}}\sum_{l=1}^{M_{m}}u_{m}^{(\beta s)}\tilde{g}_{ml}(-\beta)^{\left|j-n_{ml}\right|}e^{-\frac{\left|j-n_{ml}\right|}{\lambda_{\beta s}}}. \tag{15}\]
In Eqs. (14) and (15), the localization length is defined as \(\lambda_{\beta s}=\lambda(E_{\beta s})\), representing the size of the photonic wave packet around each coupling point. The normalization constant \(\mathcal{N}_{\beta s}\) is given by
\[\mathcal{N}_{\beta s}=\sqrt{\sum_{m,m^{\prime}=1}^{N_{\text{a}}}\sum_{l=1}^{M_{m}}\sum_{l^{\prime}=1}^{M_{m^{\prime}}}u_{m}^{(\beta s)}u_{m^{\prime}}^{(\beta s)*}\tilde{g}_{ml}\tilde{g}_{m^{\prime}l^{\prime}}(-\beta)^{\left|n_{ml}-n_{m^{\prime}l^{\prime}}\right|}\left(\coth\frac{1}{\lambda_{\beta s}}+\left|n_{ml}-n_{m^{\prime}l^{\prime}}\right|\right)e^{-\frac{\left|n_{ml}-n_{m^{\prime}l^{\prime}}\right|}{\lambda_{\beta s}}}}, \tag{16}\]
where \(\tilde{g}_{ml}=g_{ml}/J\) is the coupling strength scaled by \(J\). Equation (15) exhibits that the total photonic wave function is a superposition of a variety of exponentially localized wave packets centered around the site \(n_{ml}\) with amplitudes \(\tilde{g}_{ml}u_{m}^{(\beta s)}/\mathcal{N}_{\beta s}\) and localization length \(\lambda_{\beta s}\). Ac
cording to Eq. (11), the function \(\lambda(E)\) decreases monotonically with increasing \(|E|\) in the domain \(|E|>2J\). Thus the further away the eigen energy is from the band edge, the more the photonic cloud is localized around each coupling point.
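Given a BS energy and the atomic amplitudes, the photonic profile of Eq. (15) can be evaluated directly, for instance as in the sketch below (reusing `lam` from the previous sketch); integer site indices are assumed, and the numerical normalization at the end stands in for the closed-form constant (16).

```python
import numpy as np

def photon_wavefunction(j_sites, E, u, couplings, sites, J, beta):
    """Photonic amplitudes of Eq. (15) on the resonator sites j_sites for a BS of energy E,
    given the atomic amplitudes u[m] and the connection data couplings[m], sites[m]."""
    lam_E = lam(E, J)
    j_arr = np.asarray(j_sites)
    f = np.zeros(len(j_arr))
    for m, (g_m, n_m) in enumerate(zip(couplings, sites)):
        for gl, nl in zip(g_m, n_m):
            d = np.abs(j_arr - nl)
            f = f + u[m] * (gl / J) * (-beta) ** d * np.exp(-d / lam_E)
    return f / np.linalg.norm(f)    # numerical normalization in place of (16)
```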
Equation (13) can be rewritten as another useful form
\[\left|\psi_{\beta s}\right\rangle=\sum_{m=1}^{N_{\mathrm{u}}}u_{m}^{(\beta s)}D_ {\beta s}^{\dagger}(m)\left|G\right\rangle, \tag{17}\]
where the dressed-state creation operator \(D_{\beta s}^{\dagger}(m)\) related to atom \(m\) is defined as
\[D_{\beta s}^{\dagger}(m)=\cos\theta_{\beta s}\sigma_{m}^{+}+\beta\sin\theta_{\beta s}\sum_{l=1}^{M_{m}}\frac{\tilde{c}_{\beta s}^{\dagger}(n_{ml})}{\mathcal{N}_{\beta s}}. \tag{18}\]
The unnormalized photonic creation operator takes the form
\[\tilde{c}_{\beta s}^{\dagger}(n_{ml})=\sum_{j}\tilde{g}_{ml}(-\beta)^{\left|j-n_{ml}\right|}e^{-\frac{\left|j-n_{ml}\right|}{\lambda_{\beta s}}}\,c_{j}^{\dagger}, \tag{19}\]
which creates a photon in an exponentially localized wave packet around the \(l\)th coupling point of the \(m\)th atom.
The results obtained in this section are applicable for the most general setup possible, with \(N_{\mathrm{a}}\) atoms such that atom \(m\) has \(M_{m}\) connection points, from which one can also recover the behavior of other systems previously discussed, e.g., multiple small atoms [30], a single giant atom [61]. Based on the analytical expressions obtained above, we will discuss the properties of the BSs in detail in Sec IV-Sec VI.
Note that in the wQED systems with giant atoms, there also exist other kinds of BSs. For example, when a giant atom is coupled to a linear waveguide, BSs can appear in the continuum with photons confined between different coupling points [66; 67], and similar phenomena have been studied previously in wQED systems with small atoms [68; 69]. When a giant atom is coupled to a dynamically modulated coupled-resonator waveguide, after interacting with the atom, some of the scattered photons also show the localization effect [60]. In this paper we do not consider these kinds of states and use the term _bound state_ only for atom-photon dressed state with energy in the band gap and photonic component exponentially localized around each coupling point.
## IV Bound states for a single giant atom
Let us first consider a single giant atom with \(N_{\mathrm{c}}\) connection points. \(\delta\) is the detuning between the transition frequency of the atom and the central frequency of the waveguide, \(n_{l}\) is used to label the site connecting to the \(l\)th coupling point, and \(g_{l}\) is the corresponding atom-photon coupling strength. In this case, equation (12) reduces to the following form
\[E-\delta=\Sigma_{\beta}(E) \tag{20}\]
with
\[\Sigma_{\beta}(E)=\frac{\sum_{l,l^{\prime}=1}^{N_{\mathrm{c}}}g_{l}g_{l^{\prime}}(-\beta)^{\left|n_{l}-n_{l^{\prime}}\right|}e^{-\frac{\left|n_{l}-n_{l^{\prime}}\right|}{\lambda(E)}}}{E\sqrt{1-\frac{4J^{2}}{E^{2}}}} \tag{21}\]
being the self-energy function of a single giant atom. We label the real solutions of Eq. (20) as \(E_{\beta}\), with \(\beta=\pm 1\) representing the upper and lower BSs, respectively. The corresponding wave functions can be written as
\[\left|\psi_{\beta}\right\rangle=\left(\cos\theta_{\beta}\sigma^{\star}+\beta \sin\theta_{\beta}\sum_{j}f_{j}^{(\beta)}c_{j}^{\dagger}\right)\left|G\right\rangle, \tag{22}\]
where the mixing angle is defined by \(\tan\theta_{\beta}=\mathcal{N}_{\beta}/\big{(}2\sinh\frac{1}{\lambda_{\beta}}\big{)}\), the photonic excitation amplitude takes the form
\[f_{j}^{(\beta)}=\frac{1}{\mathcal{N}_{\beta}}\sum_{l=1}^{N_{\mathrm{c}}}\tilde{g}_{l}\left(-\beta\right)^{\left|j-n_{l}\right|}e^{-\frac{\left|j-n_{l}\right|}{\lambda_{\beta}}}, \tag{23}\]
and the localization length is defined as \(\lambda_{\beta}=\lambda(E_{\beta})\). The normalization constant \(\mathcal{N}_{\beta}\) is given by
\[\mathcal{N}_{\beta}=\sqrt{\sum_{l,l^{\prime}=1}^{N_{\mathrm{c}}}\tilde{g}_{l}\tilde{g}_{l^{\prime}}(-\beta)^{\left|n_{l}-n_{l^{\prime}}\right|}\left(\coth\frac{1}{\lambda_{\beta}}+\left|n_{l}-n_{l^{\prime}}\right|\right)e^{-\frac{\left|n_{l}-n_{l^{\prime}}\right|}{\lambda_{\beta}}}}, \tag{24}\]
where \(\tilde{g}_{l}=g_{l}/J\) is the coupling strength scaled by \(J\).
Note that although the BSs for a single giant atom were numerically investigated in Ref. [61], we can extract several properties of the BSs from the above analytical expressions, including the threshold condition and the localization length of the BSs, which were not discussed in the previous literature. Here we focus on the special case in which the distance between neighboring coupling points is
Figure 2: The BS energy levels for the case of a single giant atom are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(N_{\mathrm{c}}\). The inset in each panel shows the corresponding schematic of system. For all plots \(\delta=0\) is assumed.
a constant \(\Delta n\) and the coupling strengths are all the same, labeled as \(g\). By analyzing Eq. (20) (see more details in Appendix A), we find that there are at most two BSs outside the scattering continuum, among which a BS below the continuum (with energy \(E_{-1}<-2J\)) always exists, whereas the appearance of the other BS above the continuum (with energy \(E_{1}>2J\)) depends on the parameters of the system. Specifically, if \(\delta<2J\), \(N_{\rm c}\in\mathbb{E}^{+}\) and \(\Delta n\in\mathbb{O}^{+}\), the upper BS exists when \(g>\sqrt{2J(2J-\delta)/(N_{\rm c}\Delta n)}\), and for coupling strengths below this value, the BS disappears, as shown in Figs. 2(a) and 2(b). For other parameters, the upper BS exists for all \(g\), as displayed in Figs. 2(c) and 2(d).
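This threshold behavior can be checked numerically by solving Eq. (20) with the self-energy (21), e.g. with the sketch below, which reuses the `sigma` and `brentq` ingredients from Sec. III; the search window is an arbitrary choice, and for \(N_{\rm c}=2\), \(\Delta n=1\), \(\delta=0\) the upper BS should appear only for \(g\) above roughly \(\sqrt{2}J\).

```python
import numpy as np
from scipy.optimize import brentq

def single_atom_bs(delta, g, N_c, dn, J, beta=+1):
    """BS energy of a single giant atom with N_c equally spaced coupling points (spacing dn)
    and equal strengths g, obtained from Eq. (20); returns None if no BS is found on that side."""
    pts = [i * dn for i in range(N_c)]
    def F(E):
        return E - delta - sigma(E, beta, [g] * N_c, pts, [g] * N_c, pts, J)
    grid = beta * np.linspace(2.0001 * J, 30.0 * J, 6000)
    for a, b in zip(grid[:-1], grid[1:]):
        if F(a) * F(b) < 0:
            return brentq(F, min(a, b), max(a, b))
    return None

# below vs. above the expected threshold sqrt(2J(2J - delta)/(N_c*dn)) = sqrt(2) J for these parameters
print(single_atom_bs(0.0, 1.2, 2, 1, 1.0), single_atom_bs(0.0, 1.6, 2, 1, 1.0))
```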
Figure 3A shows the spatial wave function distribution of photons \(\tilde{f}_{j}^{(\beta)}=\beta\sin\theta_{\beta}f_{j}^{(\beta)}\) when a giant atom couples to the waveguide through two identical connection points. One can see that due to the non-dipole effect, the photonic cloud is no longer concentrated around one point, which is different from a small atom [30]. When the coupling strength \(g\) is small, the exponential wave packets around different coupling points begin to overlap because the localization length \(\lambda_{\beta}\) is comparable to or even larger than the distance between the coupling points (see the left column of Fig. 3A). On the contrary, for a large coupling strength, the photons are highly localized around the coupling points (see the right column of Fig. 3A). In addition, for the BS in the lower band gap, the photonic excitation amplitudes of all the sites have the same sign (see the upper row of Fig. 3A). In contrast, for the BS in the upper band gap, the photonic excitation amplitudes of neighboring sites have opposite signs (see the lower row of Fig. 3A).
To further understand the properties of BS wave functions, we take the upper BS as an example and provide in Fig. 3B the localization length \(\lambda_{+1}\) and the atomic excited-state population \(P_{+1}=(\cos\theta_{+1})^{2}\) as functions of \(g\) for different \(\Delta n\) and \(\delta\). When \(\Delta n\to\infty\), the system can be regarded as an effective small atom with coupling strength \(\sqrt{N_{\rm c}}g\), thus the corresponding \(\lambda_{+1}\) and \(P_{+1}\) are shown by the dashed lines in Fig. 3B for reference. The localization length \(\lambda_{+1}\) decreases monotonically, i.e., the photonic cloud becomes more and more localized, as the coupling \(g\) increases, as shown by the upper row of Fig. 3B. In addition, when the atomic frequency is inside the upper band gap, i.e., \(\delta>2J\), we have \(\lambda_{+1}\to\left[\mathrm{arccosh}\left(\frac{|\delta|}{2J}\right)\right]^{-1}\) and \(P_{+1}\to 1\) as \(g\to 0\) for different \(\Delta n\) (see the left column of Fig. 3B). That is to say, the upper BS becomes more atom-like as \(g\to 0\), and only a small amount of residual photonic cloud with finite localization length remains. On the other hand, for \(\delta<2J\), we have \(\lambda_{+1}\to\infty\) and \(P_{+1}\to 0\) as \(g\) decreases to the threshold value obtained previously [\(g\to\sqrt{2J(2J-\delta)/(N_{\rm c}\Delta n)}\) for \(N_{\rm c}\in\mathbb{E}^{+}\) and \(\Delta n\in\mathbb{O}^{+}\), and \(g\to 0\) for other values of \(N_{\rm c}\) and \(\Delta n\), see the right column of Fig. 3B], i.e., the upper BS becomes more photon-like and eventually becomes indistinguishable from the propagating waveguide modes.
In the strong coupling limit \(g\gg J\), we have \(\lambda_{\beta}\ll 1\) (see, for example, the upper row of Fig. 3B, where the
Figure 3: (A) The spatial wave function distribution of photons when a giant atom couples to the coupled-resonator waveguide through two connection points with distance \(\Delta n=6\). The coupling strength at each connection points is set to \(g=J\) (left column) and \(g=5J\) (right column). The upper (lower) row corresponds to the lower (upper) BS. The black disks in each panel are used to label the positions of the sites connecting to the atom. (B) The localization lengths (upper row) and the corresponding atomic populations (lower row) for a single giant atom with \(N_{\rm c}=2\) are plotted as functions of the coupling strength for different values of \(\Delta n\). The atom-photon detuning is set to \(\delta=2.2J\) (left column) and \(\delta=0\) (right column). For comparison, in each panel the dashed line indicates the corresponding localization length or atomic population for the case of \(\Delta n\to\infty\). The blue grid lines in the right column are used to label the threshold values of the BSs for \(\Delta n=1,3\).
case of \(\beta=+1\) is shown), which means that the photons are almost confined in the resonators connecting to the atom. The system can thus be looked on as a cavity-QED system with an atom simultaneously coupled to \(N_{\mathrm{c}}\) noninteracting cavities, or equivalently, an atom coupled to a single cavity with strength \(\sqrt{N_{\mathrm{c}}}g\). Thus, one can see that the dashed line in each panel of Fig. 3B is the asymptotic expression of the solid curves when \(g\gg J\). Accordingly, the BSs can be approximated as \(\big{(}\cos\theta_{\beta}\sigma^{+}+\beta\sin\theta_{\beta}\sum_{l=1}^{N_{\mathrm{c}}}c_{n_{l}}^{\dagger}/\sqrt{N_{\mathrm{c}}}\big{)}|G\rangle\) with corresponding energies \(E_{\beta}\simeq\delta/2+\beta\sqrt{\delta^{2}+4N_{\mathrm{c}}g^{2}}/2\) (e.g., for \(\delta=0\), \(E_{\beta}\simeq\beta\sqrt{N_{\mathrm{c}}}g\), as shown in Fig. 2). The mixing angle satisfies \(\tan 2\theta_{\beta}=2\beta\sqrt{N_{\mathrm{c}}}g/\delta\), with \(\cos\theta_{\beta}=\sin\theta_{-\beta}\) [see, for example, the lower row of Fig. 3B, where the atomic excited-state population \(P_{+1}=(\cos\theta_{+1})^{2}\to 1/2\) for \(g\gg J,\delta\)].
## V Bound states for double giant atoms
Now we consider the case of two atoms (denoted by \(a\) and \(b\)). According to Eq. (12), we obtain the transcendental equation for the energy of BS
\[E=\frac{1}{2}\left(\tilde{\delta}_{a}^{(\beta)}(E)+\tilde{\delta}_{b}^{(\beta )}(E)+\zeta\sqrt{[\tilde{\delta}_{ab}^{(\beta)}(E)]^{2}+4[\Sigma_{ab}^{(\beta) }(E)]^{2}}\right), \tag{25}\]
with \(\zeta=\pm 1\). The effective detuning is defined as \(\tilde{\delta}_{m}^{(\beta)}(E)=\delta_{m}+\Sigma_{mm}^{(\beta)}(E)\) (\(m=a,b\)), which means that the detuning \(\delta_{m}\) of a bare atom is shifted by the self-energy correction \(\Sigma_{mm}^{(\beta)}(E)\). And \(\tilde{\delta}_{ab}^{(\beta)}(E)=\tilde{\delta}_{a}^{(\beta)}(E)-\tilde{ \delta}_{b}^{(\beta)}(E)\) is the difference between the effective detunings. \(\Sigma_{ab}^{(\beta)}(E)\) is the mutual energy between the atom \(a\) and \(b\). From Eq. (25), one can fix at most four real solutions, labeled as \(E_{\beta\zeta}\). The corresponding atomic excitation amplitudes can be obtained from Eq. (7) and take the form
\[u_{a}^{(\beta\zeta)}=\sin\Theta_{\beta\zeta},\quad u_{b}^{(\beta\zeta)}=\cos \Theta_{\beta\zeta}, \tag{26}\]
with the mixing angle
\[\tan\Theta_{\beta\zeta}=\frac{-2\Sigma_{ab}^{(\beta)}(E_{\beta\zeta})}{\tilde {\delta}_{ab}^{(\beta)}(E_{\beta\zeta})-\zeta\sqrt{[\tilde{\delta}_{ab}^{(\beta )}(E_{\beta\zeta})]^{2}+4[\Sigma_{ab}^{(\beta)}(E_{\beta\zeta})]^{2}}}. \tag{27}\]
The corresponding atom-photon BSs in real space can be expressed by Eq. (13) or Eq. (17) by letting \(m=a,b\) and \(s=\zeta\).
Note that above results are applicable to any configuration containing double giant atoms, each with multiple connection points. In what follows, we focus on the special case that the frequencies of the two atoms are equal (i.e., \(\delta_{a}=\delta_{b}=\delta\)), each of the atoms has two connection points, and all the coupling strengths are identical (\(g_{ml}=g\)). We also assume that the coupling points are symmetrically distributed, thus each BS has definite parity. Moreover, as summarized in Ref. [45], there are three basic topologies for double-giant-atom structures considered here, called separate, braided, and nested giant atoms, respectively. The corresponding configurations are shown schematically in Fig. 4. In the following subsections, we will discuss in detail the characteristic of the BSs for these configurations, including the threshold conditions of the BSs, the photon-mediated dipole-dipole interactions between the atoms, the shapes of the photonic wave functions, and so on.
### Two separate atoms
First we consider the configuration containing two separate atoms. For the structures with parity symmetry we are interested in, we can let \(\Delta n=n_{a2}-n_{a1}=n_{b2}-n_{b1}\) and \(\Delta m=n_{b1}-n_{a2}\), as shown in Fig. 4(a). Thus from Eq. (10), we can see that the self-energies of the two atoms are identical \(\Sigma_{aa}^{(\beta)}(E)=\Sigma_{bb}^{(\beta)}(E)\equiv\tilde{\Sigma}_{\beta} (E)\), with
\[\tilde{\Sigma}_{\beta}(E)=\frac{2g^{2}\left(1+(-\beta)^{\Delta n}e^{-\frac{ \Delta n}{A(E)}}\right)}{E\sqrt{1-\frac{4J^{2}}{E^{2}}}}. \tag{28}\]
And the mutual-energy becomes
\[\Sigma_{ab}^{(\beta)}(E)=\frac{1}{2}(-\beta)^{\Delta m}e^{-\frac{\Delta m}{A(E )}}\left(1+(-\beta)^{\Delta n}e^{-\frac{\Delta n}{A(E)}}\right)\tilde{\Sigma} _{\beta}(E). \tag{29}\]
For present case of two identical atoms with \(\tilde{\delta}_{ab}^{(\beta)}(E)=0\), equation (25) becomes
\[E-\delta=\Sigma_{\beta\alpha}(E), \tag{30}\]
with
\[\Sigma_{\beta\alpha}(E)=\tilde{\Sigma}_{\beta}(E)+\alpha\Sigma_{ab}^{(\beta)} (E). \tag{31}\]
Figure 4: Sketches of two giant atoms coupled to an array of coupled optical resonators for three distinct topologies: (a) two separate giant atoms, (b) two braided giant atoms, and (c) two nested giant atoms.
Equation (30) has up to four solutions, denoted as \(E_{\mathbf{\beta}\alpha}\), outside the scattering continuum. The corresponding atomic excitation amplitudes become \(u_{a}^{(\mathbf{\beta}\alpha)}=1/\sqrt{2}\) and \(u_{b}^{(\mathbf{\beta}\alpha)}=\alpha/\sqrt{2}\). Here \(\alpha=\zeta\text{sgn}(\Sigma_{ab}^{(\mathbf{\beta})})=\pm 1\) is used to label the even- and odd-parity states, respectively. The corresponding BSs can be expressed in terms of Eq. (13) or Eq. (17) by letting \(m=a,b\) and \(s=\alpha\).
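As a concrete check of Eq. (30), the sketch below solves for the even- and odd-parity BS energies of two identical separate giant atoms [Fig. 4(a)], reusing the `sigma` helper and the root-search strategy of Sec. III; the site layout encodes the separate topology, and the scan window is again an illustrative choice.

```python
import numpy as np
from scipy.optimize import brentq

def separate_pair_bs(delta, g, dn, dm, J, beta=-1):
    """Even (alpha = +1) and odd (alpha = -1) BS energies of two separate giant atoms from Eq. (30);
    returns a dict alpha -> energy, with None when no root is found on the chosen side of the band."""
    sites_a = [0, dn]
    sites_b = [dn + dm, 2 * dn + dm]
    gl = [g, g]
    out = {}
    for alpha in (+1, -1):
        def F(E, alpha=alpha):
            self_energy = sigma(E, beta, gl, sites_a, gl, sites_a, J)   # Eq. (28)
            mutual = sigma(E, beta, gl, sites_a, gl, sites_b, J)        # Eq. (29)
            return E - delta - (self_energy + alpha * mutual)
        out[alpha] = None
        grid = beta * np.linspace(2.0001 * J, 30.0 * J, 6000)
        for a, b in zip(grid[:-1], grid[1:]):
            if F(a) * F(b) < 0:
                out[alpha] = brentq(F, min(a, b), max(a, b))
                break
    return out
```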
By analyzing Eq. (30), we can derive the threshold conditions for the BSs for this case (see more details in Appendix B.1), which are summarized below. To verify these conditions, we exhibit in Fig. 5A the single-excitation spectra as functions of \(g\) (with \(\delta=0\)) for different values of \(\Delta n\) and \(\Delta m\).
_Threshold conditions for the lower BSs_:
(i) When \(\delta<-2J\), there always exist two lower BSs, and the energy of the even-parity state is lower than that of the odd-parity one.
(ii) When \(\delta>-2J\), the lower BS with even parity always exists, while another odd-parity one with higher energy exists only when \(g>\sqrt{J(2J+\delta)/(\Delta n+2\Delta m)}\), as shown in Fig. 5A.
_Threshold conditions for the upper BSs_:
(i) When \(\delta>2J\), there always exist two upper BSs with opposite parities. If \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), the energy of the odd-parity (even-parity) state is higher than that of the even-parity (odd-parity) one.
(ii) When \(\delta<2J\), the appearance of upper BSs depends on the parameters \(\Delta n\) and \(\Delta m\), which can be summarized as follows:
* If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), two BSs with opposite parities appear simultaneously when \(g>\sqrt{J(2J-\delta)/\Delta n}\), and the energy of the odd-parity (even-parity) state is higher than that of the even-parity (odd-parity) one, as shown in the left column of Fig. 5A.
* If \(\Delta n\in\mathbb{E}^{+}\), and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), there always exists an upper BS with odd (even) parity, while another upper BS with opposite parity and lower energy only exists when \(g>\sqrt{J(2J-\delta)/(\Delta n+2\Delta m)}\), as shown in the right column of Fig. 5A.
In the strong coupling regime \(g\gg J\), the photons are highly localized around the coupling points. Thus only a small overlap exists between the photonic wave functions associated with the single-atom BSs, which can induce a small splitting of the energies centered at \(E_{\mathbf{\beta}}\simeq\beta\sqrt{2}g\) (assuming \(\delta=0\)) with \(|E_{\mathbf{\beta},1}-E_{\mathbf{\beta},-1}|\ll|E_{\mathbf{\beta}}|\) (Here \(E_{\mathbf{\beta}}\) is the BS energy for a single giant atom), charactering the dipole-dipole-like coupling between distant dressed states. In this regime, the Hamiltonian can be approximated as
\[H\simeq E_{\mathbf{\beta}}\sum_{i=a,b}D_{\mathbf{\beta}}^{\dagger}(i)D_{\mathbf{\beta}}(i )+\frac{1}{2}U_{\mathbf{\beta}}\left[D_{\mathbf{\beta}}^{\dagger}(a)D_{\mathbf{\beta}}(b) +\text{H.c.}\right], \tag{32}\]
where \(D_{\mathbf{\beta}}(i)\) are the single-atom dressed-state operators, which are treated as mutually commuting degrees of freedom, because there is only a small overlap between the photonic clouds associated with different single-atom BSs.
\[U_{\mathbf{\beta}}\simeq\frac{1}{2}(-1)^{\Delta m}\beta^{\Delta m+1}(\sqrt{2} \tilde{g})^{1-\Delta m}J \tag{33}\]
is the strength of dipole-dipole-like coupling between distant dressed states (\(\delta=0\) has been assumed). This type of interaction can be understood as the emitters exchanging virtual photons through the bath, which are localized
Figure 5: (A) The BS energy levels for the case of two separate atoms are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(\Delta m\). The inset in each panel shows the corresponding schematic of system. The red solid (blue dashed) lines indicate the BS energies of the even-parity (odd-parity) states. The photonic band is shown by the shaded region. For all plots \(\delta=0\) is assumed. (B) The spatial wave function distribution of photons when two separate giant atoms are coupled to waveguide with distance parameters \(\Delta n=\Delta m=8\). The coupling strength at each connection points is set to \(g=3J\) (upper row) and \(g=0.4J\) (lower row), which is larger than the threshold so that the even- and odd-parity states coexist. The left (right) column corresponds to the even-parity (odd-parity) BS. The black (white) disks in each panel are used to label the positions of the sites connecting to the atom \(a\) (\(b\)).
around the coupling points. Here \(\tilde{g}=g/J\) is the atom-waveguide coupling strength scaled by \(J\). One can see that in present strong coupling regime, \(U_{\beta}\) is only dependent on \(\Delta m\) and not influenced by \(\Delta n\), because it is mainly contributed from the overlap of the photonic clouds between the coupling points \(n_{a2}\) and \(n_{b1}\) (with a distance of \(\Delta m\)). Furthermore, \(U_{\beta}\) should be proportional to the effective coupling strength \(\sqrt{2}g\) between a single atom and the waveguide, and meanwhile decrease exponentially with \(\Delta m\) (charactered by a factor \(e^{-\Delta m/\lambda_{B}}\), which can be further approximated as \([J/(\sqrt{2}g)]^{\Delta m}\) in the strong coupling limit). Consequently, for \(\Delta m=1\), the trade off between these two effects results in a constant dipole-dipole-like coupling \(|U_{\beta}|=J/2\) independent of \(g\), thus the splitting between energy levels \(E_{\beta,1}\) and \(E_{\beta,-1}\) is always \(J/2\) for large \(g\), as shown in the upper row of Fig. 5A. For \(\Delta m>1\), this leads to a \((\Delta m-1)\)-th power inverse law of \(|U_{\beta}|\propto 1/g^{\Delta m-1}\), thus the splitting vanishes rapidly as \(g\) increases, resulting in \(E_{\beta,1}\simeq E_{\beta,-1}\simeq E_{\beta}\simeq\beta\sqrt{2}g\) for large \(g\), as shown in the lower row of Fig. 5A.
Figure 5B shows the photonic wave functions of the lower BSs \(\tilde{f}_{j}^{(-1,\,\alpha)}=-\sin\theta_{-1,\alpha}f_{j}^{(-1,\,\alpha)}\) when two separate atoms couple to the waveguide. For relatively large coupling strength, the photons are highly localized around the coupling points, i.e., the exponential wave packets around different coupling points are almost separate with very small overlap [see the first row of Fig. 5B]. On the contrary, for small coupling strength, the overlap between the wave packets becomes remarkable and the mutual distortion of the wave packets should be taken into account. As a result, the even-parity BS with lower energy is a "bonding" state, with the photons being more localized in the area between the two atoms (i.e., between \(n_{a2}\) and \(n_{b1}\)). In contrast, the odd-parity state with higher energy can be regarded as an "antibonding" state, with the photons being more delocalized, i.e., the excitations of the sites between \(n_{a2}\) and \(n_{b1}\) are suppressed, while the sites on the left (right) of \(n_{a2}\) (\(n_{b1}\)) are more likely to be excited [see the lower row of Fig. 5B].
### Two braided atoms
Next we consider the configuration containing two braided atoms. Also, we focus on the structures with parity symmetry, and assume \(\Delta n=n_{a2}-n_{a1}=n_{b2}-n_{b1}\) and \(\Delta m=n_{a2}-n_{b1}\), respectively, as shown in Fig. 4(b). Thus the two atoms are also with equal self-energy \(\tilde{\Sigma}_{\beta}(E)\) described by Eq. (28). While the mutual-energy can be written as
\[\Sigma_{ab}^{(\beta)}(E)=\frac{g^{2}(-\beta)^{\Delta m}e^{-\frac{\Delta n- \Delta m}{\lambda(E)}}\,\left(e^{\frac{\Delta n-2\Delta m}{\lambda(E)}}+e^{- \frac{\Delta n}{\lambda(E)}}+2(-\beta)^{\Delta n}\right)}{E\sqrt{1-\frac{4J^{2 }}{E^{2}}}}. \tag{34}\]
After plugging Eqs. (28) and (34) into Eq. (31), one can obtain up to four BS energies \(E_{\beta\alpha}\) from Eq. (30) [note that Eqs. (30) and (31) are also applicable to this configuration if the mutual energy takes the form of Eq. (34)]. The corresponding BSs have definite parities with atomic excitation amplitudes \(u_{a}^{(\beta\alpha)}=1/\sqrt{2}\) and \(u_{b}^{(\beta\alpha)}=\alpha/\sqrt{2}\), respectively.
After a detailed analysis (see Appendix B.2 for details), we can obtain the threshold conditions of the BSs for this case, which are summarized below. To verify these conditions, we exhibit in Fig. 6A the single-excitation spectra as functions of \(g\) (with \(\delta=0\)) for different values of \(\Delta n\) and \(\Delta m\).
_Threshold conditions for the lower BSs_:
(i) When \(\delta<-2J\), there always exist two lower BSs with opposite parities, and the energy of even-parity state is lower than that of the odd-parity one.
(ii) When \(\delta>-2J\), a lower BS with even parity always exists, and another odd-parity one with higher energy appears when \(g>\sqrt{J(2J+\delta)/(\Delta n-\Delta m)}\), as shown in Fig. 6A.
_Threshold conditions for the upper BSs_:
(i) When \(\delta>2J\), there always exist two upper BSs with opposite parities. The influences of \(\Delta n\) and \(\Delta m\) on these states can be summarized as follows:
* If \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), \(\Delta n>2\Delta m\), and \(2J<\delta<E_{c}\), there exists a coupling strength \(g=g_{c}\) given by Eq. (11), where the even-parity and the odd-parity states are degenerate. Here \(E_{c}\) is the solution of the transcendental equation \(\Sigma_{ab}^{(1)}(E)=0\) in the domain \(E>2J\), as discussed in Appendix B.2. Additionally, the energy of the BS with odd (even) parity is lower than that of the even-parity (odd-parity) one when \(g<g_{c}\), while the result is opposite when \(g>g_{c}\).
* If \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), \(\Delta n>2\Delta m\), and \(\delta\geq E_{c}\), the energy of the odd-parity (even-parity) state is higher than that of the even-parity (odd-parity) one.
* If \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)) and \(\Delta n<2\Delta m\), the energy of the odd-parity (even-parity) state is lower than that of the even-parity (odd-parity) one.
* If \(\Delta n\in\mathbb{E}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), the energy of the BS with odd (even) parity is higher than that of the even-parity (odd-parity) one.
(ii) When \(\delta<2J\), the appearance of upper BSs also depends on the parameters \(\Delta n\) and \(\Delta m\), which can be summarized as follows:
* If \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)) and \(\Delta n>2\Delta m\), there exists an upper BS with parity \(\alpha=\pm 1\) when \(g>\sqrt{J(2J-\delta)/(\Delta n\pm\Delta m)}\) [\(g>\sqrt{J(2J-\delta)/(\Delta n\mp\Delta m)}\)]. If they both exist, the energy of the odd-parity (even-parity) state is lower than that of the even-parity (odd-parity) one when \(g<g_{c}\), whereas the result is opposite when \(g>g_{c}\). In addition, they are degenerate when \(g=g_{c}\), as shown in the upper left panel of Fig. 6A. One can see that for the parameters in this panel, the degenerate point is \(g_{c}\simeq 1.356J\), where the energies of the even- and the odd-parity states are equal, with \(E_{1,1}=E_{1,-1}=E_{c}\simeq 2.383J\).
* If \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)) and \(\Delta n<2\Delta m\), the threshold conditions for the upper BSs are the same as in the case of \(\Delta n>2\Delta m\). However, the energy of the odd-parity (even-parity) state is always lower than that of the even-parity (odd-parity) one, as shown in the upper right panel of Fig. 6A.
* If \(\Delta n\in\mathbb{E}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), an upper BS with odd (even) parity always exists; another even-parity (odd-parity) one, with lower energy, only exists when \(g>\sqrt{J(2J-\delta)/(\Delta n-\Delta m)}\), as shown in the lower row of Fig. 6A.
In the strong coupling regime \(g\gg J\), the system can also be described by the effective Hamiltonian (32). In this case, the strength of dipole-dipole-like coupling between the two atoms can be approximated as (assuming \(\delta=0\))
\[U_{\beta}\simeq\left\{\begin{array}{ll}\frac{1}{2}(-1)^{\Delta m}\beta^{\Delta m+1}(\sqrt{2}\tilde{g})^{1-\Delta m}J,&\Delta n>2\Delta m,\\ \\ (-1)^{\Delta n-\Delta m}\beta^{\Delta n-\Delta m+1}(\sqrt{2}\tilde{g})^{1-(\Delta n-\Delta m)}J,&\Delta n<2\Delta m,\\ \\ \frac{3}{2}(-1)^{\Delta m}\beta^{\Delta m+1}(\sqrt{2}\tilde{g})^{1-\Delta m}J,&\Delta n=2\Delta m.\end{array}\right. \tag{35}\]
The factors \(1/2\), \(1\), and \(3/2\) in the above expressions are related to the degree of overlap of the photonic clouds belonging to different atoms. Specifically, when \(\Delta n>2\Delta m\), \(U_{\beta}\) arises mainly from the overlap of the photonic wave functions between the coupling points \(n_{b1}\) and \(n_{a2}\), which are a distance \(\Delta m\) apart (note that \(\Delta m<\Delta n-\Delta m\), i.e., \(|n_{a2}-n_{b1}|<|n_{a1}-n_{b1}|=|n_{a2}-n_{b2}|\) for this case), thus the expression of \(U_{\beta}\) is the same as that of two separate atoms [see Eqs. (33) and (35)]. When \(\Delta n<2\Delta m\), \(U_{\beta}\) arises mainly from the overlaps of the photonic clouds in the regions \(n_{a1}<j<n_{b1}\) and \(n_{a2}<j<n_{b2}\), both of width \(\Delta n-\Delta m\) (note that \(\Delta n-\Delta m<\Delta m\), i.e., \(|n_{a1}-n_{b1}|=|n_{a2}-n_{b2}|<|n_{a2}-n_{b1}|\) for this case), thus the expression of \(U_{\beta}\) for this case can be obtained from that for the case of \(\Delta n>2\Delta m\) by replacing the factor \(1/2\) with \(1\) and replacing \(\Delta m\) with \(\Delta n-\Delta m\), respectively [see Eq. (35)]. When \(\Delta n=2\Delta m\), the overlaps of the photonic clouds in all three interatomic regions contribute equally (note that \(\Delta m=\Delta n-\Delta m\), i.e., \(|n_{a1}-n_{b1}|=|n_{a2}-n_{b1}|=|n_{a2}-n_{b2}|=\Delta m\) for this case), thus the expression of \(U_{\beta}\) for this case can be obtained from that for the case of \(\Delta n>2\Delta m\) by replacing the factor \(1/2\) with \(3/2\) [see Eq. (35)]. In the strong coupling regime, the splitting between the energy levels \(E_{\beta,1}\) and \(E_{\beta,-1}\) is approximately \(|U_{\beta}|\). Specifically, (i) for \(\Delta m=1,\Delta n>2\), the splitting is approximately \(J/2\), as shown in the upper left panel of Fig. 6A; (ii) for \(\Delta m>1,\Delta n-\Delta m=1\), the splitting is approximately \(J\), as shown in the upper right panel of Fig. 6A; (iii) for \(\Delta m=1,\Delta n=2\), the splitting is approximately \(3J/2\), as shown in the lower left panel of Fig. 6A; (iv) for \(\Delta m>1,\Delta n-\Delta m>1\), and \(\Delta n>2\Delta m\) (\(\Delta n<2\Delta m\)), \(|U_{\beta}|\propto 1/g^{\Delta m-1}\) (\(|U_{\beta}|\propto 1/g^{\Delta n-\Delta m-1}\)) exhibits an inverse power law. Thus the splitting vanishes rapidly as \(g\) increases, resulting in \(E_{\beta,1}\simeq E_{\beta,-1}\simeq E_{\beta}\simeq\beta\sqrt{2}g\) when \(g\) is large enough, as shown in the lower right panel of Fig. 6A.
Figure 6B shows the photonic wave functions of the lower BSs \(\tilde{f}_{j}^{(-1,\alpha)}=-\sin\theta_{-1,\alpha}f_{j}^{(-1,\alpha)}\) for this case.
Figure 6: (A) The BS energy levels for the case of two braided atoms are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(\Delta m\). The inset in each panel shows the corresponding schematic of the system. The red solid (blue dashed) lines indicate the BS energies of the even-parity (odd-parity) states. The photonic band is shown by the shaded region. For all plots \(\delta=0\) is assumed. (B) The spatial wave function distribution of photons when two braided giant atoms are coupled to the waveguide with distance parameters \(\Delta n=16,\Delta m=8\). The coupling strength at each connection point is set to \(g=3J\) (upper row) and \(g=0.6J\) (lower row), which is larger than the threshold so that the even- and odd-parity states coexist. The left (right) column corresponds to the even-parity (odd-parity) BS. The black (white) disks in each panel label the positions of the sites connecting to atom \(a\) (\(b\)).
Similar to the case of separate atoms, for relatively large \(g\) the photonic clouds are highly localized around the coupling points. For small coupling strength, the photonic cloud of the even-parity BS is more localized in the interatomic region (i.e., between \(n_{a1}\) and \(n_{b2}\)). In contrast, for the odd-parity state with higher energy, the photonic wave function becomes more delocalized, i.e., the excitations of the sites between \(n_{a1}\) and \(n_{b2}\) are suppressed, while the sites on the left (right) of \(n_{a1}\) (\(n_{b2}\)) are more likely to be excited.
### Two nested atoms
Finally, we consider the configuration containing two nested atoms. Again, we focus on structures with parity symmetry, and assume that the distance between the connection points of the inner atom (atom \(b\)) is \(\Delta n=n_{b2}-n_{b1}\), and the distance between the connection points of different atoms is \(\Delta m=n_{b1}-n_{a1}=n_{a2}-n_{b2}\), as shown in Fig. 4(c). Thus the self-energy \(\Sigma_{bb}^{(\beta)}(E)\) can again be described by Eq. (28), while the self-energy \(\Sigma_{aa}^{(\beta)}(E)\) is obtained by replacing \(\Delta n\) in Eq. (28) with \(\Delta n+2\Delta m\). The mutual energy becomes
\[\Sigma_{ab}^{(\beta)}(E)=(-\beta)^{\Delta m}e^{-\frac{\Delta m}{\lambda(E)}}\tilde{\Sigma}_{\beta}(E). \tag{36}\]
Thus the BS energies satisfy the equation
\[E-\delta=\Sigma_{\beta\xi}(E), \tag{37}\]
with
\[\begin{split}\Sigma_{\beta\xi}(E)=&\frac{1}{2}\left[\Sigma_{aa}^{(\beta)}(E)+\Sigma_{bb}^{(\beta)}(E)\right]\\ +&\frac{\xi}{2}\sqrt{\left[\Sigma_{aa}^{(\beta)}(E)-\Sigma_{bb}^{(\beta)}(E)\right]^{2}+4\left[\Sigma_{ab}^{(\beta)}(E)\right]^{2}}.\end{split} \tag{38}\]
Note that in this case, different from the separate and braided configurations, the two atoms are not identical, because \(\Sigma_{aa}^{(\beta)}(E)\neq\Sigma_{bb}^{(\beta)}(E)\). One can obtain up to four real solutions of Eq. (37), labeled as \(E_{\beta\xi}\), corresponding to the energies of the BSs. For fixed \(\beta\), we have \(E_{\beta,1}>E_{\beta,-1}\) [see Eq. (38)]. The corresponding atomic excitation amplitudes are described by Eq. (26). Note that for both states with \(E_{\beta,\pm 1}\), the photonic excitation amplitudes \(f_{j}^{(\beta,\pm 1)}\) are always _even_ functions of \(j\), which is different from the separate and braided configurations, where a pair of upper (or lower) BSs have opposite parities. However, we can distinguish them by the sign of \(u_{a}^{(\beta\xi)}/u_{b}^{(\beta\xi)}\), labeled as \(\text{sgn}(\tan\Theta_{\beta\xi})\equiv\eta\), which depends on the signs of \(\xi\) and \(\Sigma_{ab}^{(\beta)}\) [see Eq. (27)]. That is to say, the atomic excitation amplitudes have the same sign (opposite signs) for \(\eta=1\) (\(\eta=-1\)). In what follows, we relabel the quantities \(E_{\beta\xi},f_{j}^{(\beta\xi)}\) as \(E_{\beta\eta},f_{j}^{(\beta\eta)}\). The corresponding BSs can be expressed in terms of Eq. (13) or Eq. (17) by letting \(s=\eta\).
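The branch structure of Eq. (38) can again be traced numerically. The sketch below is a minimal illustration for two nested atoms; as before, it assumes the two-point form of Eq. (28) and the localization factor \(e^{-1/\lambda(E)}=|E|/(2J)-\sqrt{E^{2}/(4J^{2})-1}\), which are inferred from the surrounding discussion rather than quoted verbatim.

```python
import numpy as np
from scipy.optimize import brentq

J, g, delta = 1.0, 0.6, 0.0    # hopping, coupling, detuning (delta = Omega - omega_c)
dn, dm = 8, 8                  # nested configuration of Fig. 7B

def x_of(E):
    # e^{-1/lambda(E)} outside the band (assumed form for the tight-binding waveguide)
    return np.abs(E) / (2 * J) - np.sqrt(E**2 / (4 * J**2) - 1.0)

def sigma_two_legs(E, beta, d):
    # Eq. (28)-type self-energy of a giant atom whose two legs are a distance d apart
    return 2 * g**2 * (1 + (-beta)**d * x_of(E)**d) / (E * np.sqrt(1 - 4 * J**2 / E**2))

def sigma_xi(E, beta, xi):
    s_bb = sigma_two_legs(E, beta, dn)             # inner atom b
    s_aa = sigma_two_legs(E, beta, dn + 2 * dm)    # outer atom a
    s_ab = (-beta)**dm * x_of(E)**dm * sigma_two_legs(E, beta, dn)   # Eq. (36)
    return 0.5 * (s_aa + s_bb) + 0.5 * xi * np.sqrt((s_aa - s_bb)**2 + 4 * s_ab**2)  # Eq. (38)

def bound_state(beta, xi):
    f = lambda E: E - delta - sigma_xi(E, beta, xi)
    lo, hi = (2*J + 1e-9, 12*J) if beta == 1 else (-12*J, -2*J - 1e-9)
    grid = np.linspace(lo, hi, 40000)
    v = f(grid)
    idx = np.where(v[:-1] * v[1:] < 0)[0]
    return brentq(f, grid[idx[0]], grid[idx[0] + 1]) if len(idx) else None

for beta in (+1, -1):
    for xi in (+1, -1):
        print(f"beta={beta:+d}, xi={xi:+d}: E = {bound_state(beta, xi)}")
```

The states found in this way are then relabeled by \(\eta=\mathrm{sgn}(u_{a}/u_{b})\) as described above.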
Again, we can obtain the threshold conditions of BSs (see more details in Appendix B.3), which are summarized below. To verify these conditions, we exhibit in Fig. 7A the single-excitation spectra as functions of \(g\) (with \(\delta=0\)) for different values of \(\Delta n\) and \(\Delta m\).
_Threshold conditions for the lower BSs_:
(i) When \(\delta<-2J\), there always exist two lower BSs, and the energy of the state with \(\eta=-1\) is higher than another one with \(\eta=1\).
(ii) When \(\delta>-2J\), there always exists a lower BS with \(\eta=1\), and another one with \(\eta=-1\) and higher energy appears when \(g>\sqrt{J(2J+\delta)/\Delta m}\), as shown in Fig. 7A.
_Threshold conditions for the upper BSs_:
(i) When \(\delta>2J\), there always exist two upper BSs. If \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), the energy of the state with \(\eta=-1\) (\(\eta=1\)) is higher.
(ii) When \(\delta<2J\), the appearance of upper BSs depends on the parameters \(\Delta n\) and \(\Delta m\), which can be summarized as follows:
* If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), an upper BS with \(\eta=\mp 1\) (\(\eta=\pm 1\)) appears when \[g>\sqrt{\frac{J(2J-\delta)}{\Delta n+\Delta m\pm\sqrt{\left(\Delta n\right)^{2}+\left(\Delta m\right)^{2}}}},\] and the energy of the state with \(\eta=-1\) (\(\eta=1\)) is higher, as shown in the left column of Fig. 7A.
* If \(\Delta n\in\mathbb{E}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), an upper BS with \(\eta=-1\) (\(\eta=1\)) always exists, and another one with lower energy and \(\eta=1\) (\(\eta=-1\)) appears when \(g>\sqrt{J(2J-\delta)/\Delta m}\), as shown in the right column of Fig. 7A.
In the strong coupling regime \(g\gg J\), and if \(\Delta n\geqslant\Delta m\), one can also obtain an effective Hamiltonian described by Eq. (32), in which the strength of the dipole-dipole-like coupling between the two atoms can be approximated as (assuming \(\delta=0\))
\[U_{\beta}\simeq(-1)^{\Delta m}\beta^{\Delta m+1}\big{(}\sqrt{2}\tilde{g}\big{)} ^{1-\Delta m}J, \tag{39}\]
which arises mainly from the overlaps of the photonic clouds in the regions \(n_{a1}<j<n_{b1}\) and \(n_{b2}<j<n_{a2}\), both of width \(\Delta m\) (note that \(\Delta m<\Delta n+\Delta m\), i.e., \(|n_{a1}-n_{b1}|=|n_{a2}-n_{b2}|<|n_{a1}-n_{b2}|=|n_{a2}-n_{b1}|\) for this case). Thus it is not a surprise that the expression of \(U_{\beta}\) for this case can be obtained from that for the case of two separate atoms by replacing the factor \(1/2\) with \(1\) [see Eqs. (33) and (39)]. In addition, for \(\Delta m=1\), we have \(|U_{\beta}|\simeq J\), indicating that the splitting between the energy levels \(E_{\beta,1}\) and \(E_{\beta,-1}\) is always \(J\) for large \(g\), as shown in the upper row of Fig. 7A. For \(\Delta m>1\), the splitting exhibits a \((\Delta m-1)\)-th power inverse law, \(|U_{\beta}|\propto 1/g^{\Delta m-1}\), resulting in \(E_{\beta,1}\simeq E_{\beta,-1}\simeq E_{\beta}\simeq\beta\sqrt{2}g\) for large \(g\), as shown in the lower row of Fig. 7A.
Figure 7B shows the photonic wave functions of the lower BSs \(\tilde{f}_{j}^{(-1,\eta)}=-\sin\theta_{-1,\eta}f_{j}^{(-1,\eta)}\) for this case. For relatively large \(g\), the photonic clouds are highly localized around the coupling points. For small coupling strength, the photonic cloud of the BS with \(\eta=1\) is more localized in the region between \(n_{a1}\) and \(n_{a2}\). In contrast, for the state with \(\eta=-1\), the photonic wave function becomes more delocalized, i.e., the excitations of the sites in the interatomic regions \(n_{a1}<j<n_{b1}\) and \(n_{b2}<j<n_{a2}\) are suppressed, while the sites in the regions \(j<n_{a1}\), \(j>n_{a2}\) and \(n_{b1}<j<n_{b2}\) are more likely to be excited.
## VI Bound states for multiple giant atoms
The above analysis can be extended to multiple atoms. As discussed in Sec. III, when a 1D chain of \(N_{\rm a}\) atoms couples to a waveguide, there are at most \(N_{\rm a}\) lower (upper) BSs. For large \(N_{\rm a}\), similar to small atoms [30], the lower (upper) BSs form a metaband structure for propagating dressed-state excitations below (above) the bare photonic band, as shown by the solid lines in Figs. 8A-10A and Fig. 11B, which are obtained numerically. Due to the diversity brought about by the distribution of connection points, the metaband structures and their corresponding threshold conditions for multi-giant-atom systems become more complex than those for small atoms. Importantly, the structures of the bands can be explained in terms of photon-mediated interactions between atoms. This could make the system a useful platform for quantum simulation. In this section, we focus on the 1D array of interacting dressed atoms described by: (i) the normal 1D tight-binding model, (ii) the SSH model [70].
### Metaband structures for dressed-atom array described by normal 1D tight-binding model
Now we consider a 1D chain of \(N_{\rm a}\) atoms (each with two coupling points) coupled to a waveguide, with each pair of neighboring atoms being in one of the three basic configurations summarized in Fig. 4. The frequencies of all the atoms are equal (i.e., \(\delta_{i}=\delta\)), and all the coupling strengths are identical (labeled as \(g\)). We will show that for all three types of configurations, under the condition \(N\gg N_{\rm a}\gg 1\), the threshold conditions of the metabands can be obtained analytically. In fact, under the condition \(N_{\rm a}\gg 1\), the excitation probabilities of all the atoms are equal, so one can assume that the metaband is bounded (or _approximately bounded_ for a 1D chain of braided atoms, see Sec. VI.1.2) by the states described by
\[\ket{\psi}=\left(\cos\theta\sum_{p=1}^{N_{\rm a}}\frac{\gamma^{p}}{\sqrt{N_{ \rm a}}}\sigma_{p}^{+}+\sin\theta\sum_{k}f_{k}\,c_{k}^{\dagger}\right)\ket{G}, \tag{40}\]
where \(\gamma=\pm 1\) corresponds to the upper or lower borders of a metaband, depending on parameters. Imposing the requirement that the ansatz (40) be an eigenstate of Hamiltonian (3) with eigenvalue \(E\) yields a transcendental equation
\[E-\delta=\Sigma_{\beta\gamma}^{(N_{\rm a}\gg 1)}(E), \tag{41}\]
with energy correction function
\[\Sigma_{\beta\gamma}^{(N_{\rm a}\gg 1)}(E)=\frac{g^{2}}{E\sqrt{1-\frac{4J^{2}}{E^{2}}}}\sum_{p=-\frac{N_{\rm a}}{2}}^{\frac{N_{\rm a}}{2}}\sum_{l,l^{\prime}=1}^{2}\gamma^{p}\left(-\beta e^{-\frac{1}{\lambda(E)}}\right)^{\left|n_{0l}-n_{pl^{\prime}}\right|}. \tag{42}\]
Figure 7: (A) The BS energy levels for the case of two nested atoms are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(\Delta m\). The inset in each panel shows the corresponding schematic of the system. The red solid (red dashed) lines indicate the BS energies of the states with \(\eta=1\) (\(\eta=-1\)). The photonic band is shown by the shaded region. For all plots \(\delta=0\) is assumed. (B) The spatial wave function distribution of photons when two nested giant atoms are coupled to the waveguide with distance parameters \(\Delta n=\Delta m=8\). The coupling strength at each connection point is set to \(g=3J\) (upper row) and \(g=0.6J\) (lower row), which is larger than the threshold so that both BSs (\(\eta=\pm 1\)) coexist. The left (right) column corresponds to the BS with \(\eta=1\) (\(\eta=-1\)). The black (white) disks in each panel label the positions of the sites connecting to atom \(a\) (\(b\)).
Note that \(\beta=\pm 1\) corresponds to upper and lower bound states, respectively. The explicit expression of this function depends on the configuration, from which we can determine the solutions \(E_{\beta\gamma}\) (see the red dashed and blue dotted lines in Figs. 8A-10A) of the transcendental equation (41) and obtain the threshold conditions of the metabands, as shown in the following subsections. Moreover, we will show that for each configuration, the structure of the metabands in the strong coupling regime can be effectively described by a normal 1D tight-binding model.
#### VI.1.1 1D chain of separate atoms
Now we consider the case that each pair of neighboring atoms is in a separate configuration, with distance parameters \(\Delta n\) and \(\Delta m\) defined in Sec. V.1. The positions of the coupling points can be expressed as \(n_{pl}=p(\Delta n+\Delta m)+(-1)^{l}\Delta n/2\). Thus, by letting \(N_{\rm a}\to\infty\) and calculating the sum of geometric series, we end up with a compact expression of Eq. (42):
\[\Sigma_{\beta\gamma}^{(N_{\rm a}\gg 1)}(E)=\frac{1+\gamma(-\beta)^{\Delta m}e^{-\frac{\Delta m}{\lambda(E)}}}{1-\gamma(-\beta)^{\Delta n+\Delta m}e^{-\frac{\Delta n+\Delta m}{\lambda(E)}}}\tilde{\Sigma}_{\beta}(E). \tag{43}\]
Here \(\tilde{\Sigma}_{\beta}(E)\) is the self-energy of a single giant atom, described by Eq. (28). After an analysis similar to the \(N_{\rm a}=2\) case, we can obtain the threshold conditions of the metabands for \(N_{\rm a}\gg 1\) (see Appendix C.1 for details), which are summarized below.
_Threshold conditions for the lower metaband_:
(i) When \(\delta<-2J\), the lower metaband is separated from the photonic continuum. The energy \(E_{-1,1}\) (\(E_{-1,-1}\)) forms the lower (upper) border of the dressed-state metaband.
(ii) When \(\delta>-2J\), the lower metaband becomes separated from the photonic continuum when \(g>\sqrt{J(2J+\delta)/\Delta m}\); the energy \(E_{-1,1}\) (\(E_{-1,-1}\)) forms the lower (upper) border of the dressed-state metaband. When \(g<\sqrt{J(2J+\delta)/\Delta m}\), a fraction of the dressed-state band disappears into the waveguide continuum, and when \(g\to 0\) the whole metaband merges with the continuum, as shown by the red dashed and blue dotted lines in Fig. 8A.
_Threshold conditions for the upper metaband_:
(i) When \(\delta>2J\), the upper metaband is separated from the photonic continuum. If \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), the energy \(E_{1,1}\) (\(E_{1,-1}\)) forms the lower border of the dressed-state metaband, and the energy \(E_{1,-1}\) (\(E_{1,1}\)) forms the upper border.
(ii) When \(\delta<2J\), the appearance of the upper metaband depends on the parameters \(\Delta n\) and \(\Delta m\), which can be summarized as follows:
* If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), the upper metaband is totally separated from the photonic continuum when \(g>\sqrt{J(2J-\delta)/\Delta m}\), with upper border \(E_{1,-1}\) (\(E_{1,1}\)) and lower border \(E_{1,1}\) (\(E_{1,-1}\)), respectively. When \(g\) becomes smaller than this value, a fraction of the dressed-state band merges with the continuum, and when \(g\) further decreases below another threshold value, \(g<\sqrt{J(2J-\delta)/\Delta n}\), the metaband completely disappears, as shown by the red dashed and blue dotted lines in the upper row of Fig. 8A. Note that for a finite \(N_{\rm a}\), all the upper BSs disappear simultaneously at \(g=\sqrt{J(2J-\delta)/\Delta m}\), as shown by the solid lines in the upper row of Fig. 8A.
* If \(\Delta n\in\mathbb{E}^{+}\), and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{E}^{+}\)), the upper metaband is totally separated from the photonic continuum as \(g>\sqrt{J(2J-\delta)/\Delta m}\), with upper border \(E_{1,-1}\) (\(E_{1,1}\)) and lower border \(E_{1,1}\) (\(E_{1,-1}\)), respectively. When \(g\) becomes smaller than this value, a fraction of the dressed-state band merges with the continuum, and when \(g\to 0\), the metaband completely disappears, as shown by the red dashed and blue dotted lines in the lower row of Fig. 8A.
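The border energies \(E_{\beta\gamma}\) entering these threshold conditions follow from Eq. (41) by the same one-dimensional root search used for two atoms. The minimal sketch below locates the borders of the lower metaband for a separate-atom chain; the explicit forms of Eq. (28) and of \(e^{-1/\lambda(E)}\) are again assumptions consistent with the tight-binding dispersion, not quotations from the earlier sections.

```python
import numpy as np
from scipy.optimize import brentq

J, g, delta = 1.0, 2.0, 0.0
dn, dm = 1, 1                 # separate-atom chain, as in the left column of Fig. 8A

def x_of(E):
    return np.abs(E) / (2 * J) - np.sqrt(E**2 / (4 * J**2) - 1.0)

def sigma_single(E, beta):
    # Eq. (28) (assumed two-point form) for one giant atom
    return 2 * g**2 * (1 + (-beta)**dn * x_of(E)**dn) / (E * np.sqrt(1 - 4 * J**2 / E**2))

def sigma_chain(E, beta, gamma):
    # Eq. (43): self-energy at the metaband borders of a long separate-atom chain
    x = x_of(E)
    return ((1 + gamma * (-beta)**dm * x**dm)
            / (1 - gamma * (-beta)**(dn + dm) * x**(dn + dm))) * sigma_single(E, beta)

def border(beta, gamma):
    f = lambda E: E - delta - sigma_chain(E, beta, gamma)
    lo, hi = (2*J + 1e-9, 15*J) if beta == 1 else (-15*J, -2*J - 1e-9)
    grid = np.linspace(lo, hi, 40000)
    v = f(grid)
    idx = np.where(v[:-1] * v[1:] < 0)[0]
    return brentq(f, grid[idx[0]], grid[idx[0] + 1]) if len(idx) else None

borders = [border(-1, gamma) for gamma in (+1, -1)]
print("lower metaband borders:", borders, " width:", abs(borders[0] - borders[1]))
```

For \(\Delta n=\Delta m=1\) and large \(g\) the extracted width approaches the tight-binding estimate \(\Delta\omega\simeq J\) discussed below.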
Next we give some further discussion of the metaband structures in the strong coupling regime \(g\gg J\). In this regime, the photon-mediated interactions between different single-atom dressed states are dipole-dipole-like. Moreover, the ratio of nearest-neighbor coupling to next-nearest-neighbor coupling can be approximated as \((\sqrt{2}\tilde{g})^{\Delta n+\Delta m}\gg 1\). Thus the next-nearest-neighbor coupling can be neglected, and the array of dressed atoms forms a 1D tight-binding chain with nearest-neighbor hopping rate \(U_{\beta}/2\) described by Eq. (33) (assuming \(\delta=0\)). The corresponding effective Hamiltonian can be approximated as
\[H=E_{\beta}\sum_{i=1}^{N_{\rm a}}D_{\beta}^{\dagger}(i)D_{\beta}(i)+\frac{1}{2} U_{\beta}\sum_{i=1}^{N_{\rm a}-1}\left[D_{\beta}^{\dagger}(i)D_{\beta}(i+1)+{\rm H.c.}\right]. \tag{44}\]
Like the case of double giant atoms, \(D_{\beta}(i)\) are the single-atom dressed-state operators. According to the well-known results for the tight-binding model, the energy spectrum of the Hamiltonian (44) exhibits a band structure around the frequency \(E_{\beta}\simeq\beta\sqrt{2}g\) (the frequency of a single-atom dressed state) with total width \(\Delta\omega\equiv 2|U_{\beta}|\simeq(\sqrt{2}\tilde{g})^{1-\Delta m}J\). This result indicates that in the strong coupling regime, the array of dressed atoms inherits the tight-binding interaction of the cavity array [see the photonic part of Hamiltonian (1)], leading to the formation of a metaband for propagating dressed-state excitations. For \(\Delta m=1\), we have \(\Delta\omega=J\), i.e., the spectral width of the metaband for large \(g\) is one quarter of the width of the photonic band, as shown in the left column of Fig. 8A. For \(\Delta m>1\), the spectral width decreases as \(g\) increases, exhibiting a \((\Delta m-1)\)-th power inverse law \(\Delta\omega\propto 1/g^{\Delta m-1}\), as shown in the right column of Fig. 8A.
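As a quick consistency check, the spectrum of the effective chain (44) can be generated directly. The sketch below assumes that Eq. (33) has the explicit form \(U_{\beta}=\tfrac{1}{2}(-1)^{\Delta m}\beta^{\Delta m+1}(\sqrt{2}\tilde{g})^{1-\Delta m}J\), inferred from the first line of Eq. (35) and the accompanying text.

```python
import numpy as np

J, g, Na, dm = 1.0, 5.0, 10, 1
beta = -1                                  # lower dressed states
gt = g / J
E_beta = beta * np.sqrt(2) * g             # single-atom dressed-state energy
U = 0.5 * (-1)**dm * beta**(dm + 1) * (np.sqrt(2) * gt)**(1 - dm) * J   # Eq. (33), delta = 0

# Effective Hamiltonian (44): an Na x Na tridiagonal matrix in the dressed-atom basis
H = np.diag(np.full(Na, E_beta)) + 0.5 * U * (np.eye(Na, k=1) + np.eye(Na, k=-1))
levels = np.linalg.eigvalsh(H)
print("metaband width:", levels[-1] - levels[0], "-> 2|U| =", 2 * abs(U), "as Na grows")
```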
Figure 8B shows the atomic and the photonic excitation amplitudes for the lowermost and the uppermost BSs in the lower metaband. Here we choose a 1D chain of \(N_{a}=10\) separate atoms with \(\Delta n=\Delta m=1\).
The coupling strength is set to a relatively large value \(g=5J\), so that the tight-binding Hamiltonian (44) is a good approximation. One can see that the results obtained from the tight-binding approximation (the red disks in Fig. 8B) are in good agreement with the full numerical ones (the bars in Fig. 8B). In such a multi-atom BS, the photonic cloud mainly concentrates around the sites connecting to the excited atoms, with the photonic excitation amplitudes being proportional to the corresponding atomic excitation amplitudes. Note that these properties also apply to braided and nested configurations of 1D atomic arrays, as shown in the following subsections.
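The "full numerical" results referred to here come from diagonalizing the complete single-excitation Hamiltonian on a finite lattice. A minimal sketch of such a calculation is shown below; it assumes a resonator chain with nearest-neighbor hopping \(-J\) (consistent with the dispersion \(\omega_{k}=\omega_{c}-2J\cos k\), with \(\omega_{c}\) set to zero) and a coupling \(g\) at every connection point, and for simplicity places the two legs of each atom on consecutive sites, realizing \(\Delta n=\Delta m=1\).

```python
import numpy as np

J, g, delta = 1.0, 5.0, 0.0
Na, dn, dm = 10, 1, 1
N = 400                                      # number of waveguide sites (open chain), omega_c = 0

# legs of atom p at sites p*(dn+dm) and p*(dn+dm)+dn, shifted toward the middle of the lattice
legs = [(N // 2 + p * (dn + dm), N // 2 + p * (dn + dm) + dn) for p in range(Na)]

H = np.zeros((N + Na, N + Na))
for j in range(N - 1):                       # photonic part: nearest-neighbor hopping -J
    H[j, j + 1] = H[j + 1, j] = -J
for m, (n1, n2) in enumerate(legs):
    a = N + m                                # index of atom m in the single-excitation basis
    H[a, a] = delta                          # atomic frequency measured from omega_c
    for n in (n1, n2):
        H[n, a] = H[a, n] = g

vals = np.linalg.eigvalsh(H)
lower = vals[vals < -2 * J]                  # bound states below the photonic band
print(len(lower), "lower BSs, metaband width =", lower[-1] - lower[0])
```

For \(g=5J\) this yields \(N_{\rm a}=10\) lower bound states clustered around \(-\sqrt{2}g\) with a width close to \(J\), consistent with the tight-binding estimate above.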
#### VI.1.2 1D chain of braided atoms
Now we consider the case that each pair of neighboring atoms is in a braided configuration, with distance parameters \(\Delta n\) and \(\Delta m\) defined in Sec. V.2. Note that to form a 1D chain with braided structure, the condition \(\Delta n>2\Delta m\) is required. The positions of the coupling points are \(n_{pl}=p(\Delta n-\Delta m)+(-1)^{l}\Delta n/2\) for this case. After plugging this expression into Eq. (42) and calculating the sum of geometric series, one can obtain
\[\Sigma^{(N_{\rm a}\gg 1)}_{\beta\gamma}(E)=\frac{1+\gamma(-\beta)^{\Delta m}e^{-\frac{\Delta m}{\lambda(E)}}}{1-\gamma(-\beta)^{\Delta n-\Delta m}e^{-\frac{\Delta n-\Delta m}{\lambda(E)}}}\tilde{\Sigma}^{\prime}_{\beta}(E). \tag{45}\]
Here \(\tilde{\Sigma}^{\prime}_{\beta}(E)\) can be obtained by replacing \(\Delta n\) in the expression of \(\tilde{\Sigma}_{\beta}(E)\) [see Eq. (28)] with \(\Delta n-2\Delta m\).
It should be emphasized that, different from the case of the separate-atom array, the energies \(E_{\beta\gamma}\) obtained from the transcendental equation (41) are not always the borders of the metaband, due to the effects of the next-nearest-neighbor interactions. However, when \(\Delta n-3\Delta m>0\), i.e., when the distance \(\Delta m\) between a pair of nearest-neighbor atoms (in the braided configuration) is smaller than the distance \(\Delta n-2\Delta m\) between a pair of next-nearest-neighbor atoms (in the separate configuration), the additional influence of the next-nearest-neighbor interactions is relatively small. Thus \(E_{\beta\gamma}\) can still _approximately_ give the borders of the metaband for coupling strengths near the threshold values of the metaband (see Fig. 9A). In the strong coupling regime, the atomic array can be approximated as a tight-binding chain with negligible next-nearest-neighbor interactions, and the metaband is then _exactly_ bounded by the energies \(E_{\beta\gamma}\) (see Fig. 9A). After some analysis, we can obtain the (approximate) threshold conditions of the metabands (see Appendix C.2 for details). Specifically, when \(\delta<2J\) and \(\Delta n\in\mathbb{O}^{+}\), the threshold conditions of the upper metaband can be obtained from the results for the separate-atom array (see Sec. VI.1.1) by replacing \(\Delta n\) with \(\Delta n-2\Delta m\). For all the other cases, the threshold conditions are the same as those of the separate-atom array. The above results can be verified in Fig. 9A.
In the strong coupling regime \(g\gg J\), the nearest-neighbor hopping rate \(U_{\beta}/2\) is described by the first line of Eq. (35) [\(\delta=0\) is assumed; note that this expression is the same as that for separate atoms, Eq. (33)]. In addition, under the condition \(\Delta n-3\Delta m>0\), the ratio of nearest-neighbor coupling to next-nearest-neighbor coupling can be approximated as \((\sqrt{2}\tilde{g})^{\Delta n-3\Delta m}\gg 1\), thus the array of dressed atoms can also be approximately described by the tight-binding Hamiltonian (44).
Figure 8: (A) The numerically calculated BS energy levels for the case of \(N_{\text{a}}=10\) separate atoms (the solid lines) are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(\Delta m\). The red dashed (blue dotted) lines indicate the energies of the BSs with \(\gamma=1\) (\(\gamma=-1\)), which form the borders of the metaband in the limit of \(N_{\text{a}}\gg 1\). The photonic band is shown by the shaded region. The inset in each panel shows the corresponding schematic of system. For all plots \(\delta=0\) is assumed. (B) Left column: the atomic excitation amplitudes for the lowermost (uppermost) BS in the lower metaband, as shown in the upper (lower) panel. Right column: the corresponding photonic excitation amplitudes. The bars are numerical results and the red disks are approximate ones from the effective Hamiltonian (44). The parameters are \(N_{\text{a}}=10\), \(\Delta n=\Delta m=1\), \(\delta=0\), and \(g=5J\).
Consequently, the characteristics of the metaband in the strong coupling regime are the same as for the separate-atom array (summarized in Sec. VI.1.1). For example, for \(\Delta m=1\), the metaband centers around \(E_{\beta}\simeq\beta\sqrt{2}g\) with a constant spectral width \(\Delta\omega=J\) (one quarter of the width of the photonic band, see the left column of Fig. 9A). For \(\Delta m>1\), the spectral width \(\Delta\omega\propto 1/g^{\Delta m-1}\) exhibits a \((\Delta m-1)\)-th power inverse law as \(g\) increases (see the right column of Fig. 9A).
Figure 9B shows the atomic and the photonic excitation amplitudes for the lowermost and the uppermost BSs in the lower metaband. Here we choose a 1D chain of \(N_{a}=10\) braided atoms with \(\Delta n=5,\Delta m=1\) and \(g=5J\). One can see that the results obtained from tight-binding approximation (red disks) are in good agreement with the full numerical ones (bars).
#### VI.1.3 1D chain of nested atoms
Now we consider a 1D chain of \(N_{\rm a}\) nested atoms, with each pair of neighboring atoms in a nested configuration as defined in Sec. V.3. \(\Delta n\) is the distance between the two connection points of the innermost atom, and \(\Delta m\) is the distance between a pair of neighboring right (or left) connection points. The positions of the coupling points are \(n_{pl}=(-1)^{l-1}[(p-N_{\rm a}/2)\Delta m-\Delta n/2]\). When \(N_{\rm a}\gg 1\), the terms \(\exp[-|n_{01}-n_{p2}|/\lambda(E)]\) and \(\exp[-|n_{02}-n_{p1}|/\lambda(E)]\) in Eq. (42) can be ignored. Thus, after calculating the sum of the geometric series, Eq. (42) can be further written as
\[\Sigma^{(N_{\rm a}\gg 1)}_{\beta\gamma}(E)=\frac{1+\gamma(-\beta)^{\Delta m}e^{-\frac{\Delta m}{\lambda(E)}}}{1-\gamma(-\beta)^{\Delta m}e^{-\frac{\Delta m}{\lambda(E)}}}\frac{2g^{2}}{E\sqrt{1-\frac{4J^{2}}{E^{2}}}}, \tag{46}\]
showing that this system is equivalent to a 1D chain of \(N_{\rm a}\gg 1\) small atoms [30] with atom-waveguide coupling strength \(\sqrt{2}g\). After an analysis similar to the case of two separate giant atoms, we can obtain the threshold conditions of the metabands (see Appendix C.3 for details). Specifically, the threshold conditions of the lower metaband are the same as those of the separate-atom array, and the threshold conditions of the upper metaband, for both \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta n\in\mathbb{E}^{+}\), are the same as those of a separate-atom array with \(\Delta n\in\mathbb{E}^{+}\) (summarized in Sec. VI.1.1). The above results can be verified in Fig. 10A.
Next we give some further discussion of the metaband structures in the strong coupling regime \(g\gg J\). We assume that the condition \(\Delta n>\Delta m\) is satisfied so that the atoms are _approximately identical_. The ratio of nearest-neighbor coupling to next-nearest-neighbor coupling can be approximated as \((\sqrt{2}\tilde{g})^{\Delta m}\gg 1\). Thus the array of dressed atoms forms a 1D tight-binding chain with nearest-neighbor hopping rate \(U_{\beta}/2\) described by Eq. (39) (assuming \(\delta=0\)). Its corresponding energy levels form a metaband structure with central frequency \(E_{\beta}\simeq\beta\sqrt{2}g\) and total width \(\Delta\omega\equiv 2|U_{\beta}|\simeq 2(\sqrt{2}\tilde{g})^{1-\Delta m}J\). For \(\Delta m=1\), the metaband has a constant spectral width \(\Delta\omega=2J\) (one half of the width of the photonic band, see the upper panel of Fig. 10A). For \(\Delta m>1\), the spectral width exhibits a \((\Delta m-1)\)-th power inverse law, \(\Delta\omega\propto 1/g^{\Delta m-1}\), as \(g\) increases, as shown in the lower panel of Fig. 10A.
Figure 9: (A) The numerically calculated BS energy levels for the case of \(N_{\rm a}=10\) braided atoms (the solid lines) are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(\Delta m\). The red dashed (blue dotted) lines indicate the energies of the BSs with \(\gamma=1\) (\(\gamma=-1\)), which form the (or approximate) borders of the metaband in the limit of \(N_{\rm a}\gg 1\). The photonic band is shown by the shaded region. The inset in each panel shows the corresponding schematic of system. For all plots \(\delta=0\) is assumed. (B) Left column: the atomic excitation amplitudes for the lowermost (uppermost) BS in the lower metaband, as shown in the upper (lower) panel. Right column: the corresponding photonic excitation amplitudes. The bars are numerical results and the red disks are approximate ones from the effective Hamiltonian (44). The parameters are \(N_{\rm a}=10\), \(\Delta n=5\), \(\Delta m=1\), \(\delta=0\), and \(g=5J\).
Figure 10B shows the atomic and the photonic excitation amplitudes for the lowermost and the uppermost BSs in the lower metaband. Here we choose a 1D chain of \(N_{a}=10\) nested atoms with \(\Delta n=2,\Delta m=1\), and \(g=5J\). One can see that the results obtained from tight-binding approximation (red disks) are in good agreement with the full numerical ones (bars).
### Topological metaband structures for bound states
As shown in the previous subsection, a chain of identical dressed giant atoms with equal coupling strength at each connection point can be described by the normal 1D tight-binding model in the strong coupling regime. Since the photon-mediated interactions between atoms are highly tunable by designing the layout of the connection points and the strengths of the atom-resonator couplings at these points, one can construct a wide variety of coupled-spin models and produce nontrivial many-body phases, which may become an attractive avenue in quantum simulation. Here, as an example, we provide a proposal to construct a topological spin chain described by the SSH model [70], based on manipulating dressed-state excitations of a 1D chain of separate giant atoms. This 1D toy model can be used to understand some of the fundamental ideas of topological physics [71; 72; 73; 74], particularly in low-dimensional systems [75; 76; 77].
As schematically shown in Fig. 11A, a 1D chain of \(N_{a}\) atoms is coupled to the resonator array, with each pair of neighboring atoms in a separate configuration. The distance parameters \(\Delta n\) and \(\Delta m\) are defined in Sec. V.1 (in Fig. 11A, we take \(\Delta n=2\) and \(\Delta m=1\) as an example). The two atoms in a unit cell are labelled as A and B. For the A (B) atom, the coupling strengths at the left and right connection points are \(g\) (\(\mu g\)) and \(\mu g\) (\(g\)), respectively. \(\mu\) is a dimensionless positive coefficient. Again, in the regime \(g\gg J\), the interactions induced by BS photons between different atoms are dipole-dipole-like couplings. We also assume that \(\tilde{g}^{-\Delta n}\ll\mu\) and \(\delta=0\) are satisfied. Thus the strengths of the nearest-neighbor couplings are \(|U_{\beta}^{(1)}|\simeq\mu^{2}\tilde{g}^{1-\Delta m}J/(1+\mu^{2})^{(\Delta m+1)/2}\) (for hopping within the unit cell) and \(|U_{\beta}^{(2)}|\simeq\tilde{g}^{1-\Delta m}J/(1+\mu^{2})^{(\Delta m+1)/2}\) (for hopping connecting neighboring unit cells), respectively. Moreover, the ratio of next-nearest-neighbor coupling to nearest-neighbor coupling can be made much less than \(1\), so that the tight-binding approximation is valid. As a result, we obtain a topological dressed-atom array described by
Figure 10: (A) The numerically calculated BS energy levels for the case of \(N_{\mathrm{a}}=10\) nested atoms (the solid lines) are plotted as functions of the coupling strength \(g\) for different values of \(\Delta n\) and \(\Delta m\). The red dashed (blue dotted) lines indicate the energies of the BSs with \(\gamma=1\) (\(\gamma=-1\)), which form the borders of the metaband in the limit of \(N_{\mathrm{a}}\gg 1\). The photonic band is shown by the shaded region. The inset in each panel shows the corresponding schematic of system. For all plots \(\delta=0\) is assumed. (B) Left column: the atomic excitation amplitudes for the lowermost (uppermost) BS in the lower metaband, as shown in the upper (lower) panel. Right column: the corresponding photonic excitation amplitudes. The bars are numerical results and the red disks are approximate ones from the effective Hamiltonian (44). The parameters are \(N_{\mathrm{a}}=10\), \(\Delta n=2\), \(\Delta m=1\), \(\delta=0\), and \(g=5J\).
the SSH model [70]
\[H= E_{\beta}\sum_{i=1}^{N_{\rm a}}D_{\beta}^{\dagger}(i)D_{\beta}(i)+ \frac{1}{2}U_{\beta}^{(1)}\sum_{i=\rm odd}\left[D_{\beta}^{\dagger}(i)D_{\beta}( i+1)+\rm H.c.\right]\] \[+ \frac{1}{2}U_{\beta}^{(2)}\sum_{i=\rm even}\left[D_{\beta}^{ \dagger}(i)D_{\beta}(i+1)+\rm H.c.\right]. \tag{47}\]
By assuming periodic boundary conditions, one can obtain the corresponding metabands formed by the atom-photon dressed states
\[E_{\beta,\pm}(K)=E_{\beta}\pm\frac{1}{2}\sqrt{{U_{\beta}^{(1)}}^{2}+{U_{\beta}^{(2)}}^{2}+2U_{\beta}^{(1)}U_{\beta}^{(2)}\cos K}, \tag{48}\]
where \(K\in[-\pi,\pi]\) is the wave vector. From this dispersion relation, one can find that when \(\mu\neq 1\) (i.e., \(|U_{\beta}^{(1)}|\neq|U_{\beta}^{(2)}|\), note that \(|U_{\beta}^{(1)}/U_{\beta}^{(2)}|\approx\mu^{2}\) in the strong coupling regime), the spectrum is gapped and forms two symmetric bands centered about the reference frequency \(E_{\beta}\simeq\beta\sqrt{1+\mu^{2}}g\) (the BS energy for a single giant atom), with the spectrum width and band gap
\[\Delta\omega=\left|U_{\beta}^{(1)}\right|+\left|U_{\beta}^{(2)}\right|\simeq \frac{\tilde{g}^{1-\Delta m}J}{(1+\mu^{2})^{\frac{\Delta m-1}{2}}}, \tag{49a}\] \[\delta\omega=\left|U_{\beta}^{(1)}\right|-\left|U_{\beta}^{(2)}\right|\cong \frac{\left|1-\mu^{2}\right|\tilde{g}^{1-\Delta m}J}{(1+\mu^{2})^{\frac{\Delta m +1}{2}}}, \tag{49b}\]
respectively. In particular, for a setup with \(\Delta m=1\) (as shown in Fig. 11A), we have \(|U_{\beta}^{(1)}|\simeq\mu^{2}J/(1+\mu^{2})\), \(|U_{\beta}^{(2)}|\simeq J/(1+\mu^{2})\). The spectrum width becomes \(\Delta\omega\simeq J\), which is independent of \(g\). And the band gap becomes \(\delta\omega\simeq|1-\mu^{2}|J/(1+\mu^{2})\) (see the lower and upper meta-bands in the left panel of Fig. 11B). With finite systems, the value of \(\mu\) determines whether the chain ends with weak hoppings, which leads to the appearance of topologically robust edge states with energy \(E_{\beta}\). As shown by the red lines in the left panel of Fig. 11B, for \(\mu=0.5\) (i.e., \(|U_{\beta}^{(1)}|=0.2J,|U_{\beta}^{(2)}|=0.8J\)) and \(N_{\rm a}=10\), two degenerate edge states with energy \(E_{1}\) (\(E_{-1}\)) appear in the upper (lower) metaband gap. To show the influence of parameter \(\mu\) on the topology of system, we plot the BS energy levels of the lower metaband (i.e., \(\beta=-1\). Note that for the upper metaband, the result is similar.) as functions of \(\mu\) in the right panel of Fig. 11B. We can see that as \(\mu<1\) (i.e., \(|U_{-1}^{(1)}|<|U_{-1}^{(2)}|\)), there are two degenerate edge states with energy \(E_{-1}\), indicating that the system is in a topologically non-trivial phase. But, if \(\mu>1\) (i.e., \(|U_{-1}^{(1)}|>|U_{-1}^{(2)}|\)), no edge states appear in the spectrum gap. Therefore, the system is in a topologically trivial phase. In the case of \(\mu=1\) (i.e., \(|U_{-1}^{(1)}|=|U_{-1}^{(2)}|\)) the gap closes, recovering the normal 1D tight-binding model discussed in Sec. VI.1.1.
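A minimal sketch of this edge-state physics, using the strong-coupling hopping amplitudes quoted above for \(\Delta m=1\) [\(|U^{(1)}_{\beta}|\simeq\mu^{2}J/(1+\mu^{2})\), \(|U^{(2)}_{\beta}|\simeq J/(1+\mu^{2})\), \(E_{\beta}\simeq\beta\sqrt{1+\mu^{2}}\,g\)], is given below; it diagonalizes the SSH Hamiltonian (47) for an open chain and counts the in-gap states.

```python
import numpy as np

J, g, mu, Na = 1.0, 10.0, 0.5, 10
beta = -1
E_beta = beta * np.sqrt(1 + mu**2) * g      # single-atom dressed-state energy
U1 = mu**2 * J / (1 + mu**2)                # intra-cell hopping |U^(1)|, Delta m = 1
U2 = J / (1 + mu**2)                        # inter-cell hopping |U^(2)|

# SSH Hamiltonian (47) for an open chain of Na dressed atoms (hopping amplitude U/2)
H = np.diag(np.full(Na, E_beta))
for i in range(Na - 1):
    H[i, i + 1] = H[i + 1, i] = 0.5 * (U1 if i % 2 == 0 else U2)

vals = np.linalg.eigvalsh(H)
edge = np.isclose(vals, E_beta, atol=0.05 * J)   # states pinned at E_beta inside the gap
print("levels - E_beta:", np.round(vals - E_beta, 4))
print("in-gap edge states:", edge.sum(), "(expected 2 for mu < 1, 0 for mu > 1)")
```

Running the same script with \(\mu=2\) removes the in-gap states, in line with the topologically trivial phase described above.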
Figure 11C shows the atomic and the photonic excitation amplitudes for the topologically robust edge BSs in the metaband gap (indicated by the hollow circles in Fig. 11B, corresponding to \(g=10J\) and \(\mu=0.5\)). The atom number is even, with \(N_{\rm a}=10\). One can see that the two degenerate edge states are localized at the ends of the atom array, with even and odd parities, respectively. The results obtained from the approximate Hamiltonian (47)
Figure 11: (A) Sketch of a setup with separate giant atoms realizing an SSH chain. The distance parameters are set as \(\Delta n=2\) and \(\Delta m=1\), respectively. (B) Left panel: the BS energy levels are plotted as functions of the coupling strength \(g\) for the setup shown in (A). The parameters are \(N_{\rm a}=10\), \(\mu=0.5\), and \(\delta=0\). The photonic band is shown by the shaded region. Right panel: the BS energy levels are plotted as functions of \(\mu\) for the setup shown in (A). The coupling strength is set as \(g=10J\). Other parameters are the same as the left panel. (C) Left column: the upper (lower) panel shows the atomic excitation amplitudes for the edge state with even (odd) parity in the lower metaband gap, as indicated by the circles in (B), i.e., \(g=10J\) and \(\mu=0.5\). Right column: the corresponding photonic excitation amplitudes. The bars are numerical results and the red disks are approximate results from the effective Hamiltonian (47).
(red disks) are in good agreement with the full numerical ones (bars) obtained from the Hamiltonian (1). Note that for the usual SSH-type atomic chain, the neighboring atoms are directly coupled, so the eigenstates of the system, including the bulk and edge states, contain only excitations of bare atoms. However, for the present system, the dressed atoms, instead of bare atoms, are effectively coupled through photon-mediated interactions, forming collective dressed states. Thus the edge states shown in Fig. 11C exhibit the character of dressed states, including both atomic and photonic excitations. The photonic clouds accompanying the most strongly excited atoms are concentrated mainly around the left and right ends of the atom array.
## VII Single-photon scattering states
### General analytical expressions for scattering amplitudes
In this section we study the scattering properties of the propagating photons. We assume that a single photon with energy \(E=\omega_{k}=\omega_{c}-2J\cos k\), lying inside the band, propagates along the waveguide toward the atoms. The scattering eigenstate in the single-excitation manifold can be written as
\[|\psi_{\mathrm{S}}\rangle=\left(\sum_{j}\phi_{j}c_{j}^{\dagger}+\sum_{m=1}^{N_{ \mathrm{a}}}v_{m}\sigma_{m}^{\dagger}\right)|G\rangle, \tag{50}\]
where \(\phi_{j}\) is the excitation amplitude of the \(j\)th resonator, and \(v_{m}\) is the excitation amplitude of the \(m\)th atom. Substituting Eq. (50) into the eigenvalue equation \(H|\psi_{\mathrm{S}}\rangle=E|\psi_{\mathrm{S}}\rangle\) [\(H\) is the Hamiltonian of the system, described by Eq. (1)] yields the following set of equations:
\[\left(\omega_{k}-\omega_{c}\right)\phi_{n_{\mathrm{m}l}}+J\left(\phi_{n_{ \mathrm{m}l}-1}+\phi_{n_{\mathrm{m}l}+1}\right)-g_{ml}v_{m}=0, \tag{51a}\] \[\left(\omega_{k}-\Omega_{m}\right)v_{m}-\sum_{l=1}^{M_{m}}g_{ml}\phi_{n_{ \mathrm{m}l}}=0. \tag{51b}\]
Assuming that the photon is incident from the left of the waveguide, the excitation amplitude of a resonator \(j\) located between the \(s_{ml}\)th coupling point and the \((s_{ml}+1)\)th one takes the form
\[\phi_{j}=t_{s_{ml}}e^{ikj}+r_{s_{ml}}e^{-ikj}, \tag{52}\]
where \(t_{s_{ml}}\) is the transmission amplitude at the coupling point \(s_{ml}\), and \(r_{s_{ml}}\) is the reflection amplitude at the coupling point \(s_{ml}+1\). Note that the running index \(s_{ml}=1,2,\cdots N^{\prime}-1\) (\(N^{\prime}=\sum_{m=1}^{N_{\mathrm{a}}}M_{m}\) is the total number of coupling points) labeling the coupling point at the site \(n_{ml}\), is simply counting from the first coupling point (connecting to the site \(n_{11}\)) at the far left to the last one (the \(\tilde{l}\)th coupling point of the \(\tilde{m}\)th atom, connecting to the site \(n_{\tilde{m}\tilde{l}}\)) at the far right. Note that \(n_{\tilde{m}\tilde{l}}\) need not be \(n_{N_{\mathrm{a}}M_{N_{\mathrm{a}}}}\), since for giant atoms the last coupling point can belong to any of the atoms. For sites located at the left side of the first coupling point (with \(j<n_{11}\)), we have \(\phi_{j}=e^{ikj}+re^{-ikj}\). And for sites located at the right side of the last coupling point (with \(j>n_{\tilde{m}\tilde{l}}\)), we have \(\phi_{j}=te^{ikj}\). \(t\) (\(r\)) is the transmission (reflection) amplitude of the atomic array.
By using Eqs. (51a)-(52) and the continuity condition \(\phi_{n_{\mathrm{m}l}-}=\phi_{n_{\mathrm{m}l}+}\), and after some algebra, we can obtain the transmission and reflection amplitudes
\[t=1-i\frac{1}{v_{\mathrm{g}}(k)}\mathbb{G}^{\dagger}\left[\omega_{k}\mathbb{I }-\mathbb{H}^{\mathrm{(eff)}}(k)\right]^{-1}\mathbb{G}, \tag{53a}\] \[r=-i\frac{1}{v_{\mathrm{g}}(k)}\mathbb{G}^{\top}\left[\omega_{k}\mathbb{I}- \mathbb{H}^{\mathrm{(eff)}}(k)\right]^{-1}\mathbb{G}. \tag{53b}\]
Here
\[v_{\mathrm{g}}(k)=\frac{\partial\omega_{k}}{\partial k}=2J\sin k=\sqrt{4J^{2} -(\omega_{k}-\omega_{c})^{2}} \tag{54}\]
is the group velocity. \(\mathbb{I}\) is the identity matrix. \(\mathbb{G}\) takes the form \(\mathbb{G}=\left(G_{1},G_{2},\cdots G_{m},\cdots G_{N_{\mathrm{a}}}\right)^{\top}\), with elements \(G_{m}=\sum_{l=1}^{M_{m}}g_{ml}e^{ikn_{\mathrm{m}l}}\). \(\mathbb{H}^{\mathrm{(eff)}}(k)\) is the effective non-Hermitian Hamiltonian of the atom array, with elements
\[H^{\mathrm{(eff)}}_{mm^{\prime}}(k)=\Omega_{m}\delta_{mm^{\prime}}-i\frac{1}{v_{\mathrm{g}}(k)}\sum_{l=1}^{M_{m}}\sum_{l^{\prime}=1}^{M_{m^{\prime}}}g_{ml}g_{m^{\prime}l^{\prime}}e^{ik\left|n_{ml}-n_{m^{\prime}l^{\prime}}\right|}. \tag{55}\]
The off-diagonal elements of this Hamiltonian describe both coherent and dissipative interactions between atoms mediated by the waveguide modes. Note that these interactions are long-ranged, because they arise from the exchange of propagating photons between atoms. In contrast, the effective interactions between dressed atoms discussed in Secs. V and VI can be interpreted as the emitters exchanging bound photons through the bath, with an interaction range determined by the localization length of the photonic wave packet.
The scattering amplitudes described by Eqs. (53a) and (53b) are applicable to the most general setup possible, with \(N_{\mathrm{a}}\) atoms such that atom \(m\) has \(M_{m}\) connection points. When \(N_{\mathrm{a}}=1\) and the coupling points are equally spaced with identical coupling strengths, the results reduce to those obtained in Ref. [61].
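Equations (53a)-(55) translate directly into a few lines of code. The following sketch evaluates \(t\) and \(r\) for a hypothetical example layout of two braided giant atoms (legs at sites 0, 2 and 1, 3) and checks the flux conservation \(T+R=1\); the only inputs are the connection-point positions, the coupling strengths, and the atomic frequencies.

```python
import numpy as np

J = 1.0
omega_c = 0.0                       # measure energies from the resonator frequency
Omega = np.array([0.0, 0.0])        # atomic frequencies (two identical atoms)
gcoup = 0.5 * J                     # coupling strength at every connection point
legs = [np.array([0, 2]), np.array([1, 3])]   # braided layout: atom a at (0, 2), atom b at (1, 3)

def scattering(k):
    wk = omega_c - 2 * J * np.cos(k)
    vg = 2 * J * np.sin(k)                                        # Eq. (54)
    G = np.array([np.sum(gcoup * np.exp(1j * k * n)) for n in legs])
    Heff = np.zeros((2, 2), complex)                              # Eq. (55)
    for m in range(2):
        for mp in range(2):
            phases = np.exp(1j * k * np.abs(legs[m][:, None] - legs[mp][None, :]))
            Heff[m, mp] = (Omega[m] if m == mp else 0.0) - 1j / vg * gcoup**2 * phases.sum()
    X = np.linalg.solve(wk * np.eye(2) - Heff, G)
    t = 1 - 1j / vg * np.conj(G) @ X                              # Eq. (53a)
    r = -1j / vg * G @ X                                          # Eq. (53b)
    return t, r

for k in (np.pi / 3, np.pi / 2, 2 * np.pi / 3):
    t, r = scattering(k)
    print(f"k = {k:.3f}: T = {abs(t)**2:.4f}, R = {abs(r)**2:.4f}, T + R = {abs(t)**2 + abs(r)**2:.4f}")
```

At \(k=\pi/2\) this braided layout returns full transmission, consistent with the decoherence-free point discussed later in this section.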
### Scattering spectra for double giant atoms
Here we focus on wQED structures with two two-level giant atoms, where each atom couples to the 1D waveguide through two connection points. For this case, from Eqs. (53a) and (53b) we can obtain the explicit expressions of the transmission and reflection amplitudes
\[t=\frac{\prod_{m}\left\{i\delta_{k}^{(m)}-i\frac{2}{v_{\rm g}(k)}g_{m1}g_{m2}\sin[k\left(n_{m2}-n_{m1}\right)]\right\}-A(k)}{\prod_{m}\left(i\delta_{k}^{(m)}-\frac{1}{v_{\rm g}(k)}\sum_{l,l^{\prime}}g_{ml}g_{ml^{\prime}}e^{ik|n_{ml}-n_{ml^{\prime}}|}\right)-\left(\frac{1}{v_{\rm g}(k)}\sum_{l,l^{\prime}}g_{al}g_{bl^{\prime}}e^{ik|n_{al}-n_{bl^{\prime}}|}\right)^{2}}, \tag{56a}\] \[r=\frac{\left[\frac{1}{v_{\rm g}(k)}\left(i\delta_{k}^{(a)}-\frac{1}{v_{\rm g}(k)}\sum_{l,l^{\prime}}g_{al}g_{al^{\prime}}e^{ik|n_{al}-n_{al^{\prime}}|}\right)\sum_{l,l^{\prime}}g_{bl}g_{bl^{\prime}}e^{ik(n_{bl}+n_{bl^{\prime}})}+(a\leftrightarrow b)\right]+B(k)}{\prod_{m}\left(i\delta_{k}^{(m)}-\frac{1}{v_{\rm g}(k)}\sum_{l,l^{\prime}}g_{ml}g_{ml^{\prime}}e^{ik|n_{ml}-n_{ml^{\prime}}|}\right)-\left(\frac{1}{v_{\rm g}(k)}\sum_{l,l^{\prime}}g_{al}g_{bl^{\prime}}e^{ik|n_{al}-n_{bl^{\prime}}|}\right)^{2}}, \tag{56b}\]
with
\[A(k)=\frac{1}{v_{\rm g}^{2}(k)}\sum_{l,l^{\prime},f,f^{\prime}}g_{al}g_{bl^{\prime}}g_{af}\,g_{bf^{\prime}}\left(e^{ik|n_{al}-n_{bl^{\prime}}|}-e^{ik(n_{al}-n_{bl^{\prime}})}\right)\left(e^{ik|n_{af}-n_{bf^{\prime}}|}-e^{ik\left(n_{bf^{\prime}}-n_{af}\right)}\right), \tag{57a}\] \[B(k)=\frac{2}{v_{\rm g}^{2}(k)}\sum_{l,l^{\prime},f,f^{\prime}}g_{al}g_{bl^{\prime}}g_{af}\,g_{bf^{\prime}}e^{ik(n_{al}+n_{bl^{\prime}})}e^{ik|n_{af}-n_{bf^{\prime}}|}. \tag{57b}\]
One can further define the transmittance \(T=|t|^{2}\) and the reflectance \(R=|r|^{2}\). Note that \(T+R=1\) is fulfilled due to conservation of the photon number. Here the indices \(m=a,b\) and \(l,l^{\prime},f,f^{\prime}=1,2\) are used to label the atoms and the coupling points, respectively. \(\delta_{k}^{(m)}=\omega_{k}-\Omega_{m}\) is the detuning between the incident photon and atom \(m\). Note that Eqs. (56a) and (56b) are general expressions applicable to all three basic configurations of double-giant-atom structures (i.e., separate, braided, and nested atoms). In what follows, we consider the maximally symmetric case, with equal atomic frequencies \(\Omega_{a}=\Omega_{b}=\Omega\) (i.e., \(\delta_{k}^{(a)}=\delta_{k}^{(b)}=\omega_{k}-\Omega\equiv\delta_{k}\)), equal coupling strength at each coupling point, \(g_{ml}=g\), and equal distance between neighboring points, labeled as \(\Delta m\) (i.e., \(\Delta n=\Delta m\) for the separate and nested configurations, and \(\Delta n=2\Delta m\) for the braided configuration; \(\Delta n\) and \(\Delta m\) are defined in Sec. V, see Fig. 4).
We first investigate the weak coupling regime \(g\ll J\), in which one can make the linear approximation and neglect the time-delay effect. Firstly, in this regime only the photons in the vicinity of atomic transition frequencies can effectively interact with the atoms. Thus the group velocity \(v_{\rm g}(k)\) can be approximated as \(v_{\rm g}(k_{\Omega})\), with \(k_{\Omega}=\arccos[(\omega_{c}-\Omega)/(2J)]\) being the wave vector corresponding to atomic frequency \(\Omega\). In this regime, the decay rate at each coupling point can be defined as \(\gamma=2g^{2}/v_{\rm g}(k_{\Omega})\). If \(\gamma\) is much less than the spacing between the band edge and the atomic frequency, the dispersion relation of the modes around the resonance frequency of the atom can be approximated as a linearized one \(\delta_{k}\simeq v_{\rm g}(k_{\Omega})(k-k_{\Omega})\). In addition, when the time delay \(\tau=\Delta m/v_{\rm g}(k_{\Omega})\) for a photon with frequency \(\Omega\) to travel between neighboring coupling points is much smaller than the relaxation time \(1/\gamma\) (i.e., \(\gamma\tau\ll 1\)), the setup works in the Markovian regime. Equivalently, this condition can be written as \(g/J\ll\sqrt{2/\Delta m}\sin(k_{\Omega})\), which can be easily satisfied when \(g\ll J\) and \(k_{\Omega}\sim 1\) (i.e., the atomic
Figure 12: Reflectance \(R\) for two giant atoms as functions of detuning \(\delta_{k}\) for different values of \(\Delta m\) and \(g\). (A): two separate atoms; (B): two braided atoms; (C): two nested atoms. In each panel, the atomic frequency is chosen as \(\Omega=\omega_{c}\).
frequency is not near the band edge). The condition \(\gamma\tau\ll 1\) leads to \(\delta_{k}\tau\ll 1\) because the frequency range of interest and the decay rate are of the same order of magnitude. The phase accumulated by a propagating photon traveling a distance of \(\Delta m\) can be approximated as \(k\Delta m\simeq(k_{\Omega}+\frac{\delta_{k}}{v_{\text{g}}(k_{\Omega})})\Delta m =k_{\Omega}\Delta m+\delta_{k}\tau\simeq k_{\Omega}\Delta m\equiv\phi\), i.e., one can replace \(k\) appearing in the phase factors in Eqs. (56a) and (56b) by \(k_{\Omega}\), which means that the phase-accumulation effects for detuned photons can be neglected. The spectra for this case are determined by some characteristic quantities [45; 51], including the Lamb shifts \(\delta_{\text{L},s}\) (\(s=a,b\)), the individual decays \(\Gamma_{s}\) (\(s=a,b\)), the exchange interaction \(G_{ab}\), and the collective decay \(\Gamma_{ab}\). Specifically, (i) for the separate configuration, we have \(\delta_{\text{L},a}=\delta_{\text{L},b}=\gamma\sin\phi\), \(\Gamma_{a}=\Gamma_{b}=2\gamma(1+\cos\phi)\), \(G_{ab}=\gamma(\sin\phi+2\sin 2\phi+\sin 3\phi)/2\) and \(\Gamma_{ab}=\gamma(\cos\phi+2\cos 2\phi+\cos 3\phi)\); (ii) for the braided configuration, we have \(\delta_{\text{L},a}=\delta_{\text{L},b}=\gamma\sin 2\phi\), \(\Gamma_{a}=\Gamma_{b}=2\gamma(1+\cos 2\phi)\), \(G_{ab}=\gamma(3\sin\phi+\sin 3\phi)/2\) and \(\Gamma_{ab}=\gamma(3\cos\phi+\cos 3\phi)\); (iii) for the nested configuration, we have \(\delta_{\text{L},a}=\gamma\sin 3\phi,\delta_{\text{L},b}=\gamma\sin\phi\), \(\Gamma_{a}=2\gamma(1+\cos 3\phi)\), \(\Gamma_{b}=2\gamma(1+\cos\phi)\), \(G_{ab}=\gamma(\sin\phi+\sin 2\phi)\) and \(\Gamma_{ab}=2\gamma(\cos\phi+\cos 2\phi)\). Here we assume that \(\Omega=\omega_{c}\) (i.e., \(k_{\Omega}=\pi/2\)), thus the phase delay should take _discrete_ values \(\phi=\Delta m\pi/2\). The corresponding reflection spectra in the weak coupling regime (with \(g=0.01J\)) for the three configurations are shown in the left columns of Figs. 12A-12C. First, when the individual decays vanish (with \(\Gamma_{a}=\Gamma_{b}=0\)), the atoms decouple from the waveguide, resulting in zero reflection (total transmission) for all \(\delta_{k}\) [see Figs. 12(a2), 12(b1) and 12(c2)]. In addition, for all the Lorentzian spectra in the left columns of Figs. 12A-12C, the linewidth is twice as large as the individual decay of a single giant atom, which is a clear signature of superradiance. For the nested configuration, when \(\Delta m=2l-1\) (\(l\in\mathbb{N}^{+}\)), the reflection spectrum exhibits a double-peak structure with a dip at \(\delta_{k}=(-1)^{l+1}\gamma\) due to destructive interference [see Fig. 12(c1)].
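These closed-form expressions are easy to tabulate. The short sketch below evaluates the individual decay, the exchange interaction, and the collective decay for the braided configuration at \(\Omega=\omega_{c}\), making the decoherence-free point (\(\Gamma_{a}=\Gamma_{b}=0\), \(|G_{ab}|=\gamma\)) at odd \(\Delta m\) explicit.

```python
import numpy as np

J, g = 1.0, 0.01
Omega, omega_c = 0.0, 0.0
k0 = np.arccos((omega_c - Omega) / (2 * J))   # wave vector at the atomic frequency (pi/2 here)
vg = 2 * J * np.sin(k0)
gamma = 2 * g**2 / vg                          # decay rate per connection point

for dm in (1, 2, 3, 4):
    phi = k0 * dm                              # phase delay between neighboring coupling points
    Gamma = 2 * gamma * (1 + np.cos(2 * phi))                   # individual decay (braided)
    G_ab = gamma * (3 * np.sin(phi) + np.sin(3 * phi)) / 2      # exchange interaction
    Gamma_ab = gamma * (3 * np.cos(phi) + np.cos(3 * phi))      # collective decay
    print(f"dm={dm}: Gamma/gamma={Gamma/gamma:.2f}, G_ab/gamma={G_ab/gamma:.2f}, "
          f"Gamma_ab/gamma={Gamma_ab/gamma:.2f}")
```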
Note that in the long-wavelength regime with \(k\ll 1\), the phase delay \(\phi\) becomes _continuous_. If \(g/J\ll k_{\Omega}^{3/2}\) is further satisfied, the atomic linewidth is much smaller than the separation between the atomic frequency and the band edge, so that the linear approximation remains valid; in this case the results reduce to those obtained in Ref. [51].
Now we investigate the spectra in the strong coupling regime \(g\sim J\). First, in this regime the influence of the nonlinear dispersion must be considered, i.e., the group velocity \(v_{\text{g}}(k)\) cannot be replaced by \(v_{\text{g}}(k_{\Omega})\). Moreover, the threshold condition for non-Markovianity, \(\gamma\tau\sim 1\) [or equivalently, \(g/J\sim\sqrt{2/\Delta m}\sin(k_{\Omega})\)], is also easily satisfied. For instance, when \(\Omega=\omega_{c}\) (corresponding to \(k_{\Omega}=\pi/2\)) and \(g\sim J\), this condition is fulfilled even for small \(\Delta m\) (on the contrary, for small \(g\), a large \(\Delta m\) is required to obtain non-Markovianity). Thus the wave vector \(k\) appearing in the phase factors can also not be replaced by \(k_{\Omega}\). As a result, the spectra exhibit richer structures due to the influence of the nonlinear dispersion and the phase-accumulation effect for detuned photons, as shown in the right columns of Figs. 12A-12C. In particular, the spectra around the atomic frequency deviate from the weak-coupling line shape, and in the large-detuning regime some additional reflection peaks and dips appear. In addition, the spectra show \(R=1\) at both band edges for symmetric spectra (or \(R=1\) at the lower band edge and \(R=0\) at the upper one for asymmetric spectra) due to band-edge effects.
Interestingly, the spectra in the non-Markovian regime can be used to demonstrate the decoherence-free interaction [45] between two braided atoms. For this configuration, when \(\Omega=\omega_{c}\) and \(\Delta m=2l+1\) (\(l\in\mathbb{N}\)), the phase delay accumulated by the resonant photons can be approximated as \(\pi(2l+1)/2\), resulting in vanishing individual decays and a nonzero exchange interaction \(|G_{ab}|=\gamma=g^{2}/J\) between the atoms. When the coupling strength is weak, this kind of decoherence-free interaction cannot be probed by the photon scattering spectra [see Fig. 12(b1)], because the atoms are totally decoupled from the waveguide. However, in the non-Markovian regime, the phase accumulated by the detuned photons around \(\omega_{c}\) is not exactly \(\pi(2l+1)/2\). Thus each atom acquires a tiny nonzero individual decay while maintaining an exchange interaction \(|G_{ab}|\simeq g^{2}/J\). As a result, one obtains a double-peak structure, with peak separation \(2g^{2}/J\), characterizing the nearly decoherence-free interaction, as shown in Fig. 12(b3).
## VIII Conclusion
In summary, we analyze in detail the bound and the scattering states in the single-excitation subspace of a system of multiple giant atoms coupled to a common structured photonic bath. The most general analytical expressions possible for these states and the corresponding energy spectra are obtained. Based on these, we obtain some essential properties of these states that could be tested in near-future experiments. These results reflect unconventional light-matter interactions when giant-atom systems are coupled to a structured environment. Specifically, the threshold conditions for the appearance of the BSs and the photon-mediated interactions between atoms for different configurations are analyzed. For large atom number and strong atom-photon coupling, the BSs in the photonic band gap can form different types of metaband structures (e.g., SSH-type energy band with nontrivial topological properties), depending on the arrangement of the coupling points. These structures can be explained in terms of photon-mediated interactions between atoms. These results can serve as a starting point to further explore other non-trivial many-body phases, making the system a useful platform for quantum simulations. Besides, we study the scattering behavior, and find that in the weak coupling regime, the spectra are mainly influenced by interference effects between coupling points, whereas in the strong coupling regime other factors such as the nonlinear dispersion, the band-edge effect, and the detuning-dependent phase-accumulation (non-Markovian) effect should be taken into account and can lead to unconventional spectral structures.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61871333 and No. 12147208.
## Appendix A Bound-state conditions for a single giant atom
Here we consider the case that the distance between neighboring coupling points is a constant \(\Delta n\) and the coupling strengths are all the same, labeled as \(g\). From Eq. (21), we can obtain the self-energy for this case
\[\Sigma_{\beta}(E)=\frac{g^{2}\sum_{l,l^{\prime}=0}^{N_{c}-1}(-\beta)^{|l-l^{ \prime}|\Delta n}e^{-\frac{|l-l^{\prime}|\Delta n}{\lambda(E)}}}{E\sqrt{1- \frac{4J^{2}}{E^{2}}}}. \tag{100}\]
The solutions of Eq. (20) can be determined by the intersection points of the curves \(y=E-\delta\) and \(y=\Sigma_{\beta}(E)\). We first consider the BS below the continuum, i.e., \(\beta=-1,\ E<-2J\). For this case, \(\Sigma_{-1}(E)\) monotonically decreases with \(E\) and \(\Sigma_{-1}(-2J)\) diverges to \(-\infty\). Thus the value taken by the linear function \(y=E-\delta\) at \(E=-2J\) always lies above \(\Sigma_{-1}(-2J)\), so a single BS of energy \(E_{-1}<-2J\) certainly occurs.
Now we consider the BS above the continuum, i.e., \(\beta=1,\ E>2J\). In this region, \(\Sigma_{1}(E)\) still monotonically decreases with \(E\), thus if the value taken by the function \(y=E-\delta\) at \(E=2J\) is smaller than \(\Sigma_{1}(2J)\), Eq. (20) has a solution in the region \(E>2J\). This condition thus explicitly reads \(2J-\delta<\Sigma_{1}(2J)\). Based on this, we can further obtain the threshold condition for different parameters: (i) if \(N_{c}\in\mathbb{B}^{+}\) and \(\Delta n\in\mathbb{O}^{+}\), the limit value of the self-energy at the band edge can be calculated as \(\Sigma_{1}(2J)=N_{c}\Delta ng^{2}/(2J)\). Thus for \(\delta<2J\), an upper BS of energy \(E_{1}>2J\) exists when \(g>\sqrt{2J(2J-\delta)/(N_{c}\Delta n)}\). Instead, for \(\delta>2J\), an upper BS of energy \(E_{1}>2J\) always exists since \(2J-\delta\) is negative while \(\Sigma_{1}(2J)\) is positive anyway. (ii) If \(N_{c}\) and \(\Delta n\) take other values, \(E_{1}\) always exists since \(\Sigma_{1}(2J)\) diverges to \(+\infty\).
In summary, the lower BS always exists. If \(N_{c}\in\mathbb{B}^{+}\), \(\Delta n\in\mathbb{O}^{+}\), and \(\delta<2J\), the upper BS exists when \(g>\sqrt{2J(2J-\delta)/(N_{c}\Delta n)}\), and for other parameters, the upper BS always exists.
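To make the graphical solution described above concrete, the following sketch (ours, not the authors') locates the bound-state energies of a single giant atom by solving \(E-\delta=\Sigma_{\beta}(E)\) numerically, using the self-energy expression given in this appendix. The evanescent-length relation \(1/\lambda(E)=\operatorname{arccosh}(|E|/2J)\) is our assumption for the cosine band (the precise definition of \(\lambda(E)\) appears earlier in the text), and the parameter values are arbitrary examples chosen so that both bound states exist.

```python
import numpy as np
from scipy.optimize import brentq

J, g, delta = 1.0, 0.3, 0.0   # hopping, coupling, detuning of the atom from the band center
Nc, dn = 4, 2                 # number of coupling points, spacing Delta_n

def self_energy(E, beta):
    """Self-energy of the single giant atom (appendix expression) for |E| > 2J.
    We assume 1/lambda(E) = arccosh(|E|/(2J)) for the cosine band."""
    inv_lam = np.arccosh(abs(E) / (2 * J))
    sites = np.arange(Nc)
    d = np.abs(sites[:, None] - sites[None, :]) * dn
    num = g**2 * np.sum((-beta) ** d * np.exp(-d * inv_lam))
    return num / (E * np.sqrt(1 - 4 * J**2 / E**2))

def bound_state(beta):
    """Root of E - delta = Sigma_beta(E) just outside the band edge at beta*2J."""
    f = lambda E: E - delta - self_energy(E, beta)
    a, b = (2 * J + 1e-9, 12 * J) if beta > 0 else (-12 * J, -2 * J - 1e-9)
    return brentq(f, a, b)

print("lower BS:", bound_state(-1), " upper BS:", bound_state(+1))
```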
## Appendix B Bound-state conditions for double giant atoms
### Two separate giant atoms
The solutions of Eq. (30) can be determined by the intersection points of the curves \(y=E-\delta\) and \(y=\Sigma_{\beta\alpha}(E)\). Note that \(\Sigma_{1\alpha}(E)\) [\(\Sigma_{-1\alpha}(E)\)] is positive (negative) and monotonically decreases with \(E\) in the domain \(E>2J\) (\(E<-2J\)). Thus, there are at most four solutions to Eq. (30). The existence of an upper (lower) BS with parity labeled by \(\alpha=\pm 1\) requires that the value taken by the linear function \(y=E-\delta\) at \(E=2J\) (\(E=-2J\)) always lies below (above) the limit value of the energy-correction function at the band edge \(\Sigma_{1\alpha}(2J)\) [\(\Sigma_{-1\alpha}(-2J)\)]. This condition thus explicitly reads \(2J-\delta<\Sigma_{1\alpha}(2J)\) [\(-2J-\delta>\Sigma_{-1\alpha}(-2J)\)] for upper (lower) BSs. Clearly, if \(\Sigma_{1,\alpha}(2J)\) [\(\Sigma_{-1,\alpha}(-2J)\)] diverges to \(+\infty\) (\(-\infty\)), it is always fulfilled, thus an \(\alpha\)-parity BS of energy \(E_{1,\alpha}>2J\) (\(E_{-1,\alpha}<-2J\)) certainly occurs for any value of \(g\). Instead, if \(\Sigma_{1,\alpha}(2J)\) [\(\Sigma_{-1,\alpha}(-2J)\)] is finite, the appearance of the BSs depends on the parameter \(\delta\). Specifically, for \(\delta<2J\) (\(\delta>-2J\)), by using the condition \(2J-\delta<\Sigma_{1,\alpha}(2J)\) [\(-2J-\delta>\Sigma_{-1,\alpha}(-2J)\)] we can obtain a threshold coupling strength, beyond which the BS of energy \(E_{1,\alpha}>2J\) (\(E_{-1,\alpha}<-2J\)) exists. Instead, for \(\delta>2J\) (\(\delta<-2J\)), this condition is always fulfilled since \(2J-\delta\) (\(-2J-\delta\)) is negative (positive) while \(\Sigma_{1,\alpha}(2J)\) [\(\Sigma_{-1,\alpha}(-2J)\)] is positive (negative) anyway, i.e., an \(\alpha\)-parity BS of energy \(E_{1,\alpha}>2J\) (\(E_{-1,\alpha}<-2J\)) always exists.
Based on the general analysis given above, we further discuss in detail the threshold conditions of the BSs under different parameters. We first discuss _the BSs below the continuum_, labeled by \(\beta=-1\) and \(\alpha=\pm 1\). From Eq. (29), one can see that \(\Sigma_{ab}^{(-1)}(E)<0\) and therefore \(\Sigma_{-1,1}(E)<\Sigma_{-1,-1}(E)\) [see Eq. (31)] in the domain \(E<-2J\). In addition, the limit values of these energy-correction functions at the lower band edge are \(\Sigma_{-1,1}(-2J)=-\infty\) and \(\Sigma_{-1,-1}(-2J)=-(\Delta n+2\Delta m)g^{2}/J\), respectively. Based on the analysis in the first paragraph in this subsection, an even-parity BS of energy \(E_{-1,1}<-2J\) always occurs. While the threshold condition of another BS with odd-parity and higher energy depends on the parameter \(\delta\). Specifically, for \(\delta>-2J\), an odd-parity BS of energy \(E_{-1,-1}<-2J\) occurs when \(g>\sqrt{J(2J+\delta)/(\Delta n+2\Delta m)}\). Instead, for \(\delta<-2J\), an odd-parity BS of energy \(E_{-1,-1}<-2J\) always exists.
Next we analyze _the BSs above the continuum_, labeled by \(\beta=1\) and \(\alpha=\pm 1\). The corresponding threshold conditions are derived as follows:
(i) If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), we have \(\Sigma_{ab}^{(1)}(E)<0\) [\(\Sigma_{ab}^{(1)}(E)>0\)] and therefore \(\Sigma_{1,1}(E)<\Sigma_{1,-1}(E)\) [\(\Sigma_{1,1}(E)>\Sigma_{1,-1}(E)\)] in the domain \(E>2J\). In addition, we have \(\Sigma_{1,1}(2J)=\Sigma_{1,-1}(2J)=\Delta ng^{2}/J\). Thus, for \(\delta<2J\), both the even- and the odd-parity BSs of energy \(E_{1,\pm 1}>2J\) occur when \(g>\sqrt{J(2J-\delta)/\Delta n}\). And for \(\delta>2J\), the even- and the odd-parity BSs of energy \(E_{1,\pm 1}>2J\) always exist. In both cases, the energy of the odd BS is higher (lower) than that of the even one.
(ii) If \(\Delta n\in\mathbb{B}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), we can see from Eqs. (29) and (31) that \(\Sigma_{ab}^{(1)}(E)<0\) [\(\Sigma_{ab}^{(1)}(E)>0\)] and therefore \(\Sigma_{1,1}(E)<\Sigma_{1,-1}(E)\) [\(\Sigma_{1,1}(E)>\Sigma_{1,-1}(E)\)] in the domain \(E>2J\). Moreover, at the upper band edge, these functions take the values \(\Sigma_{1,1}(2J)=(\Delta n+2\Delta m)g^{2}/J\) [\(\Sigma_{1,1}(2J)=+\infty\)] and \(\Sigma_{1,-1}(2J)=+\infty\) [\(\Sigma_{1,-1}(2J)=(\Delta n+2\Delta m)g^{2}/J\)], respectively. Thus an odd-parity (even-parity) BS of energy \(E_{1,-1}>2J\) (\(E_{1,1}>2J\)) certainly occurs. While the threshold condition of another BS with even-parity (odd-parity) and lower energy depends on the parameter \(\delta\). Specifically, for \(\delta<2J\), an even-parity (odd-parity) BS of energy \(E_{1,1}>2J\) (\(E_{1,-1}>2J\)) occurs when \(g>\sqrt{J(2J-\delta)/(\Delta n+2\Delta m)}\). Instead, for \(\delta>2J\), an even-parity (odd-parity) BS of energy \(E_{1,1}>2J\) (\(E_{1,-1}>2J\)) always exists.
The results derived in this subsection are summarized in Sec. V.1.
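For convenience, the case analysis of this subsection can be condensed into a short, purely illustrative script (our own summary; the parameter values are hypothetical) that reports which of the four bound states exist for a given set of parameters:

```python
import numpy as np

def separate_atoms_bound_states(dn, dm, delta, g, J=1.0):
    """Which of the four bound states exist for two separate giant atoms,
    following the threshold conditions derived in this subsection."""
    bs = {}
    # Below the band: the even-parity state always exists.
    bs["lower, even"] = True
    bs["lower, odd"] = (delta < -2 * J) or (g > np.sqrt(J * (2 * J + delta) / (dn + 2 * dm)))
    # Above the band.
    if dn % 2 == 1:   # Delta_n odd: a common threshold for both parities
        exists = (delta > 2 * J) or (g > np.sqrt(J * (2 * J - delta) / dn))
        bs["upper, even"] = bs["upper, odd"] = exists
    else:             # Delta_n even: one parity is threshold-less
        free = "upper, odd" if dm % 2 == 1 else "upper, even"
        other = "upper, even" if dm % 2 == 1 else "upper, odd"
        bs[free] = True
        bs[other] = (delta > 2 * J) or (g > np.sqrt(J * (2 * J - delta) / (dn + 2 * dm)))
    return bs

print(separate_atoms_bound_states(dn=3, dm=2, delta=0.0, g=0.5))
```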
### Bound-state conditions for two braided atoms
The analysis in the first paragraph of Appendix B.1 is also applicable to the present case of two braided atoms.
Based on this, we first discuss the _BSs below the continuum_ with \(\beta=-1\) and \(\alpha=\pm 1\). From Eq. (34), we have \(\Sigma_{ab}^{(-1)}(E)<0\) and therefore \(\Sigma_{-1,1}(E)<\Sigma_{-1,-1}(E)\) [see Eq. (31)] in the domain \(E<-2J\). In addition, the limit values of these energy-correction functions at the lower band edge are \(\Sigma_{-1,1}(-2J)=-\infty\) and \(\Sigma_{-1,-1}(-2J)=-(\Delta n-\Delta m)g^{2}/J\), respectively. Thus, an even-parity BS of energy \(E_{-1,1}<-2J\) certainly occurs. While the threshold condition of another BS with odd-parity and higher energy depends on the parameter \(\delta\). Specifically, for \(\delta>-2J\), an odd-parity BS of energy \(E_{-1,-1}<-2J\) occurs when \(g>\sqrt{J(2J+\delta)/(\Delta n-\Delta m)}\). Instead, for \(\delta<-2J\), an odd-parity BS of energy \(E_{-1,-1}<-2J\) always exists.
Next we analyze _the BSs above the continuum_, labeled by \(\beta=1\) and \(\alpha=\pm 1\).
(i) If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), we have \(\Sigma_{1,\pm 1}(2J)=(\Delta n\pm\Delta m)g^{2}/J\) [\(\Sigma_{1,\pm 1}(2J)=(\Delta n\mp\Delta m)g^{2}/J\)]. In addition, one can prove from Eqs. (31) and (34) that when \(\Delta n<2\Delta m\), \(\Sigma_{ab}^{(1)}(E)>0\) [\(\Sigma_{ab}^{(1)}(E)<0\)] is satisfied, and therefore \(\Sigma_{1,1}(E)>\Sigma_{1,-1}(E)\) [\(\Sigma_{1,1}(E)<\Sigma_{1,-1}(E)\)] is fulfilled in the domain \(E>2J\). And when \(\Delta n>2\Delta m\), one can prove that the transcendental equation \(\Sigma_{ab}^{(1)}(E)=0\), or equivalently \(\exp[\frac{\Delta n-2\Delta m}{\lambda(E)}]+\exp[-\frac{\Delta n}{\lambda(E)}]-2=0\), has a solution \(E=E_{c}\) in the domain \(E>2J\), and thereby the curves \(y=\Sigma_{1,1}(E)\) and \(y=\Sigma_{1,-1}(E)\) have an intersection point \(P[E_{c},\tilde{\Sigma}_{1}(E_{c})]\). Moreover, in the domain \(E>E_{c}\), \(\Sigma_{ab}^{(1)}(E)<0\) [\(\Sigma_{ab}^{(1)}(E)>0\)], which means \(\Sigma_{1,1}(E)<\Sigma_{1,-1}(E)\) [\(\Sigma_{1,1}(E)>\Sigma_{1,-1}(E)\)] is fulfilled for \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), and if \(2J<E<E_{c}\) the opposite results can be obtained. Note that the position of point \(P\) depends on the parameters \(\delta\) and \(g\). Specifically, for \(\delta>E_{c}\), the point \(P\) always lies above the line \(y=E-\delta\) since \(E_{c}-\delta\) is negative while \(\tilde{\Sigma}_{1}(E_{c})\) is positive anyway. Instead, for \(\delta<E_{c}\), if the coupling strength takes the value
\[g_{c}=\sqrt{\frac{(E_{c}-\delta)\sqrt{E_{c}^{2}-4J^{2}}}{2\left(1-e^{-\frac{\Delta n}{\lambda(E_{c})}}\right)}}, \tag{35}\]
we can obtain from Eq. (28) that \(E_{c}-\delta=\tilde{\Sigma}_{1}(E_{c})\), i.e., the intersection point \(P\) is located on the line \(y=E-\delta\), while the point \(P\) lies above \(y=E-\delta\) if \(g>g_{c}\), and below \(y=E-\delta\) if \(g<g_{c}\).
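As a numerical illustration (not from the paper), the critical coupling \(g_{c}\) can be obtained by first solving the transcendental equation above for \(E_{c}\) and then evaluating the expression for \(g_{c}\). As in the earlier sketch, the relation \(1/\lambda(E)=\operatorname{arccosh}(E/2J)\) is our assumption for the cosine band, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

J, delta = 1.0, 0.0
dn, dm = 5, 1   # braided configuration with Delta_n > 2*Delta_m

# Assumed evanescent length of the cosine band (see the note above).
inv_lam = lambda E: np.arccosh(E / (2 * J))

# Sigma_ab^(1)(E) = 0  <=>  exp[(dn - 2*dm)/lambda] + exp[-dn/lambda] - 2 = 0
h = lambda E: np.exp((dn - 2 * dm) * inv_lam(E)) + np.exp(-dn * inv_lam(E)) - 2
E_c = brentq(h, 2 * J + 1e-9, 10 * J)

# Critical coupling at which the two upper bound states are degenerate (expression above).
g_c = np.sqrt((E_c - delta) * np.sqrt(E_c**2 - 4 * J**2)
              / (2 * (1 - np.exp(-dn * inv_lam(E_c)))))
print(f"E_c = {E_c:.4f} J,  g_c = {g_c:.4f} J")
```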
Based on the geometrical picture described above, we first summarize the threshold conditions of the upper BSs when \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), and \(\Delta n<2\Delta m\) are fulfilled. For \(\delta<2J\), two BSs with parities labeled by \(\alpha=\pm 1\) and energy \(E_{1,\pm 1}>2J\) occur when \(g>\sqrt{J(2J-\delta)/(\Delta n\pm\Delta m)}\) [\(g>\sqrt{J(2J-\delta)/(\Delta n\mp\Delta m)}\)], and \(E_{1,1}>E_{1,-1}\) (\(E_{1,1}<E_{1,-1}\)) if they coexist. For \(\delta>2J\), there always exist two upper BSs with opposite parities, with \(E_{1,1}>E_{1,-1}\) (\(E_{1,1}<E_{1,-1}\)), for all \(g\).
Next we consider the case of \(\Delta n\in\mathbb{O}^{+}\), \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), and \(\Delta n>2\Delta m\). Firstly, for \(\delta<2J\), the threshold conditions for the upper BSs are the same as in the case of \(\Delta n<2\Delta m\). Secondly, for \(2J<\delta<E_{c}\), both the even- and the odd-parity BSs of energy \(E_{1,\pm 1}>2J\) exist for any value of \(g\). In these two cases, i.e., for \(\delta<E_{c}\), the two BSs are degenerate at \(g=g_{c}\), with \(E_{1,1}=E_{1,-1}=E_{c}\). When \(g<g_{c}\), the energies of the BSs, if they coexist, satisfy \(E_{1,-1}<E_{1,1}<E_{c}\) (\(E_{1,1}<E_{1,-1}<E_{c}\)), and when \(g>g_{c}\), \(E_{1,-1}>E_{1,1}>E_{c}\) (\(E_{1,1}>E_{1,-1}>E_{c}\)) is fulfilled. Finally, for \(\delta>E_{c}\), both the even- and the odd-parity BSs always exist in the domain \(E>E_{c}\), with \(E_{1,1}<E_{1,-1}\) (\(E_{1,1}>E_{1,-1}\)).
(ii) If \(\Delta n\in\mathbb{B}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), we can see from Eqs. (31) and (34) that \(\Sigma_{ab}^{(1)}(E)<0\) [\(\Sigma_{ab}^{(1)}(E)>0\)] and thereby \(\Sigma_{1,1}(E)<\Sigma_{1,-1}(E)\) [\(\Sigma_{1,1}(E)>\Sigma_{1,-1}(E)\)] in the domain \(E>2J\). Moreover, at the upper band edge, these functions take the values \(\Sigma_{1,1}(2J)=(\Delta n-\Delta m)g^{2}/J\) [\(\Sigma_{1,1}(2J)=+\infty\)] and \(\Sigma_{1,-1}(2J)=+\infty\) [\(\Sigma_{1,-1}(2J)=(\Delta n-\Delta m)g^{2}/J\)], respectively. Thus an odd-parity (even-parity) BS of energy \(E_{1,-1}>2J\) (\(E_{1,1}>2J\)) always occurs. While the threshold condition of another BS with even-parity (odd-parity) and lower energy depends on the parameter \(\delta\). Specifically, for \(\delta<2J\), an even-parity (odd-parity) BS of energy \(E_{1,1}>2J\) (\(E_{1,-1}>2J\)) occurs when \(g>\sqrt{J(2J-\delta)/(\Delta n-\Delta m)}\). Instead, for \(\delta>2J\), an even-parity (odd-parity) BS of energy \(E_{1,1}>2J\) (\(E_{1,-1}>2J\)) always exists.

### Two nested giant atoms

The solutions of Eq. (37) can be determined by the intersection points of the curves \(y=E-\delta\) and \(y=\Sigma_{\beta\zeta}(E)\). Note that \(\Sigma_{1,\zeta}(E)\) [\(\Sigma_{-1,\zeta}(E)\)] is positive (negative) and monotonically decreases with \(E\) in the domain \(E>2J\) (\(E<-2J\)). Thus, there are at most four solutions to Eq. (37), labeled by \(E_{\beta\zeta}\). In addition, from Eq. (38), one can see that \(\Sigma_{1,1}(E)>\Sigma_{1,-1}(E)\) [\(\Sigma_{-1,1}(E)>\Sigma_{-1,-1}(E)\)] is satisfied in the domain \(E>2J\) (\(E<-2J\)). Thus we have \(E_{1,1}>E_{1,-1}\) [\(E_{-1,1}>E_{-1,-1}\)] if the two upper (lower) BSs coexist. Note that an analysis similar to that in the first paragraph of Appendix B.1 is also applicable to the present case of two nested atoms. Based on this, in what follows we further discuss in detail the threshold conditions of the BSs under different parameters.
We first discuss _the BSs below the continuum_ with \(\beta=-1\) and \(\zeta=\pm 1\). The limit values of these energy-correction functions at the lower band edge are \(\Sigma_{-1,1}(-2J)=-\Delta mg^{2}/J\) and \(\Sigma_{-1,-1}(-2J)=-\infty\), respectively. Thus, a BS of energy \(E_{-1,-1}<-2J\) certainly occurs. While the threshold condition of another BS with higher energy depends on the parameter \(\delta\). Specifically, for \(\delta>-2J\), a BS of energy \(E_{-1,1}<-2J\) occurs when \(g>\sqrt{J(2J+\delta)/\Delta m}\). Instead, for \(\delta<-2J\), a BS of energy \(E_{-1,1}<-2J\) always exists. Note that for lower BSs, \(\Sigma_{ab}^{(-1)}(E)<0\) [see Eq. (36)] is always fulfilled, thus we have \(\tan\Theta_{-1,1}<0\) (\(\tan\Theta_{-1,-1}>0\)) [see Eq. (27)] for the state of energy \(E_{-1,1}\) (\(E_{-1,-1}\)), namely, the excitation amplitudes of the two atoms have the opposite signs (same sign).
Next we analyze _the BSs above the continuum_, labeled by \(\beta=1\) and \(\zeta=\pm 1\).
(i) If \(\Delta n\in\mathbb{O}^{+}\), the limit values of these energy-correction functions at the upper band edge are \(\Sigma_{1,\pm 1}(2J)=\left(\Delta n+\Delta m\pm\sqrt{\left(\Delta n\right)^{2}+ \left(\Delta m\right)^{2}}\right)g^{2}/J\). Thus, for \(\delta<2J\), a BS of energy \(E_{1,\pm 1}>2J\) occurs when \(g>\sqrt{\frac{J(2J-\delta)}{\Delta n\Delta m}}\). Instead, for \(\delta>2J\), a BS of energy \(E_{1,\pm 1}>2J\) always exists.
(ii) If \(\Delta n\in\mathbb{B}^{+}\), the limit values of these energy-correction functions at the upper band edge are \(\Sigma_{1,1}(2J)=+\infty\) and \(\Sigma_{1,-1}(2J)=\Delta mg^{2}/J\), respectively. Thus, a BS of energy \(E_{1,1}>2J\) always exists. While the threshold condition of another BS with lower energy depends on the parameter \(\delta\). Specifically, for \(\delta<2J\), a BS of energy \(E_{1,-1}>2J\) occurs when \(g>\sqrt{J(2J-\delta)/\Delta m}\). Instead, for \(\delta>2J\), a BS of energy \(E_{1,-1}>2J\) always exists.
In addition, for both the cases (i) and (ii), when \(\Delta m\in\mathbb{B}^{+}\), \(\Sigma_{ab}^{(1)}(E)>0\) is fulfilled for \(E>2J\) [see Eq. (36)], thus we have \(\tan\Theta_{1,1}>0\) (\(\tan\Theta_{1,-1}<0\)) [see Eq. (27)] for the state of energy \(E_{1,1}\) (\(E_{1,-1}\)), namely, the excitation amplitudes of the two atoms have the same sign (opposite signs). When \(\Delta m\in\mathbb{O}^{+}\), we can obtain the opposite results.
The results derived in this subsection are summarized in Sec. V.3.
## Appendix C Bound-state conditions for the case of \(N_{\rm a}\gg 1\) giant atoms
Similar to the analysis for the \(N_{a}=2\) case, we can obtain the threshold conditions of the metabands for the \(N_{\rm a}\gg 1\) case, as shown in the following subsections.
### Atomic array with separate giant atoms
We first discuss _the BSs below the continuum_, labeled by \(\beta=-1\) and \(\gamma=\pm 1\). From Eq. (43) we can find that \(\Sigma_{-1,1}^{(N_{\rm a}\gg 1)}(E)<\Sigma_{-1,-1}^{(N_{\rm a}\gg 1)}(E)\) is fulfilled in the domain \(E<-2J\). In addition, the limit values of the energy-correction functions at the lower band edge are \(\Sigma_{-1,1}^{(N_{\rm a}\gg 1)}(-2J)=-\infty\) and \(\Sigma_{-1,-1}^{(N_{\rm a}\gg 1)}(-2J)=-\Delta mg^{2}/J\), respectively. Thus a BS of energy \(E_{-1,1}<-2J\), which gives the lower metaband edge, always occurs. While the threshold condition of the BS of energy \(E_{-1,-1}\), which gives the upper metaband edge, depends on the parameter \(\delta\). Specifically, for \(\delta>-2J\), a BS of energy \(E_{-1,-1}<-2J\) occurs when \(g>\sqrt{J(2J+\delta)/\Delta m}\). Instead, for \(\delta<-2J\), a BS of energy \(E_{-1,-1}<-2J\) always exists.
Next we analyze _the BSs above the continuum_, labeled by \(\beta=1\) and \(\gamma=\pm 1\).
(i) If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), \(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(E)<\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(E)\) [\(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(E)>\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(E)\)] is fulfilled in the domain \(E>2J\). In addition, we have \(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(2J)=\Delta n\Delta mg^{2}/[(\Delta n+\Delta m)J]\) and \(\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(2J)=\Delta ng^{2}/J\) [\(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(2J)=\Delta ng^{2}/J\) and \(\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(2J)=\Delta n\Delta mg^{2}/[(\Delta n+\Delta m)J]\)]. Thus, for \(\delta<2J\), the BS of energy \(E_{1,1}>2J\) occurs when \(g>\sqrt{\frac{J(2J-\delta)(\Delta n+\Delta m)}{\Delta n\Delta m}}\) [\(g>\sqrt{J(2J-\delta)/\Delta n}\)], and the BS of energy \(E_{1,-1}>2J\) occurs when \(g>\sqrt{J(2J-\delta)/\Delta n}\) [\(g>\sqrt{\frac{J(2J-\delta)(\Delta n+\Delta m)}{\Delta n\Delta m}}\)]. And for \(\delta>2J\), the BSs of energy \(E_{1,\pm 1}>2J\) always exist. For both the cases of \(\delta<2J\) and \(\delta>2J\), \(E_{1,-1}\) (\(E_{1,1}\)) forms the upper metaband edge, and \(E_{1,1}\) (\(E_{1,-1}\)) forms the lower metaband edge.
(ii) If \(\Delta n\in\mathbb{B}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), \(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(E)<\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(E)\) [\(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(E)>\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(E)\)] is fulfilled in the domain \(E>2J\). Moreover, at the upper band edge, these functions take the values \(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(2J)=\Delta mg^{2}/J\) [\(\Sigma_{1,1}^{(N_{\rm a}\gg 1)}(2J)=+\infty\)] and \(\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(2J)=+\infty\) [\(\Sigma_{1,-1}^{(N_{\rm a}\gg 1)}(2J)=\Delta mg^{2}/J\)], respectively. Thus the BS of energy \(E_{1,-1}>2J\) (\(E_{1,1}>2J\)), which gives the upper metaband edge, certainly occurs. While the threshold condition of another BS, which gives the lower metaband edge, depends on the parameter \(\delta\). Specifically, for \(\delta<2J\), a BS of energy \(E_{1,1}>2J\) (\(E_{1,-1}>2J\)) occurs when \(g>\sqrt{J(2J-\delta)/\Delta m}\). Instead, for \(\delta>2J\), a BS of energy \(E_{1,1}>2J\) (\(E_{1,-1}>2J\)) always exists.
The results derived in this subsection are summarized in Sec. VI.1.1.
### Atomic array with braided giant atoms
We first discuss _the BSs below the continuum_, labeled by \(\beta=-1\) and \(\gamma=\pm 1\). From Eq. (45) we can find that \(\Sigma_{-1,1}^{(N_{a}\gg 1)}(E)<\Sigma_{-1,-1}^{(N_{a}\gg 1)}(E)\) is fulfilled in the domain \(E<-2J\), and at the lower band edge, \(\Sigma_{-1,1}^{(N_{a}\gg 1)}(-2J)=-\infty\) and \(\Sigma_{-1,-1}^{(N_{a}\gg 1)}(-2J)=-\Delta mg^{2}/J\) are satisfied, respectively. Thus the threshold conditions for this case are the same as the array of separate atoms discussed in Sec. C.1.
Next we analyze the _BSs above the continuum_, labeled by \(\beta=1\) and \(\gamma=\pm 1\).
(i) If \(\Delta n\in\mathbb{O}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), \(\Sigma_{1,1}^{(N_{a}\gg 1)}(E)<\Sigma_{1,-1}^{(N_{a}\gg 1)}(E)\) [\(\Sigma_{1,1}^{(N_{a}\gg 1)}(E)>\Sigma_{1,-1}^{(N_{a}\gg 1)}(E)\)] is fulfilled in the domain \(E>2J\). In addition, we have \(\Sigma_{1,1}^{(N_{a}\gg 1)}(2J)=\frac{(\Delta n-2\Delta m)\Delta mg^{2}}{(\Delta n-\Delta m)J}\) and \(\Sigma_{1,-1}^{(N_{a}\gg 1)}(2J)=(\Delta n-2\Delta m)g^{2}/J\) [\(\Sigma_{1,1}^{(N_{a}\gg 1)}(2J)=(\Delta n-2\Delta m)g^{2}/J\) and \(\Sigma_{1,-1}^{(N_{a}\gg 1)}(2J)=\frac{(\Delta n-2\Delta m)\Delta mg^{2}}{(\Delta n-\Delta m)J}\)]. Thus the threshold conditions for this case can be obtained from the results for the separate atoms by replacing \(\Delta n\) with \(\Delta n-2\Delta m\).
(ii) If \(\Delta n\in\mathbb{B}^{+}\) and \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), \(\Sigma_{1,1}^{(N_{a}\gg 1)}(E)<\Sigma_{1,-1}^{(N_{a}\gg 1)}(E)\)\([\Sigma_{1,1}^{(N_{a}\gg 1)}(E)>\Sigma_{1,-1}^{(N_{a}\gg 1)}(E)]\) is fulfilled in the domain \(E>2J\). Moreover, at the upper band edge, these functions take the values \(\Sigma_{1,1}(2J)=\Delta mg^{2}/J\)\([\Sigma_{1,1}(2J)=+\infty]\) and \(\Sigma_{1,-1}(2J)=+\infty\)\([\Sigma_{1,-1}(2J)=\Delta mg^{2}/J]\), respectively. Thus the threshold conditions for this case are the same as the array of separate atoms.
The results derived in this subsection are summarized in Sec. VI.1.2.
### Atomic array with nested giant atoms
We first discuss _the BSs below the continuum_, labeled by \(\beta=-1\) and \(\gamma=\pm 1\). From Eq. (46) we can find that \(\Sigma_{-1,1}^{(N_{a}\gg 1)}(E)<\Sigma_{-1,-1}^{(N_{a}\gg 1)}(E)\) is fulfilled in the domain \(E<-2J\). Moreover, at the lower band edge, \(\Sigma_{-1,1}^{(N_{a}\gg 1)}(-2J)=-\infty\) and \(\Sigma_{-1,-1}^{(N_{a}\gg 1)}(-2J)=-\Delta mg^{2}/(2J)\) are satisfied. One can see that the characteristics of \(\Sigma_{-1,1}^{(N_{a}\gg 1)}(E)\) are the same as those of a separate-atom array. Thus the threshold conditions for the lower metaband can be obtained straightforwardly from the corresponding discussions in Sec. C.1.
Next we analyze _the BSs above the continuum_, labeled by \(\beta=1\) and \(\gamma=\pm 1\). If \(\Delta m\in\mathbb{O}^{+}\) (\(\Delta m\in\mathbb{B}^{+}\)), \(\Sigma_{1,1}^{(N_{a}\gg 1)}(E)<\Sigma_{1,-1}^{(N_{a}\gg 1)}(E)\)\([\Sigma_{1,1}^{(N_{a}\gg 1)}(E)>\Sigma_{1,-1}^{(N_{a}\gg 1)}(E)]\) is fulfilled in the domain \(E>2J\). In addition, we have \(\Sigma_{1,1}^{(N_{a}\gg 1)}(2J)=\Delta mg^{2}/J\)\([\Sigma_{1,1}^{(N_{a}\gg 1)}(2J)=+\infty]\) and \(\Sigma_{1,-1}^{(N_{a}\gg 1)}(2J)=+\infty\)\([\Sigma_{1,-1}^{(N_{a}\gg 1)}(2J)=\Delta mg^{2}/J]\), respectively. Clearly, these results are the same as those of a separate-atom array with \(\Delta n\in\mathbb{B}^{+}\). Thus the threshold conditions for this case can also be obtained from the corresponding discussions in Sec. C.1.
The results derived in this subsection are summarized in Sec. VI.1.3.
|
2303.09180 | Features of a nano-twist phase in the nanolayered Ti3AlC2 MAX phase | Complex intermetallic materials known as MAX phases exhibit exceptional
properties from both metals and ceramics, largely thanks to their nanolayered
structure. With high-resolution scanning transmission electron microscopy
supported by atomistic modelling, we reveal atomic features of a nano-twist
phase in the nanolayered \MAX. The rotated hexagonal single-crystal is
encompassed within basal symmetric twist interfaces similar to grain
boundaries. In particular, we show that air-oxidation at \SI{1000}{\celsius}
can form a twisted phase that leads to the formation of interfacial dislocation
networks with screw characters or to severe interfacial reconstructions.
Additionally, we explore the contribution of disclinations to the
representation by continuum models of the stress field generated by such
nano-twist defect in the \MAX{} bulk phase. The occurrence of this unexpected
defect is expected to impact the physical response of this nanolayered-based
material as such supports property-by-design approaches. | Julien Guénolé, Vincent Taupin, Maxime Vallet, Wenbo Yu, Antoine Guitton | 2023-03-16T09:48:53Z | http://arxiv.org/abs/2303.09180v1 | # Features of a nano-twist phase in the nanolayered Ti\({}_{3}\)AlC\({}_{2}\) MAX phase
###### Abstract
Complex intermetallic materials known as MAX phases exhibit exceptional properties from both metals and ceramics, largely thanks to their nanolayered structure. With high-resolution scanning transmission electron microscopy supported by atomistic modelling, we reveal atomic features of a nano-twist phase in the nanolayered Ti\({}_{3}\)AlC\({}_{2}\). The rotated hexagonal single-crystal is encompassed within basal symmetric twist interfaces similar to grain boundaries. In particular, we show that air-oxidation at 1000 \({}^{\circ}\)C can form a twisted phase that leads to the formation of interfacial dislocation networks with screw characters or to severe interfacial reconstructions. Additionally, we explore the contribution of disclinations to the representation by continuum models of the stress field generated by such nano-twist defect in the Ti\({}_{3}\)AlC\({}_{2}\) bulk phase. The occurrence of this unexpected defect is expected to impact the physical response of this nanolayered-based material as such supports property-by-design approaches.
+
Footnote †: journal: Scripta Materialia
Ti\({}_{3}\)AlC\({}_{2}\) belongs to the wide family of MAX phases [1]. These ternary compounds attract keen interest, because they are stiff, lightweight, machinable, made from relatively inexpensive raw materials, resistant to oxidation and thermal shock, and capable of remaining strong up to temperatures above 1,300\({}^{\circ}\)C in air [2]. More than fifty compounds are thermodynamically stable and all of them exhibit the same range of promising properties. Because of their composition, they were called M\({}_{n+1}\)AX\({}_{n}\) phases (\(n=1\) to 3, M is a transition metal, A is an A-group element and X is nitrogen and/or carbon) [3].
As a typical MAX phase, Ti\({}_{3}\)AlC\({}_{2}\) has a nanolayered structure with a hexagonal lattice, whose space group is \(P6_{3}/mmc\)[2, 4]. The primitive cell can be described as a stacking of two Ti\({}_{6}\)C octahedron layers with one layer of Al [1]. Furthermore, measurements of lattice parameters with numerous methods reveal that Ti\({}_{3}\)AlC\({}_{2}\) exhibits a high crystalline anisotropy. The \(c/a\) ratio is slightly higher than 6 [5].
It is established that MAX phases experience plastic deformation by the glide on basal planes of \(\langle a\rangle\)-dislocations confined between the M\({}_{6}\)X and Al layers, thus forming pile-ups and walls [6, 5, 7]. The latter can form local disorientation areas, known as kink bands [8, 9]. Furthermore, numerous dislocation interactions forming networks (dipoles, reactions) and high lattice friction have been observed [10, 11]. Note that out-of-basal-plane dislocations have been observed as well, but they do not play a role at room temperature in standard deformation conditions [12, 13]. At high temperature, it has been revealed that out-of-basal-plane \(\langle a\rangle\)-dislocations are common events and hence cross-slip (from basal planes to prismatic or pyramidal planes) plays a key role in the deformation [14, 15]. This increase of available glide systems is likely at the origin of the brittle-to-ductile transition of MAX phases [14]. Note also that Frank partial \(\langle c\rangle\)-dislocations correlated with a diffusion mechanism were recently reported [16]. Moreover, the observation of stacking faults is another major microstructural feature, but their role in deformation both at room and high temperatures remains unclear [6, 13, 15].
Of all MAX phases, those containing Al, such as Ti\({}_{3}\)AlC\({}_{2}\), are the most resistant to oxidation [2]. During exposure of Ti-Al-C MAX phases to an oxidizing environment at high temperatures, the outward diffusion of the weakly bonded Al atoms is much faster compared to the more covalently bonded Ti atoms [2, 17, 18, 19]. Therefore, it results in a regime where a superficial protective layer of Al\({}_{2}\)O\({}_{3}\) is formed. However, TiO\({}_{2}\) can be formed as well, leading, this time, to a catastrophic regime. Both mechanisms strongly depend on the oxidation conditions and the initial microstructure [18, 19]. Despite the description of these oxides, the operating oxidation mechanisms of Ti\({}_{3}\)AlC\({}_{2}\) and more generally of MAX phases remain poorly documented, especially regarding how the crystallographic structure evolves. During decomposition of 312 MAX phases (such as Ti\({}_{3}\)AlC\({}_{2}\), Ti\({}_{3}\)SiC\({}_{2}\)) at high temperatures in an oxygen-containing atmosphere or in species having a high affinity to the A-element, the A-element is observed to de-intercalate, thus forming Ti\({}_{3}\)C\({}_{2}\) platelets [20, 21, 22, 23]. More precisely, in the case of the Ti\({}_{2}\)AlC-Cu composite, this de-intercalation of Al is associated with a Frank partial dislocation-based mechanism [16]. This formation of Ti\({}_{3}\)C\({}_{y}\) platelets is then viewed as a reinforcement of the composites [20].
Such is the background of the current letter. This work is motivated by the analysis of original phases that were observed after oxidation in a Ti\({}_{3}\)AlC\({}_{2}\) matrix by high-resolution scanning transmission electron microscopy (HR-STEM). Such nanolamellar phases appear to be twisted with respect to the surrounding matrix and also associated with diffusion mechanisms in their neighborhood. Hence, by interpreting our HR-STEM observations using molecular dynamics simulations of model nano-twist phases, we study their atomic-scale features including interfacial crystal defects, excess energy, and mechanical fields. Concerning the latter, a continuous description of the twist phase using a disclination-based mechanical framework is conducted in complement to atomistic simulations to further evidence the complexity of such phases.
The specimen was prepared by hot isostatic pressing (HIP). Briefly, powders of Ti, Al and TiC were mixed in stoichiometric proportions 2Ti:1.05Al:0.85C and then cold-compacted into cylindrical steel dies using a uniaxial pressure. Powder compacts were encapsulated into Pyrex containers under high vacuum for reactive sintering in the HIP machine. Afterwards, the specimens were oxidized at 1000 \({}^{\circ}\)C for 25 h in a flowing air atmosphere. More details are given in [24; 19]. Cross-sectional TEM samples were prepared by focused ion beam (FIB) on a FEI Helios dual beam Nanolab 600i. Atomically-resolved high-angle annular dark field scanning TEM (HAADF-STEM) was performed on a Titan\({}^{3}\) G2 S/TEM fitted with a double aberration corrector for probe and image corrections and operating at 300 kV.
HR-STEM in HAADF mode of the lamella is shown in Fig. 1 and at lower magnification in Fig. SM1 in supplementary materials. As the HAADF detector senses a greater signal from atoms with a higher atomic number Z, Ti columns appear brighter in the resulting micrograph [25]. The nano-layered structure of Ti\({}_{3}\)AlC\({}_{2}\) highlighted in the circular inset is consistent with the expected one for a 312-MAX phase projected along [\(\overline{1}\)\(\overline{1}\)\(\overline{2}\)\(0\)] _i.e._ zigzag stacking of three TiC planes followed by one Al plane, the Al plane being the axis of symmetry of the zigzag. Several large Ti\({}_{3}\)C\({}_{y}\) laths are clearly visible as well. These sub-phases are probably similar to Ti\({}_{3}\)C\({}_{2}\) phases as they are formed by the diffusion of the Al interlayers of the Ti\({}_{3}\)AlC\({}_{2}\) phase. However, the experiments we used are not able to characterize them precisely, and the mechanisms responsible for their formation are out of the scope of this letter. The region of interest shows blurred atomic layers (ROI, white square in Fig. 1). The blur is along the [1 \(\overline{1}\)\(0\)\(0\)] direction, localized within the basal plane. This indicates that this part of the lattice has undergone a rotation around the [0 0 0 1] direction. The boundaries between this rotated phase and the other phase are marked with dashed white lines. Particular orientation relationships for MX platelets in MAX interfaces have been reported in the literature [26; 27]. However, such orientations will not produce the high-resolution contrast we observe in Fig. 1, in particular, the blurred atomic layer within the NTP. Note that the upper extremity of the ROI shows a large, blurred area and visible inter-diffusion between Al and Ti planes. Such a mechanism, which induces severe local lattice strain, might be responsible for the difference in the contrast of the Ti\({}_{x}\)C\({}_{y}\) phase on both sides of the NTP (Fig. 1).
To gain more insights on the observed defect and to characterize precisely the structure of the twist boundary, we modeled the nano-twist phase by atomistic simulations. Molecular dynamics (MD) based on interatomic potentials is an excellent modelling method to investigate the atomic-scale configuration of defects, in particular interfaces [28; 29]. It has been widely used over the past decade to investigate grain boundaries [30; 31; 32; 33] and phase boundaries [34; 35; 36; 37]. The only published model suitable to describe Ti\({}_{3}\)AlC\({}_{2}\) atomic interactions is the bond-order potential (BOP) recently adjusted by Plummer and Tucker [38]. It is based on the formalism initially proposed by Tersoff [39] and described in detail by Albe _et al._[40]. Fig. 2 shows the atomistic simulation setup used to compute the energy and the structure of the rotated phase boundaries. Within a fully periodic bulk Ti\({}_{3}\)AlC\({}_{2}\) phase, a cookie-shaped region in the center is rotated by an angle \(\Psi\) around an axis normal to the basal plane, as indicated in blue in Fig. 2. The inset shows, as an example, the rotated atomic structure for \(\Psi=3^{\circ}\). Note that structures with contrasts similar to the HAADF-STEM ones can be obtained with different values of \(\Psi\) (Fig. 1).
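To make the construction of the rotated region concrete, here is a minimal geometric sketch (ours; the actual simulations rely on an MD code with the BOP of Ref. [38], which is not reproduced here) of how atoms inside a cookie-shaped region can be twisted about [0 0 0 1] before relaxation. All names and values are purely illustrative.

```python
import numpy as np

def twist_cookie(positions, center, radius, half_height, psi_deg):
    """Rotate by psi (degrees) about the z-axis ([0001]) all atoms lying inside a
    cylindrical 'cookie' region of given radius and half-height around `center`.
    `positions` is an (N, 3) array of Cartesian coordinates with z along [0001]."""
    psi = np.deg2rad(psi_deg)
    rot = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                    [np.sin(psi),  np.cos(psi), 0.0],
                    [0.0,          0.0,         1.0]])
    rel = positions - center
    inside = (rel[:, 0]**2 + rel[:, 1]**2 < radius**2) & (np.abs(rel[:, 2]) < half_height)
    out = positions.copy()
    out[inside] = rel[inside] @ rot.T + center
    return out, inside

# Stand-in coordinates (angstrom) just to exercise the function.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 200.0, size=(1000, 3))
twisted, mask = twist_cookie(pos, center=np.array([100.0, 100.0, 100.0]),
                             radius=60.0, half_height=20.0, psi_deg=3.0)
print(mask.sum(), "atoms fall inside the twisted region")
```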
It is important to mention that we do not intend here to model the exact system observed in our experiments, for several reasons. (1) The limitations of our experimental approaches do not allow us to determine the exact composition of the Ti\({}_{x}\)C\({}_{y}\) phase. (2) The interaction potential used in our MD simulations has been designed to model the Ti\({}_{3}\)AlC\({}_{2}\) phase and, thus, might be less reliable for any non-stoichiometric Ti\({}_{x}\)C\({}_{y}\) phase. (3) The local atomic environment of the interface, up to three TiC interlayers, is identical for both Ti\({}_{3}\)AlC\({}_{2}\) and Ti\({}_{x}\)C\({}_{y}\) phases. In this context, changing the material that forms the interfaces of the NTP
Figure 1: Experimental observation of the nano-twist phase (region of interest, ROI) in the nanolayered Ti\({}_{3}\)AlC\({}_{2}\). Filtered HR-STEM micrograph in HAADF mode with the electron direction along [1 \(\overline{1}\)\(\overline{2}\)\(0\)]. Boundaries of the nano-twist phase are indicated by dashed lines. A model of the Ti\({}_{3}\)AlC\({}_{2}\) crystallographic structure is shown in the inset. Some Ti\({}_{x}\)C\({}_{y}\) laths, similar to Ti\({}_{3}\)C\({}_{2}\), are also observed.
in our MD simulations should influence the value of the interfacial energy, but will have a negligible impact on what is the focus of our work: the energetic profile and the crystallographic structure of the interfaces. The consideration of idealized systems or surrogate materials is common practice with atomistic simulations, including for direct comparisons with experimental results [35; 41; 37; 16].
The as-formed nano-twist phase (NTP) exhibits prismatic and basal interfaces with the Ti\({}_{3}\)AlC\({}_{2}\) phase. The prismatic interface is not clearly defined in our experimental observation, whereas the basal interface appears atomically sharp (see Fig. 1). In this work, we thus focus on the characterisation of the NTP basal boundary (NTB). Fig. 3 shows the interfacial energy of the NTB as a function of the twist angle \(\Psi\). The energy is computed by considering a cylindrical region in the center of the NTP that does not encompass the prismatic boundaries. For each twist angle, the system is statically relaxed by using the conjugate gradient and FIRE [42; 43] methods. The dimensions of the box are adapted to ensure a globally stress-free system. The energy of such an as-relaxed NTB is shown with small blue dots in Fig. 3. Selected configurations have been annealed at 600 K for 100 ps. A Nose-Hoover thermostat controls the temperature and a Nose-Hoover barostat ensures constant zero pressure. The systems are subsequently quenched and their energies are shown in Fig. 3 with large orange dots.
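For completeness, one possible way to extract the NTB energy from such a relaxed configuration is sketched below. The normalisation by the two basal interfaces of the cylindrical region is our assumption, and the per-atom energies and bulk reference are taken as given inputs from the MD output.

```python
import numpy as np

def ntb_energy(per_atom_energy, positions, center, radius, e_bulk, n_interfaces=2):
    """Excess energy per unit basal area inside a central cylinder that excludes
    the prismatic boundaries. `per_atom_energy` holds relaxed potential energies,
    `e_bulk` is the cohesive energy per atom of the reference bulk phase.
    Dividing by two interfaces (top and bottom basal boundaries of the cookie)
    is an assumption of this sketch."""
    rel = positions - center
    inside = rel[:, 0]**2 + rel[:, 1]**2 < radius**2
    excess = np.sum(per_atom_energy[inside] - e_bulk)
    area = np.pi * radius**2
    return excess / (n_interfaces * area)
```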
The insets in Fig. 3 present the out-of-plane stress component \(\sigma_{zz}\) of the as-relaxed NTB for different \(\Psi\). By showing the distortion of the crystal lattice at the basal boundary, they reveal the structure of the interface. More details are presented in Fig. 4 for \(\Psi=1^{\circ}\), the smallest twist angle considered in this work. The distortion of the crystal lattice at the NTB is captured by the out-of-plane stress component \(\sigma_{zz}\) (Fig. 4a,b). This evokes the signature of an interfacial dislocation network, with the three-fold symmetry of the basal plane and the nodes of interacting interfacial dislocations. The atomic positions for different basal interlayers shown in Fig. 4(c,d) confirm this observation, as the relative displacement of the atoms is characteristic of screw dislocations.
The NTP we observed experimentally is not a grain in itself, but is located within one crystallographically homogeneous area containing perfectly coherent Ti\({}_{3}\)AlC\({}_{2}\) and Ti\({}_{x}\)C\({}_{y}\) phases. The phase Ti\({}_{x}\)C\({}_{y}\) being directly issued from the Ti\({}_{3}\)AlC\({}_{2}\) phase, this area can be considered as a Ti\({}_{3}\)AlC\({}_{2}\)-based grain. This nano-phase exhibits a limited number of degrees of freedom, with its twist axis and at least one interface being precisely defined: the \(\langle c\rangle\) axis and the basal planes, respectively. It is clear that the basal interface, which we called the NTB, shares similarities with pure twist grain boundaries, such as (1) an interfacial energy evolution with a typical bell shape, (2) an interfacial structure that can be described by a network of screw dislocations lying on basal planes and cross-slipping to 1st-order prismatic planes (consistent with previous experimental observations [14]) for low twist angles, (3) a spacing between interfacial dislocations inversely proportional to the twist angle for low angle values and (4) a description of the interface in terms of a dislocation network that vanishes for high twist angles. The transition from a low-angle configuration to a high-angle configuration occurs at \(\Psi\approx 15^{\circ}\) (Fig. 3, insets). However, the NTB also exhibits clear differences from grain boundaries, namely (1) only one macroscopic degree of freedom (DoF) instead of five for grain boundaries and (2) interfacial reconstructions for high twist angles leading to non-monotonous structural patterns.
Interestingly, such NTPs were already reported by Drouelle _et al._ in Ti\({}_{3}\)AlC\({}_{2}\) deformed by creep at high temperature. Indeed, they observed, by conventional TEM, highly disorganized lenticular defects with a very high density of screw dislocations [15]. In addition, Zhang _et al._ reported low-angle twist grain boundaries (disorientation around 0.5\({}^{\circ}\)) in Ti\({}_{3}\)AlC\({}_{2}\) compressed uniaxially at 1200 \({}^{\circ}\)C [44]. Such sub-grain boundaries are formed by screw dislocation networks originating from basal dislocation reactions, like those predicted here in Fig. 4a. Both studies conclude that such features may play a role in the plasticity of Ti\({}_{3}\)AlC\({}_{2}\).
It is worth noticing that, within the time frame considered by our atomistic simulations, neither the interfacial energy nor
Figure 3: Energy of the basal interface of the nano-twist phase as function of the twist angle \(\Psi\). Interfacial energy of the ground state and annealed structure in blue and orange dots, respectively. The dotted line is a guide for the eyes. Insets show magnified views of the stress field \(\sigma_{zz}\) for \(\Psi=3,13,14,15,30\).
Figure 2: Modeling of the nano-twist phase in the nano-layered Ti\({}_{3}\)AlC\({}_{2}\) by means of a cookie-shape rotation region. (a) atomistic simulation setup of the cookie-shape region (in blue) parallel to the basal planes, with [0 0 0 1] rotation axis and \(\Psi\) rotation angle. The black square indicates the location of the inset: magnified projected view of a 10 nm thin slice of the side of the cookie-shape nano-twist phase for \(\Psi=3^{\circ}\). Ti, Al and C are colored in green, red and black, respectively.
the interfacial structure is significantly altered by annealing. The configuration of the NTB predicted by atomistic modelling thus appears to be stable.
For high twist angles (\(\Psi>15^{\circ}\)), the interfacial configuration exhibits severe reconstructions. By comparison to configurations with low twist angles, this reconstruction releases localized atomic stresses but does not lead to lower interfacial energies. Interfacial energies for tilt and twist boundaries in crystals with cubic structures often show clear drops for particular \(\Psi\) values that correspond to high-symmetry misorientations. Yet, such stable high-angle configurations are not observed with the NTB investigated in this work. This can be related to the limited DoF accessible to this boundary, as described in the following. Grain boundaries are classically considered within the coincidence site lattice (CSL) theory, which predicts periodic interfaces for particular misorientations. As such, they are defined by five macroscopic DoF (directions and rotation), among which some lead to particularly stable configurations with low interfacial energy. Additionally, it is known that considering the microscopic DoF (translation) accessible to any grain boundary is crucial to fully determine the most stable configurations. However, the NTB we observed does not have so many DoF. In particular, the NTP being confined within a well-defined bulk Ti\({}_{3}\)AlC\({}_{2}\) phase, it does not allow the NTB any translational DoF. Allowing the NTB microscopic DoF might thus lead to lower-energy configurations. This is out of the scope of the present work, which is focused on the characterization of the NTP we observed experimentally.
For large angles \(\Psi\), the NTB cannot be conveniently described by dislocations anymore, as discussed above and illustrated in the insets of Fig. 3. We propose that for high angles of rotation, the NTB can be appropriately represented by disclination loops. As the rotational counterpart of dislocations, disclinations are line defects which introduce a discontinuity of the elastic rotation field, referred to as the Frank vector. The concept of disclinations is appropriate for the description of elastic fields of tilt boundaries with high misorientations [45] and for nanotwin microstructures [46; 47]. Regarding the cookie-shaped NTP considered in this work, the stress field due to the network of overlapping dislocation arrays can be equivalently and more simply described by using two disclination loops delimiting the NTP. Such a representation was successfully applied for nanolamellar twins with tilt misorientations [47]. Here, we propose to apply this concept to twist interfaces with high misorientations, which has never been attempted so far to our knowledge. More details can be found as Supplementary Material.
As shown in Fig. 5(a), an NTP with misorientation 30\({}^{\circ}\) is modelled by a dipole of twist disclination loops with Frank vector magnitude 30\({}^{\circ}\) about the Z ([0 0 0 1]) axis. A recent field disclination mechanics framework is used to build this loop dipole [45; 47]. The two main Cauchy stress tensor components predicted by the model are the shear stresses \(\sigma_{xz}\) and \(\sigma_{yz}\). The latter are characteristic of screw dislocation out-of-plane shear stresses and clearly arise from the twist rotation discontinuity applied within the NTB.
Fig. 5(b) shows \(\sigma_{xz}\) and \(\sigma_{yz}\) as predicted by atomistic simulations and evidences a fair match with what is predicted by the disclination model in Fig. 5(a). The similarities are (1) the morphology of the surrounding stress field maxima with two inverse dipoles in a particular crystallographic direction, and (2) the magnitude of these stress dipoles around \(\pm 1\) GPa. This match between the atomistic and continuum representations is however partial. The differences are (1) the direction of the stress maxima, rotated by 90\({}^{\circ}\), and (2) stresses within the NTB predicted by MD but not by the disclinations. These differences clearly originate from characteristics not considered by the disclination-based model. Typically, dislocation core effects are only captured by atomistic modelling and lead to \(\epsilon_{xz}\) eigenstrains. Additionally, the reconstruction of the NTB for high twist angles we observed by MD appears to play a crucial role in the distribution of the stress field. Disclination-based models will have to be enriched to consider such atomistic mechanisms, but they remain very promising for modelling general interfaces in complex materials at the continuum scale.
Regarding the formation mechanism of the NTP, our preliminary observations suggest a strong influence of the Ti-Al diffusion in the basal planes at the onset of the NTB. The diffusion could be favoured by the interfacial dislocation network that locally induces a variation of free volume within the basal planes.
The exact impact of a nano-twist phase on the properties of the Ti\({}_{3}\)AlC\({}_{2}\) nanolayered structure is yet unclear, but such phases do not seem to be anecdotal events, as reported in [18; 44]. Similarly to what has been observed with the formation of Frank partial dislocations [16], the interfacial dislocation network could
Figure 4: Interfacial structure of the Ti\({}_{3}\)AlC\({}_{2}\) nano-twist phase for \(\Psi=1^{\circ}\). (a) global and (b) magnified views of the local stress field \(\sigma_{xz}\) revealing the typical dislocation network of a low-angle grain boundary. (c) same view as (b) but showing chemical species as in Fig. 2. (d) magnified view of (c) revealing the atomic displacements induced by a screw dislocation. In (c) and (d), atomic radii are proportional to the stacking position in the [0 0 0 1] direction: light green, red and dark green, from top to bottom. The screw dislocation line is indicated as a blue dashed line.
limit the propagation of \(\langle a\rangle\) dislocations and thus participate in the strengthening of Ti\({}_{3}\)AlC\({}_{2}\) phases. The prismatic interfaces of the NTP would require further investigation as they should play an important role by hindering the propagation of basal dislocations more efficiently than the basal interfaces. From a more general perspective, nano-twist crystals have been shown to exhibit peculiar optical properties [48].
Additional investigations are ongoing to gain a comprehensive understanding of the formation mechanisms of this nano-twist phase, in order to open up new possibilities for MAX phases with tailored properties.
## Acknowledgments
This project has received financial support from the CNRS through the MITI interdisciplinary programs and from the National Natural Science Foundation of China (no. 52175284). This work was performed using HPC resources from GENCI-TGCC (grants 2020-A0080911390 and 2021-A0100911390) and from the EXPLOR center of the Universite de Lorraine.
|
2306.14069 | Waypoint Transformer: Reinforcement Learning via Supervised Learning
with Intermediate Targets | Despite the recent advancements in offline reinforcement learning via
supervised learning (RvS) and the success of the decision transformer (DT)
architecture in various domains, DTs have fallen short in several challenging
benchmarks. The root cause of this underperformance lies in their inability to
seamlessly connect segments of suboptimal trajectories. To overcome this
limitation, we present a novel approach to enhance RvS methods by integrating
intermediate targets. We introduce the Waypoint Transformer (WT), using an
architecture that builds upon the DT framework and conditioned on
automatically-generated waypoints. The results show a significant increase in
the final return compared to existing RvS methods, with performance on par or
greater than existing state-of-the-art temporal difference learning-based
methods. Additionally, the performance and stability improvements are largest
in the most challenging environments and data configurations, including AntMaze
Large Play/Diverse and Kitchen Mixed/Partial. | Anirudhan Badrinath, Yannis Flet-Berliac, Allen Nie, Emma Brunskill | 2023-06-24T22:25:29Z | http://arxiv.org/abs/2306.14069v2 | # Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets
###### Abstract
Despite the recent advancements in offline reinforcement learning via supervised learning (RvS) and the success of the decision transformer (DT) architecture in various domains, DTs have fallen short in several challenging benchmarks. The root cause of this underperformance lies in their inability to seamlessly connect segments of suboptimal trajectories. To overcome this limitation, we present a novel approach to enhance RvS methods by integrating intermediate targets. We introduce the Waypoint Transformer (WT), using an architecture that builds upon the DT framework and conditioned on automatically-generated waypoints. The results show a significant increase in the final return compared to existing RvS methods, with performance on par or greater than existing state-of-the-art temporal difference learning-based methods. Additionally, the performance and stability improvements are largest in the most challenging environments and data configurations, including AntMaze Large Play/Diverse and Kitchen Mixed/Partial.
## 1 Introduction
Traditionally, offline reinforcement learning (RL) methods that compete with state-of-the-art (SOTA) algorithms have relied on objectives encouraging pessimism in combination with value-based methods. Notable examples of this approach include Batch Conservative Q-Learning (BCQ), Conservative Q-Learning (CQL), and Pessimistic Q-Learning (PQL) (Fujimoto et al., 2019; Kumar et al., 2020; Liu et al., 2020). However, these methods can be challenging to train and often require intricate hyperparameter tuning and various tricks to ensure stability and optimal performance across tasks.
Reinforcement learning via supervised learning (RvS) has emerged as a simpler alternative to traditional offline RL methods (Emmons et al., 2021). RvS approaches are based on behavioral cloning (BC), either conditional or non-conditional, to train a policy. Importantly, these methods eliminate the need for any temporal-difference (TD) learning, such as fitted value or action-value functions. This results in a simpler algorithmic framework based on supervised learning, allowing for progress in offline RL to build upon work in supervised learning. There are several successful applications of RvS methods, including methods conditioned on goals and returns (Kumar et al., 2019; Janner et al., 2021; Ding et al., 2019; Chen et al., 2021; Emmons et al., 2021).
However, RvS methods have typically struggled in tasks where seamlessly connecting (or "stitching") appropriate segments of suboptimal training trajectories is critical for success (Kumar et al., 2022). For example, when tasked with reaching specific locations in the AntMaze maze navigation environment or completing a series of tasks in the FrankaKitchen environment, RvS methods typically perform significantly worse than TD learning methods such as Implicit Q-Learning (Fu et al., 2020; Kostrikov et al., 2021). In the most challenging tasks, such as AntMaze Large, RvS methods often achieve no success whatsoever.
In this study, we leverage the transformer architecture (Vaswani et al., 2017) to construct an RvS method. As introduced by Chen et al. (2021), the decision transformer (DT) has demonstrated the ability to perform conditional behavioral cloning in the context of offline RL. However, similar to other RvS methods, DT proves inferior in performance across popular Gym-MuJoCo benchmarks compared to other value-based offline RL methods, with a 15% relative reduction in average return and lowered stability (Table 1).
To tackle these limitations of existing RvS methods, we introduce a waypoint generation technique that produces intermediate goals and more stable, proxy rewards, which serve as guidance to steer a policy to desirable outcomes. By conditioning a transformer-based RvS method on these generated targets, we obtain a trained policy that learns to follow them, leading to improved performance and stability compared to prior offline RL methods. The highlights of our proposed approach are presented and summarized as follows:
* We propose a novel RvS method, Waypoint Transformer, using waypoint generation networks and establish new state-of-the-art performance, surpassing all existing methods, in challenging tasks such as AntMaze Large and Kitchen Partial/Mixed (Fu et al., 2020) (Table 1). On tasks from Gym-MuJoCo, our method rivals the performance of TD learning-based methods such as Implicit Q-Learning and Conservative Q-Learning (Kostrikov et al., 2021, Kumar et al., 2020), with significant improvements over existing RvS methods.
* We motivate the benefit of conditioning RvS on intermediate targets using a chain-MDP example and an empirical analysis of maze navigation tasks. By providing such additional guidance on suboptimal datasets, we show that a policy optimized with a behavioral cloning objective chooses more optimal actions compared to conditioning on fixed targets (as in Chen et al. (2021), Emmons et al. (2021)), facilitating improved stitching capability.
* Our work also provides practical insights for improving RvS, such as significantly reducing training time, solving the hyperparameter tuning challenge in RvS posed by Emmons et al. (2021), and notably improved stability in performance across runs.
## 2 Related Work
Many recent offline RL methods have used fitted value or action-value functions (Liu et al., 2020, Fujimoto et al., 2019, Kostrikov et al., 2021, Kumar et al., 2020, Kidambi et al., 2020, Lyu et al., 2022) or model-based approaches leveraging estimation of dynamics (Kidambi et al., 2020, Yu et al., 2020, Argenson and Dulac-Arnold, 2020, Shen et al., 2021, Rigter et al., 2022, Zhan et al., 2021).
RvS, as introduced in Emmons et al. (2021), avoids fitting value functions and instead leverages behavioral cloning. In many RvS-style methods, the conditioning variable for the policy is based on the return (Kumar et al., 2019, Srivastava et al., 2019, Schmidhuber, 2019, Chen et al., 2021), but other methods use goal-conditioning (Nair et al., 2018, Emmons et al., 2021, Ding et al., 2019, Ghosh et al., 2019) or leverage inverse RL (Eysenbach et al., 2020). Recent work by Brandfonbrener et al. (2022) has explored the limitations of reward conditioning in RvS. In this study, we consider both reward and goal-conditioning.
Transformers have demonstrated the ability to generalize to a vast array of tasks, such as language modeling, image generation, and representation learning (Vaswani et al., 2017, Devlin et al., 2018, He et al., 2022, Parmar et al., 2018). In the context of offline RL, decision transformers (DT) leverage a causal transformer architecture to fit a reward-conditioned policy (Chen et al., 2021). Similarly, Janner et al. (2021) frame offline RL as a sequence modeling problem and introduce the Trajectory Transformer, a model-based offline RL approach that uses the transformer architecture.
Algorithms building upon the DT, such as online DT (Zheng et al., 2022), prompt DT (Xu et al., 2022) and Q-Learning DT (Yamagata et al., 2022), have extended the scope of DT's usage. Furuta et al. (2021) introduce a framework for hindsight information matching algorithms to unify several hindsight-based algorithms, such as Hindsight Experience Replay (Andrychowicz et al., 2017), DT, TT, and our proposed method.
Some critical issues with the DT unresolved by existing work are (a) its instability (i.e., large variability across initialization seeds) for some tasks in the offline setting (Table 1) and (b) lowered performance caused by an inability to stitch segments of suboptimal trajectories (Kumar et al., 2022), which leaves it outperformed by value-based methods such as Implicit Q-Learning (IQL) (Kostrikov et al., 2021) and Conservative Q-Learning (CQL) (Kumar et al., 2020). We address both these concerns with our proposed approach, demonstrating notably improved performance and reduced variance across seeds for tasks compared to DT and prior RvS methods (Table 1).
One of the areas of further research in RvS, per Emmons et al. (2021), is to address the complex and unreliable process of tuning hyperparameters, as studied in Zhang and Jiang (2021) and Nie et al. (2022). We demonstrate that our method displays low sensitivity to changes in hyperparameters compared to Emmons et al. (2021) (Table 2). Further, all experiments involving our proposed method use the same set of hyperparameters in achieving SOTA performance across many tasks (Table 1).
## 3 Preliminaries
We assume that there exists an agent interacting with a Markov decision process (MDP) with states \(s_{t}\in\mathcal{S}\) and actions \(a_{t}\in\mathcal{A}\) with unknown transition dynamics \(p(s_{t+1}\mid s_{t},a_{t})\) and initial state distribution \(p(s_{0})\). The agent chooses an action sampled from a transformer policy \(a_{t}\sim\pi_{\theta}(a_{t}\mid s_{t-k..t},\Phi_{t-k..t})\), parameterized by \(\theta\) and conditioned on the known history of states \(s_{t-k..t}\) and a conditioning variable \(\Phi_{t-k..t}\). Compared to the standard RL framework, where the policy is modeled by \(\pi(a_{t}\mid s_{t})\), we leverage a policy that considers past states within a fixed context window \(k\).
The conditioning variable \(\Phi_{t}\) is a specification of a goal or reward based on a target outcome \(\omega\). At training time, \(\omega\) is sampled from the data, as presented in Emmons et al. (2021). At test time, we assume that we can either generate or are provided global goal information \(\omega\) for goal-conditioned tasks. For reward-conditioned tasks, we specify a target return \(\omega\), following Chen et al. (2021).
To train our RvS-based algorithm on an offline dataset \(\mathcal{D}\) consisting of trajectories \(\tau\) with conditioning variable \(\Phi_{t}\), we compute the output of an autoregressive transformer model based on past and current states and conditioning variable provided in each trajectory. Using a negative log-likelihood loss, we use gradient descent to update the policy \(\pi_{\theta}\). This procedure is summarized in Algorithm 1.
```
Input: Training dataset \(\mathcal{D}=\{\tau_{1},\tau_{2},\tau_{3},...,\tau_{n}\}\) of training trajectories.
for each \(\tau=(s_{0},a_{0},\Phi_{0},s_{1},a_{1},\Phi_{1},...)\) in \(\mathcal{D}\) do
    Compute \(\pi_{\theta}(a_{t}\mid s_{t-k..t},\Phi_{t-k..t})\) for all \(t\)
    Calculate \(L_{\theta}(\tau)=-\sum_{t}\log\pi_{\theta}(a_{t}\mid s_{t-k..t},\Phi_{t-k..t})\)
    Backpropagate gradients w.r.t. \(L_{\theta}(\tau)\) to update model parameters
end for
```
**Algorithm 1** Training algorithm for transformer-based policy trained on offline dataset \(\mathcal{D}\).
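As a concrete illustration of Algorithm 1, the sketch below shows one way the training loop could look in PyTorch, assuming continuous actions modeled by a fixed-variance Gaussian (so the negative log-likelihood reduces to a mean-squared error) and a toy per-timestep network standing in for the causal transformer; the module names, shapes, and optimizer choice are illustrative assumptions rather than the authors' released implementation.

```python
# Minimal, hypothetical sketch of Algorithm 1 (not the authors' code).
# Each dataset item is one trajectory: states (T, s_dim), conds (T, c_dim), actions (T, a_dim).
import torch
import torch.nn as nn


class ToyPolicy(nn.Module):
    """Stand-in for pi_theta(a_t | s_{t-k..t}, Phi_{t-k..t}); the paper uses a causal transformer."""

    def __init__(self, s_dim, c_dim, a_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(s_dim + c_dim, hidden), nn.ReLU(), nn.Linear(hidden, a_dim)
        )

    def forward(self, states, conds):
        # Predicted action means for every timestep of the trajectory.
        return self.net(torch.cat([states, conds], dim=-1))


def train(policy, trajectories, epochs=10, lr=1e-4):
    opt = torch.optim.AdamW(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for states, conds, actions in trajectories:
            pred = policy(states, conds)
            # Negative log-likelihood of a fixed-variance Gaussian is a scaled MSE.
            loss = ((pred - actions) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```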
## 4 Waypoint Generation
In this section, we propose using intermediate targets (or waypoints) as conditioning variables as an alternative to fixed targets, proposed in Emmons et al. (2021). Below, we motivate the necessity for waypoints in RvS and present a practical technique to generate waypoints for goal and reward-conditioned tasks in Sections 4.2 and 4.3 respectively.
### Illustrative Example
To motivate the benefits of using waypoints, consider an infinite-horizon, deterministic MDP with \(H+1\) states and two possible actions at non-terminal states. A graphical representation of the MDP is shown in Figure 1. For this scenario, we consider the goal-conditioned setting where the target goal state during train and test time is \(\omega=s^{(H)}\), and the episode terminates once we reach \(\omega\).
In offline RL, the data is often suboptimal for achieving the desired goal during testing. In this example, suppose we have access to a dataset \(\mathcal{D}\) that contains an infinite number of trajectories
Figure 1: Chain MDP to motivate the benefit of intermediate goals for conditional BC-based policy training.
collected by a random behavioral policy \(\pi_{b}\) where \(\pi_{b}(a_{t}=a^{(1)}\mid s_{t})=\lambda>0\) for all \(s_{t}\). Clearly, \(\pi_{b}\) is suboptimal with respect to reaching \(\omega\) in the least number of timesteps; in expectation, it takes \(\frac{H}{1-\lambda}\) timesteps to reach \(s^{(H)}\) instead of \(H\) (optimal) since the agent "stalls" at the current state with probability \(\lambda\) and moves to the next state with probability \(1-\lambda\).
Consider a global goal-conditioned policy \(\pi_{G}(a_{t}\mid s_{t},\omega)\) that is optimized using a behavioral cloning objective on \(\mathcal{D}\). Clearly, the optimal policy \(\pi_{G}^{*}(a_{t}\mid s_{t},\omega)=\pi_{b}(a_{t}\mid s_{t})\)\(\forall s_{t}\) since \(\omega=s^{(H)}\) is a constant. Hence, the global goal-conditioned policy \(\pi_{G}^{*}\) is as suboptimal as the behavioral policy \(\pi_{b}\).
Instead, suppose that we condition a policy \(\pi_{W}(a_{t}\mid s_{t},\Phi_{t})\) on an intermediate goal state \(\Phi_{t}=s_{t+K}\) for some chosen \(K<\frac{1}{1-\lambda}\) (the expected number of timesteps before \(\pi_{b}\) executes \(a^{(2)}\)), optimized using a behavioral cloning objective on \(\mathcal{D}\). For simplicity, suppose our target intermediate goal state \(\Phi_{t}\) for some current state \(s_{t}=s^{(h)}\) is simply the next state \(\Phi_{t}=s^{(h+1)}\). Based on data \(\mathcal{D}\) from \(\pi_{b}\), the probability of taking action \(a^{(2)}\) conditioned on the chosen \(\Phi_{t}\) and \(s_{t}\) is estimated as:
\[\Pr_{\pi_{b}}[a_{t}=a^{(2)}\mid s_{t}=s^{(h)},s_{t+K}=s^{(h+1)}] =\frac{\Pr_{\pi_{b}}[a_{t}=a^{(2)},s_{t+K}=s^{(h+1)}\mid s_{t}=s^{ (h)}]}{\Pr_{\pi_{b}}[s_{t+K}=s^{(h+1)}\mid s_{t}=s^{(h)}]}\] \[=\frac{(1-\lambda)\lambda^{K-1}}{\binom{K}{1}\left[(1-\lambda) \lambda^{K-1}\right]}=\frac{1}{\binom{K}{1}}=\frac{1}{K}.\]
Hence, for the optimal intermediate goal-conditioned policy \(\pi_{W}^{*}\) trained on \(\mathcal{D}\), the probability of choosing the optimal action \(a^{(2)}\) is:
\[\pi_{W}^{*}(a_{t}=a^{(2)}\mid s_{t}=s^{(h)},\Phi_{t}=s^{(h+1)})=\frac{1}{K}.\]
Since \(\pi_{G}^{*}(a_{t}=a^{(2)}\mid s_{t}=s^{(h)},\omega)=1-\lambda\) and we choose \(K\) such that \(\frac{1}{K}>1-\lambda\), we conclude:
\[\pi_{W}^{*}(a_{t}=a^{(2)}\mid s_{t},\Phi_{t})>\pi_{G}^{*}(a_{t}=a^{(2)}\mid s_ {t},\omega).\]
The complete derivation is presented in Appendix A. Based on this example, conditioning the actions on reaching a desirable intermediate state is more likely to result in taking the optimal action compared to a global goal-conditioned policy. Effectively, the conditioning acts as a "guide" for the policy, directing it toward desirable intermediate targets in order to reach the global goal.
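To make this argument concrete, the short simulation below (a hypothetical numerical check, not taken from the paper) estimates the conditional probability of choosing \(a^{(2)}\) empirically; with \(\lambda=0.8\) and \(K=3\) it returns roughly \(1/K\approx 0.33\), compared with \(1-\lambda=0.2\) for the global goal-conditioned policy. The function name and trial count are assumptions made for illustration.

```python
# Hypothetical Monte Carlo check of the chain-MDP argument above.
import random


def waypoint_action_prob(lam, K, trials=200_000, seed=0):
    """Estimate P(a_t = a2 | s_t = s_h, s_{t+K} = s_{h+1}) under the random behaviour policy pi_b,
    assuming s_h is far from the terminal state so the episode does not end within K steps."""
    rng = random.Random(seed)
    reached = advanced_first = 0
    for _ in range(trials):
        pos = 0
        first_is_advance = False
        for step in range(K):
            advance = rng.random() < (1.0 - lam)   # pi_b advances with probability 1 - lambda
            if step == 0:
                first_is_advance = advance
            pos += int(advance)
        if pos == 1:                                # trajectory consistent with the waypoint s_{h+1}
            reached += 1
            advanced_first += int(first_is_advance)
    return advanced_first / reached


lam, K = 0.8, 3
print(waypoint_action_prob(lam, K))   # ~1/K = 0.33, versus 1 - lam = 0.2 for the global goal
```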
### Intermediate Goal Generation for Spatial Compositionality
In this section, we address RvS's inability to "stitch" subsequences of suboptimal trajectories in order to achieve optimal behaviour, based on analyses in Kumar et al. (2022). In that pursuit, we introduce a technique to generate effective intermediate targets to better facilitate stitching and to guide the policy towards desirable outcomes, focusing primarily on goal-conditioned tasks.
Critically, the ability to stitch requires considering experiences that are more relevant to achieving appropriate short-term goals before reaching the global goal. To illustrate this, we show a maze navigation task from the AntMaze Large environment in Figure 2, where the evaluation objective is to reach a target location from the start location (Fu et al., 2020).
Analyzing the training trajectories that pass through either the start location (blue) or target location (red), less than 5% of trajectories extend beyond the stitching region into the other region, i.e., the target or start regions respectively. Since trajectories seldom pass through both the start and target regions, the policy must "stitch" together subsequences from the blue and red trajectories within the stitching region, where the trajectories overlap the most. By providing intermediate targets within this region, rather than conditioning solely on the global goal, we can guide the policy to connect the relevant subsequences needed to reach the target effectively.
Figure 2: antmaze-large-play-v2 task to navigate from the start location (circle) to the target location (star). Blue and red lines are training trajectories passing through the start or target locations respectively.
To obtain effective intermediate targets, we propose the goal waypoint network, explicitly designed to generate short-term, intermediate goals. Similar to the illustrative example in Section 4.1, the purpose of these intermediate targets is to guide the policy network \(\pi_{\theta}\) towards states that lead to the desired global goal by facilitating stitching of relevant subsequences.
To that end, we represent the goal waypoint network \(W_{\phi}\), parameterized by \(\phi\), as a neural network that makes approximate \(K\)-step predictions of future observations conditioned on the current state, \(s_{t}\), and the target goal, \(\omega\). Formally, we attempt to minimize the objective in Equation 1 across the same offline dataset \(\mathcal{D}\), where \(L_{\phi}\) is a mean-squared error (MSE) loss for continuous state spaces:
\[\arg\min_{\phi}\sum_{\tau\in\mathcal{D}}L_{\phi}(W_{\phi}(s_{t},\omega),s_{t+ K}). \tag{1}\]
While our approach to intermediate target generation seems simple in relation to the complex problem of modeling both the behavioral policy and transition dynamics, our goal is to provide approximate short-term goals to facilitate the downstream task of reaching the global goal \(\omega\), rather than achieving perfect predictions of future states under the behavioral policy.
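A minimal sketch of the goal waypoint network and the objective in Equation 1 is given below; the two-hidden-layer MLP, the layer widths, and the `waypoint_loss` helper are illustrative assumptions rather than the exact architecture used in the paper.

```python
# Hypothetical sketch of the goal waypoint network W_phi trained with Equation 1.
import torch
import torch.nn as nn


class GoalWaypointNetwork(nn.Module):
    """Predicts an approximate K-step-ahead state from the current state and the global goal."""

    def __init__(self, state_dim, goal_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))


def waypoint_loss(net, s_t, goal, s_t_plus_K):
    """MSE objective of Equation 1: match the state observed K steps later in the offline data."""
    return ((net(s_t, goal) - s_t_plus_K) ** 2).mean()
```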
### Proxy Reward Generation for Bias-Variance Reduction
In this section, we address the high bias and variance of conditioning variables used by prior RvS methods in reward-conditioned tasks, such as Emmons et al. (2021) and Chen et al. (2021). Analogously to Section 4.2 (i.e., for goal-conditioned tasks), we propose a technique to generate intermediate reward targets for reward-conditioned tasks to mitigate these issues.
Existing methods rely on either an initial cumulative reward-to-go (desired return) or an average reward-to-go target, denoted as \(\omega\). Importantly, the former is updated using rewards obtained during the rollout, while the latter remains constant over time (Emmons et al., 2021; Chen et al., 2021). However, using these conditioning variables during evaluation gives rise to two main issues: (a) the Monte Carlo estimate of return used to compute the cumulative reward-to-go exhibits high variance and (b) the constant average reward-to-go target introduces high bias over time. Based on our analyses of the bias and variance of these approaches in Appendix C, we observe that these issues contribute to decreased performance and stability across runs when evaluating RvS methods.
Although a potential approach to mitigate these issues is to leverage TD learning, such as the Q-Learning Transformer (Yamagata et al., 2022), we restrict our work to RvS methods utilizing behavioral cloning objectives. To address the aforementioned concerns, we introduce a reward waypoint network denoted as \(W_{\phi}\), parameterized by \(\phi\). This network predicts the average and cumulative reward-to-go (ARTG, CRTG) conditioned on the return, \(\omega\), and current state, \(s_{t}\), using offline data \(\mathcal{D}\). To optimize this network, we minimize the objective shown in Equation 2 using an MSE loss:
\[\arg\min_{\phi}\sum_{\tau\in\mathcal{D}}\left(\left[\tfrac{1}{T-t}\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}}r_{t^{\prime}}\quad\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}}r_{t^{\prime}}\right]^{\top}-W_{\phi}(s_{t},\omega)\right)^{2}. \tag{2}\]
By modeling both ARTG and CRTG, we address the high bias of a constant ARTG target and reduce the variance associated with Monte Carlo estimates for CRTG. The construction of the reward waypoint network is similar in motivation and prediction task to a baseline network, used to mitigate high variance in methods like REINFORCE (Sutton and Barto, 1999). However, the distinguishing feature of our reward waypoint network lies in its conditioning on the return, which allows for favorable performance even on suboptimal offline datasets.
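A sketch of the reward waypoint network and of how the ARTG/CRTG regression targets in Equation 2 could be computed from a stored trajectory is shown below; the network width, the single-scalar encoding of \(\omega\), and the `reward_targets` helper are assumptions made for illustration, not the paper's exact configuration.

```python
# Hypothetical sketch of the reward waypoint network trained with Equation 2.
import torch
import torch.nn as nn


class RewardWaypointNetwork(nn.Module):
    """Predicts [ARTG, CRTG] from the current state and the target return omega (shape (..., 1))."""

    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),   # outputs: average and cumulative reward-to-go
        )

    def forward(self, state, omega):
        return self.net(torch.cat([state, omega], dim=-1))


def reward_targets(rewards, t, gamma=1.0):
    """Discounted ARTG/CRTG regression targets at timestep t, computed from a stored trajectory."""
    T = len(rewards)
    crtg = sum(gamma ** tp * rewards[tp] for tp in range(t, T))
    artg = crtg / max(T - t, 1)
    return artg, crtg
```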
## 5 Waypoint Transformer
We propose the waypoint transformer (WT), a transformer-based offline RL method that leverages the proposed waypoint network \(W_{\phi}\) and a GPT-2 architecture based on multi-head attention (Radford et al., 2019). The WT policy \(\pi_{\theta}\) is conditioned on past states \(s_{t-k..t}\) and waypoints (either generated goals or rewards) \(\Phi_{t-k..t}=W_{\phi}(s_{t-k..t},\omega)\) with a context window of size \(k\), as shown in Figure 3.
Figure 3: Waypoint Transformer architecture, where \(\Phi_{t}=W_{\phi}(s_{t},\omega)\) represents the output of the goal or reward waypoint network.
We train the goal/reward waypoint network \(W_{\phi}\) on offline dataset \(\mathcal{D}\) independently of the policy. To train the WT policy, we use Algorithm 1 to iteratively optimize its parameters \(\theta\). During this process, the trained weights \(\phi\) of \(W_{\phi}\) are frozen to ensure the interpretability of the waypoint network's generated goal and reward waypoints. To further simplify the design and improve computational efficiency, the WT is not conditioned on past actions \(a_{t-k..t}\) (i.e., unlike the DT), and we concatenate \(\Phi_{t}\) with \(s_{t}\) to produce one token per timestep \(t\) instead of multiple tokens as proposed in Chen et al. (2021).
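The sketch below illustrates this token construction and the causal backbone, with the waypoint \(\Phi_{t}\) concatenated to \(s_{t}\) to form one token per timestep; the use of `nn.TransformerEncoder` in place of a GPT-2 implementation, the layer sizes, and the deterministic action head are assumptions made for illustration rather than the paper's exact configuration.

```python
# Hypothetical sketch of the WT policy: one token per timestep from [s_t; Phi_t],
# passed through a small causal transformer with an action head.
import torch
import torch.nn as nn


class WaypointTransformer(nn.Module):
    def __init__(self, state_dim, cond_dim, act_dim, d_model=128, n_layers=2, n_heads=4, k=10):
        super().__init__()
        self.embed = nn.Linear(state_dim + cond_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(k, d_model))     # learned positional embeddings
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           dropout=0.15, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, states, conds):                         # shapes: (B, T<=k, ...)
        T = states.size(1)
        tokens = self.embed(torch.cat([states, conds], dim=-1)) + self.pos[:T]
        # Causal mask so each timestep attends only to the past within the context window.
        mask = torch.triu(torch.full((T, T), float("-inf"), device=states.device), diagonal=1)
        h = self.backbone(tokens, mask=mask)
        return self.head(h)                                   # action prediction at every timestep
```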
## 6 Experiments
We present a series of evaluations of WT across tasks involving reward and goal-conditioning, with comparisons to prior offline RL methods. For this, we leverage D4RL, an open-source benchmark for offline RL, consisting of varying datasets for tasks from Gym-MuJoCo, AntMaze, and FrankaKitchen (Fu et al., 2020).
Tasks in the AntMaze and FrankaKitchen environments have presented a challenge for offline RL methods as they contain little to no optimal trajectories and perform critical evaluations of a model's stitching ability (Fu et al., 2020). Specifically, in FrankaKitchen, the aim is to interact with a set of kitchen items to reach a target configuration, but the partial and mixed offline datasets consist of suboptimal, undirected data, where the demonstrations are unrelated to the target configuration. Similarly, AntMaze is a maze navigation environment with sparse rewards, where the play and diverse datasets contain target locations unaligned with the evaluation task. For our experiments on these environments, we use goal-conditioning on the target goal state (i.e., \(\omega=s_{\mathrm{target}}\)).
Gym-MuJoCo serves as a popular benchmark for prior work in offline RL, consisting of environments such as Walker2D, HalfCheetah, and Hopper. Importantly, we evaluate across offline datasets with varying degrees of optimality by considering the medium, medium-replay, and medium-expert datasets (Fu et al., 2020). For these tasks, we use reward-conditioning given a target return.
Across all environments and tasks, we use the same set of hyperparameters, as reported in Appendix B. To measure the stability (i.e., variability) in our method across random initializations, we run each experiment across 5 seeds and report the mean and standard deviation.
### Comparing WT with Prior Methods
To evaluate the performance of WT, we perform comparisons with prior offline RL methods, including conditional BC methods such as DT and RvS-R/G; value-based methods such as Onestep RL (Brandfonbrener et al., 2021), TD3 + BC (Fujimoto and Gu, 2021), CQL, and IQL; and standard BC baselines.
Table 1: Normalized scores (mean \(\pm\) standard deviation across seeds) for BC, DT, RvS-R/G, Onestep RL, TD3 + BC, CQL, IQL, and WT on the Gym-MuJoCo, AntMaze, and FrankaKitchen tasks.
For all methods except DT, we use reported results from Emmons et al. (2021) and Kostrikov et al. (2021). We evaluate DT using the official implementation provided by Chen et al. (2021) across 5 random initializations, though we are unable to reproduce some of their results.
Table 1 shows the results of our comparisons to prior methods. Aggregated across all tasks, WT (71.4 \(\pm\) 2.8) improves upon the next best method, IQL (68.3 \(\pm\) 6.9), with respect to average normalized score and achieves equal-best runtime. In terms of variability across seeds, there is a notable reduction compared to IQL and most other methods.
In the most challenging tasks requiring stitching, our method demonstrates performance far exceeding the next best method, IQL. On the AntMaze Large datasets, WT demonstrates a significant relative percentage improvement of 83.1% (play) and 51.6% (diverse). On Kitchen Partial and Mixed, the improvement is 37.8% and 39.0% respectively. WT's standard deviation across seeds is reduced by a factor of more than 2x compared to IQL for these tasks.
Similarly, on reward-conditioned tasks with large performance gaps between BC and value-based methods such as hopper-medium-replay-v2, WT demonstrates increased average performance by 105.3% compared to DT and 21.0% compared to RvS-R, with standard deviation reduced by a factor of 10.0x and 5.3x respectively.
### Utility of Waypoint Networks
To analyze the utility and behavior of waypoint networks, we qualitatively evaluate an agent's performance across rollouts of trained transformer policies on antmaze-large-play-v2. For this analysis, we consider a WT policy (using a goal waypoint network with \(K=30\)) and a global goal-conditioned transformer policy (i.e., no intermediate goals). Across both models, the architecture and hyperparameters for training are identical.
The ant's locations across 100 rollouts of a WT policy (Figure 3(a)) and a global goal-conditioned transformer policy (Figure 3(b)) demonstrate that WT shows notably higher ability and consistency in reaching the goal location. Specifically, without intermediate goals, the ant occasionally turns in the wrong direction and demonstrates a lesser ability to successfully complete a turn based on the reduction of density at each turn (Figure 3(b)). Consequently, the WT achieves more than twice the evaluation return (72.5 \(\pm\) 2.8) compared to the global goal-conditioned policy (33.0 \(\pm\) 10.3) and completes the task more quickly on average (Figure 3(d)).
Based on Figure 3(c), we observe that the goal waypoint network provides goals that correspond to the paths traversed in Figure 3(a) for the WT policy. This shows that the waypoint network successfully guides the model toward the target location, addressing the stitching problem proposed in Figure 2. While the global goal-conditioned policy is successful in passing beyond the stitching region into the target region in only 45% of the rollouts, accounting for 82% of its failures to reach the target, WT is successful in this respect for 87% of rollouts.
### Ablation Studies
**Goal-Conditioned Tasks.** On goal-conditioned tasks, we examine the behavior of the goal waypoint network as it relates to the performance of the policy at test time by ablating aspects of its configuration and training. For this analysis, we consider antmaze-large-play-v2, a challenging task that critically evaluates the stitching capability of offline RL techniques.
Figure 4: Shows the ant’s location across 100 rollouts of **(a)** a WT policy and **(b)** a global goal-conditioned transformer policy; **(c)** generated intermediate goals by the waypoint network \(W_{\phi}\), **(d)** the proportion of all successful runs completed by timestep \(t\).
To understand the effect of the configuration of the goal waypoint network on test performance, we ablate two variables relevant to generating effective intermediate goals: the temporal proximity of intermediate goals (\(K\)) and the validation loss of the goal waypoint network. Additionally, we perform comparisons between the goal waypoint network and manually constructed waypoints, for which the methodology and results are shown in Appendix D.
The normalized score attained by the agent is shown as a function of \(K\) and the validation loss of the goal waypoint network in Figure 5. For this environment and dataset, an ideal choice for \(K\) is around 30 timesteps. For all nonzero \(K\), the performance is reduced at a reasonably consistent rate on either side of \(K=30\). Importantly, when \(K=0\) (i.e., no intermediate goals), there is a notable reduction in performance compared to all other choices of \(K\); compared to the optimal \(K=30\), the score is reduced by a factor of 2.2x.
In Figure 5 (right), the normalized score shows negligible change for values of held-out RMSE between 0.4 and 0.6, corresponding to at least 1,000 gradient steps or roughly 30 sec of training, with a sharper decrease thereafter. As the RMSE increases to over 1, we observe a relative plateau in performance near an average normalized score of 35-45, roughly corresponding to performance without using a waypoint network (i.e., \(K=0\) in Figure 5 (left)).
**Reward-Conditioned Tasks.** On reward-conditioned tasks, we ablate the choice of different reward-conditioning techniques. Specifically, we examine the performance of WT and the variance of the reward waypoint network in comparison to CRTG updated using the rewards obtained during rollouts and a static ARTG (i.e., as done in Chen et al. (2021) and Emmons et al. (2021)). We consider the hopper-medium-replay-v2 task for this analysis as there is (a) a large performance gap between RvS and value-based methods, and (b) high instability across seeds for RvS methods (e.g., DT) as shown in Table 1. For all examined reward-conditioning techniques, the transformer architecture and training procedure are identical, and the target (normalized) return is 95, corresponding to SOTA performance.
To examine the distribution of normalized scores across different seeds produced by each of the described reward-conditioning techniques, we construct performance profiles, displaying the proportion of runs greater than a certain normalized score (Agarwal et al., 2021). As shown in Figure 6 (left), WT demonstrates increased performance and stability across random initializations compared to the remaining reward-conditioning techniques.
Additionally, we perform an analysis to determine whether using a reward waypoint network to predict the CRTG, as opposed to updating the CRTG using attained rewards as in Chen et al. (2021), affects the variability of the conditioning variable passed to the policy network (i.e., not of the performance, which is examined in Figure 6). Importantly, to account for performance differences between the policies trained with either method that may influence the variability of the attained CRTG, we sample a subset of runs for both methods such that the average performance is constant. Based on Figure 6 (right), it is clear that, as a function of the timestep and when accounting for differences in average performance, the standard deviation in the CRTG predicted by WT grows at a slower rate compared to updating CRTG with attained rewards.
Figure 5: Normalized score attained by WT on antmaze-large-play-v2 based on varying **left**: the temporal proximity of generated goals, \(K\), and **right**: goal waypoint network RMSE on a held-out dataset.
Figure 6: Comparison of different reward-conditioning methods on hopper-medium-replay-v2. **Left**: Performance profiles for transformers using ARTG, CRTG, and WT across 5 random seeds. **Right**: Standard deviation in CRTG inputted to the model when updated with attained rewards (\(\texttt{CRTG}_{t}=\omega-\sum_{t}\gamma^{t}r_{t}\)) and using predictions from the reward waypoint network (\(W_{\phi}(s_{t},\omega)_{2}\)) when average return is approximately held constant.
**Transformer Configuration.** Based on the work in Emmons et al. (2021), we balance between expressiveness and regularization to maximize policy performance. We ablate the probability of node dropout \(p_{\mathrm{drop}}\) and the number of transformer layers \(L\). To further examine this balance, we experiment with conditioning on past actions \(a_{t-k..t-1}\), similarly to the DT, to characterize its impact on performance and computational efficiency. Similarly to previous sections, we consider antmaze-large-play-v2.
Based on Table 2, we observe that the sensitivity to the various ablated hyperparameters is relatively low in terms of performance, and removing action conditioning results in reduced training time and increased performance. In the context of prior RvS work where dropout (\(p_{\mathrm{drop}}=0.1\)) decreased performance compared to no dropout (\(p_{\mathrm{drop}}=0.0\)) by 1.5-3x on AntMaze, the largest decrease in average performance on WT is only by a factor of 1.1x (Emmons et al., 2021).
## 7 Discussion
In this study, we address the issues with existing conditioning techniques used in RvS, such as the "stitching" problem associated with global goals and the high bias and variance of reward-to-go targets, through the automatic generation of intermediate targets. Based on empirical evaluations, we demonstrate significantly improved performance and stability compared to existing RvS methods, often on par with or outperforming TD learning methods. Especially on challenging tasks with suboptimal dataset composition, such as AntMaze Large and Kitchen Partial/Mixed, the guidance provided by the waypoint network through intermediate targets (e.g., as shown in Figure 4) significantly improves upon existing state-of-the-art performance.
We believe that this work can present a pathway forward to developing practical offline RL methods leveraging the simplicity of RvS and exploring more effective conditioning techniques, as formalized by Emmons et al. (2021). In addition to state-of-the-art performance, we demonstrate several desirable practical qualities of the WT: it is less sensitive to changes in hyperparameters, significantly faster to train than prior RvS work, and more consistent across initialization seeds.
However, despite improvements across challenging tasks, WT's margin of improvement on AntMaze U-Maze and Kitchen Complete (i.e., easier tasks) is lower. We believe this is likely because stitching is less necessary in such tasks than in more difficult ones. Further characterizing the performance of WT on such tasks is an interesting direction for future work. In addition, similar to other RvS work such as Chen et al. (2021) and Emmons et al. (2021), we tune the target return at test time for reward-conditioned tasks using a grid search, which is not as principled or computationally efficient as would be desirable.
## 8 Conclusion
We propose a method for reinforcement learning via supervised learning, the Waypoint Transformer, conditioned on generated intermediate targets for reward- and goal-conditioned tasks. We show that RvS with waypoints significantly surpasses existing RvS methods and performs on par with or surpasses previous state-of-the-art methods across a wide range of tasks from Gym-MuJoCo, AntMaze, and Kitchen. With improved stability across runs and competitive computational efficiency, we believe that our method advances the performance and applicability of RvS within the context of offline RL.
\begin{table}
\begin{tabular}{c c} \hline \hline
\(p_{\mathrm{drop}}\) & **Score** \\ \hline
0.000 & 68.3 \(\pm\) 5.9 \\
**0.075** & 70.8 \(\pm\) 4.5 \\
**0.150** & 72.5 \(\pm\) 2.8 \\ \hline \hline
\end{tabular}
\begin{tabular}{c c c} \hline \hline
**Conditioning** & **Score** & **Runtime** \\ \hline
\((s_{t-k..t},a_{t-k..t-1})\) & 66.5 \(\pm\) 5.6 & 30 min \\
**(no actions)** & 72.5 \(\pm\) 2.8 & 20 min \\ \hline \hline
\end{tabular}
\begin{tabular}{c c} \hline \hline
\(L\) & **Score** \\ \hline
1 & 72.1 \(\pm\) 5.7 \\
**2** & 72.5 \(\pm\) 2.8 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Ablation study of transformer configuration with normalized score on antmaze-large-play-v2, including dropout probability (\(p_{\mathrm{drop}}\)), number of transformer layers (\(L\)), and conditioning on actions, where bolded selections are used for final models. |
2310.16198 | Effects of the size and concentration of depleting agents on the
stabilization of the double-helix structure and DNA condensation: a single
molecule force spectroscopy study | We perform a single molecule force spectroscopy study to characterize the
role of the size (molecular weight) and concentration of depleting agents on
DNA condensation and on the stabilization of the double-helix structure,
showing that important features such as the threshold concentration for DNA
condensation, the force in which the melting plateau occurs and its average
length strongly depend on the depletant size chosen. Such results are
potentially important to understand how the presence of surrounding
macromolecules influences DNA stabilization inside living cells and therefore
advance in the understanding of the crowded cell environment on DNA-related
functions. | R. M. de Oliveira, M. S. Rocha | 2023-10-24T21:31:00Z | http://arxiv.org/abs/2310.16198v1 | Effects of the size and concentration of depleting agents on the stabilization of the double-helix structure and DNA condensation: a single molecule force spectroscopy study.
###### Abstract
We perform a single molecule force spectroscopy study to characterize the role of the size (molecular weight) and concentration of depleting agents on DNA condensation and on the stabilization of the double-helix structure, showing that important features such as the threshold concentration for DNA condensation, the force in which the melting plateau occurs and its average length strongly depend on the depletant size chosen. Such results are potentially important to understand how the presence of surrounding macromolecules influences DNA stabilization inside living cells and therefore advance in the understanding of the crowded cell environment on DNA-related functions.
keywords: Polyethylene-glycol (PEG), DNA, single molecule force spectroscopy, optical tweezers, depletion interactions +
Footnote †: journal:
## 1 Introduction
Macromolecular crowding is an important phenomenon related to the presence of a relatively high concentration of macromolecules in solution, which in general drastically modify its properties [1]. In fact, the presence
of such macromolecules at high concentrations reduces the available volume inside solutions, interfering on the solvent activity and the thermodynamics of all other solute components. From a biological perspective, the intracellular medium is a crowded environment due to the presence of many types of proteins, sugars, nucleic acids and others. These molecules interfere in the relevant chemical reactions that occur inside cells, including the interactions involving nucleic acids, proteins and ligands [2; 3]. Therefore, to fully understand such interactions, they should be preferentially studied _in vitro_ using a crowded solution that mimics the intracellular medium.
In particular, when the crowding macromolecules are relatively large, depletion interactions can play a fundamental role in many colloidal and polymer solutions, modulating the general behavior of solute particles diluted in a given solvent containing these "depletant agents". In general, such depletants present intermediate sizes between the solute and the solvent molecules, promoting relevant volume-exclusion effects and thus allowing the depletion phenomena to occur efficiently [4; 5]. They can lead for example to particle aggregation and consequently to phase separation phenomena [6; 4]. Common depletant agents used in various _in vitro_ studies are neutral polymers, proteins, micelles, osmolytes and others [4; 7].
Concerning specifically DNA solutions, it is well known that depletants such as neutral polymers [8; 9; 10; 11] and proteins [12; 10; 13] can promote relevant depletion interactions between different DNA segments and, depending on the depletant concentration and ionic composition of the solution, lead to the aggregation of different DNA molecules (in concentrated DNA solutions) or, alternatively, to the collapse of a single DNA molecule by effective segment-segment attraction (in dilute DNA solutions), a phenomenon known as "polymer-salt-induced" (psi, \(\psi\)) condensation [9; 10; 14].
DNA \(\psi\)-condensation is a very important phenomenon from various different perspectives. From the biological point of view, the high concentration of depletants around collapsed DNA molecules in solution simulates two key features found inside cells: the crowded aspect of such environment as well the compacted DNA shape itself, which is somewhat similar to what occurs in prokaryotes [15; 16; 3]. From the physicochemical point of view, on the other hand, such a system can be used for studying _in vitro_ interesting phenomena, _e. g._, the coil-globule transition that results in the condensation process itself [17; 18] and furthermore, the effects of DNA condensation on its interactions with other compounds such as chemotherapeutic drugs [2; 3].
Although many aspects behind DNA \(\psi\)-condensation and depletion inter
actions (the driving force behind the phenomenon) are nowadays well understood and characterized, to the best of our knowledge a study concerning the role of depletion interactions on the stabilization of the secondary structure of the double-helix, performed at the single molecule level, is still lacking. Furthermore, the effects of the size of the depleting agents on the condensation process itself were not characterized at the single molecule level either, but only in a few bulk studies under specific conditions [9; 11]. Here we fill these gaps by performing single molecule force spectroscopy with DNA molecules under various different crowded solutions, using optical tweezers (OT). We use the neutral polymer polyethylene-glycol (PEG) with different molecular weights as the depletant molecule, varying its concentration in a fixed solvent (a phosphate buffered saline (PBS) buffer). The experiments have shown that the depletant presents a strong size-dependent tendency to condense DNA and to stabilize the double-helix structure, being able to hinder the well-known force-induced melting plateau that occurs at \(\sim\) 65 pN depending on its molecular weight and concentration. Such results are potentially important to understand how the presence of surrounding macromolecules influences DNA stabilization inside living cells and therefore advance in the understanding of the crowded cell environment on DNA-related functions.
## 2 Materials and Methods
The samples prepared for single molecule OT assays consist of biotin-labeled \(\lambda\)-DNA (48,502 base-pairs, \(\sim\) 16.5 \(\mu\)m contour length, New England Biolabs) attached by the ends to a streptavidin-coated polystyrene bead (3 \(\mu\)m diameter, Bangs Labs) and to a streptavidin-coated glass coverslip (used to construct the sample chamber). The samples are firstly prepared only with the DNA molecules in a Phosphate Buffered Saline (PBS) buffer composed of 4.375 mM of Na\({}_{2}\)HPO\({}_{4}\), 1.25 mM of NaH\({}_{2}\)PO\({}_{4}\) and 140 mM of NaCl (ionic strength \(I\) = 154 mM).
The experimental procedure is then performed as follows. Firstly, a particular DNA molecule is chosen and tested for integrity by performing force-extension measurements in the low-force entropic regime (\(<\) 5 pN), measuring the persistence and contour lengths to verify if the values of such parameters are within the expected values [19; 20]. Then, PEG is introduced in the sample chamber at a desired concentration, and we wait at least 20 minutes for equilibration. Finally, the same DNA molecule previously tested is stretched from an initial extension of 5 \(\mu\)m until reaching high stretching
forces in the enthalpic (elastic) regime (some tens of pN), verifying how the presence of PEG in the sample modifies the expected force-extension curve of the biopolymer and, in particular, the melting plateau found at \(\sim\) 65 pN for bare DNA molecules in our buffer.
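For reference, a minimal sketch of the kind of worm-like chain fit used in the integrity check above is given below, assuming the standard Marko-Siggia interpolation formula and a SciPy least-squares fit; the synthetic data, initial guesses, and fitting routine are illustrative assumptions, not the exact analysis pipeline used in this work.

```python
# Hypothetical sketch of a WLC fit to low-force (< 5 pN) stretching data,
# assuming the Marko-Siggia interpolation formula.
import numpy as np
from scipy.optimize import curve_fit

KBT = 4.11  # thermal energy at room temperature, in pN*nm


def wlc_force(z, Lp, L0):
    """Marko-Siggia interpolation: force (pN) as a function of extension z (nm),
    persistence length Lp (nm) and contour length L0 (nm)."""
    x = np.clip(z / L0, 0.0, 0.99)
    return (KBT / Lp) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)


# Synthetic example data standing in for a measured force-extension curve.
z_data = np.linspace(2_000, 15_000, 50)            # extensions in nm
f_data = wlc_force(z_data, 50.0, 16_500.0)         # forces in pN (noise-free for illustration)

popt, _ = curve_fit(wlc_force, z_data, f_data, p0=[40.0, 17_000.0])
Lp_fit, L0_fit = popt   # expected near ~50 nm and ~16.5 um for bare lambda-DNA
print(Lp_fit, L0_fit)
```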
## 3 Results and Discussion
In Fig. 1 we show some representative force-extension curves (FECs) for various different PEG concentrations with molecular weights of \(\sim\) 2,000 g/mol (PEG2k, panel _a_), \(\sim\) 8,000 g/mol (PEG8k, panel _b_) and \(\sim\) 20,000 g/mol (PEG20k, panel _c_). The PEG concentrations are expressed here as mass fractions (% m/m) between the PEG mass and the total solution mass. Observe that for bare \(\lambda\)-DNA (without PEG in the sample) the FEC exhibits the expected behavior well described by the Worm-Like Chain (WLC) model, with the melting plateau at \(\sim\) 65 pN, where the biopolymer has its contour length increased by about \(\sim\) 1.7\(\times\) during the melting transition (_red circles_ in all panels).
For PEG2k and PEG8k the qualitative behavior is similar: the FECs readily lose the melting plateau, which becomes rare, appearing sporadically only in a few curves; and, in those cases, it is much smaller in "length" (the plateau horizontal extension, in micrometers) and much higher in "height" (the average force where the plateau occurs, in pN) than the results found for bare DNA (\(\sim\) 12 \(\mu\)m length, \(\sim\) 65 pN), as can be noted in Fig. 1\(a\) and \(b\). Furthermore, observe that the maximum DNA extension reached for a given force decreases when PEG is present, confirming the condensation phenomenon. In the case of PEG2k (Fig. 1_a_), the condensation typically occurs in the PEG concentration range of 40-70% m/m, much higher than the corresponding range found for PEG8k (Fig. 1_b_), which is 10-26% m/m. Such a result is in agreement with previous single molecule and bulk studies performed with various types of DNA depletants [12; 13; 21] and attests to the important role of the PEG size in the DNA condensation process. In the case of PEG20k (Fig. 1_c_), the behavior is qualitatively different: here the melting plateau is much more common and disappears gradually as the PEG concentration is increased in the sample. At 10% of PEG20k (_blue squares_ in Fig. 1_c_), observe that the melting plateau already starts to disappear: the average force needed to promote DNA melting increases and the plateau length decreases, indicating that the melting process had become more difficult. Such a situation is intensified for higher PEG20k concentrations (see for example
Figure 1: Representative force-extension curves (FECs) for various different PEG concentrations with molecular weights of \(\sim\) 2,000 g/mol (PEG2k, panel _a_), \(\sim\) 8,000 g/mol (PEG8k, panel _b_) and \(\sim\) 20,000 g/mol (PEG20k, panel _c_). The PEG concentrations are expressed here as mass fractions (% m/m) between the PEG mass and the total solution mass.
14% m/m; _green diamonds_ in Fig. 1_c_) and, finally, at 18% of PEG20k the plateau completely disappears, showing that the PEG-induced depletion interactions have hindered the force-induced melting transition. Observe that PEG20k also promoted the expected DNA compaction, with the measured extension for a given force decreasing with the PEG concentration in the sample, as can be clearly seen in Fig. 1\(c\).
In order to get a better overview of the DNA condensation process promoted by PEG under the different studied situations, in Fig. 2 we show the DNA extension measured at two characteristic forces (20 pN and 60 pN) as a function of the PEG concentration for the three different PEGs used: PEG2k (panel _a_), PEG8k (panel _b_) and PEG20k (panel _c_). Such data were directly extracted from the measured FECs (some examples of which were already shown in Fig. 1). These data explicitly show the very different qualitative behavior already mentioned concerning the DNA condensation process by PEG: for the two PEGs with lower molecular weights (2k and 8k) the DNA extension decreases abruptly as a function of the PEG concentration, remaining practically constant for concentrations \(>40\%\) m/m (PEG2k, Fig. 2_a_) and \(>10\%\) m/m (PEG8k, Fig. 2_b_). For PEG20k, on the other hand, the decrease of the DNA extension as a function of the PEG concentration is much smoother, occurring gradually (Fig. 2_c_). Unfortunately, it was not possible to increase the PEG20k concentration further in order to clearly observe a saturation of the DNA extension decrease, due to the limited solubility of this high molecular weight PEG.
Finally, in Fig. 3 we show the average results found for some key quantities concerning the characterization of the PEG-induced DNA condensation process and the modification of the melting plateau already mentioned. Panel \(a\) complements the analysis performed in the former figure, showing the average threshold concentration for DNA condensation (calculated as the average over the concentration range where DNA condenses) as a function of the PEG molecular weight. Observe that the higher the molecular weight, the lower the concentration needed to condense DNA in solution, a result that corroborates previous works concerning depletion interactions and DNA \(\psi\)-condensation [9]. Panel \(b\) shows the average force at which the melting plateau occurs as a function of the PEG concentration. We were able to obtain some data for PEG20k and PEG8k in this case. Observe that, independently of the molecular weight, the melting plateau tends to occur at higher forces when one increases the PEG concentration in the sample, indicating that the depletion interactions promoted by the neutral
Figure 2: DNA extension measured at two characteristic forces (20 pN and 60 pN) as a function of PEG concentration for the three different PEGs used: PEG2k (panel _a_), PEG8k (panel _b_) and PEG20k (panel _c_). These data explicitly show the very different qualitative behavior concerning the DNA condensation process induced by PEG: for the two PEGs with lower molecular weights (2k and 8k) the DNA extension decreases abruptly as a function of the PEG concentration, remaining practically constant for concentrations \(>40\%\) m/m (PEG2k) and \(>10\%\) m/m (PEG8k). For PEG20k, on the other hand, the decrease of the DNA extension as a function of the PEG concentration is much smoother, occurring gradually.
polymer in solution stabilizes the DNA double-helix structure, hindering the denaturation of the biopolymer. Such a conclusion is confirmed by the data shown in panel \(c\), where we plot the average length of the melting plateau, in micrometers, as a function of the PEG concentration. Observe that in this case such length rapidly decreases when the PEG concentration is increased, which shows that the denatured DNA portion strongly decreases for higher PEG concentrations, therefore confirming the stabilization of the double-helix structure against force-induced melting. Such a result also corroborates studies performed using temperature-induced melting assays, which are the straightforward bulk method for characterizing the melting of double-stranded nucleic acids and have indicated that PEGs with molecular weights larger than 1,000 g/mol stabilize DNA and RNA duplexes against temperature-induced melting [22, 23, 24, 25].
In summary, the results presented here allows us to draw some important information on the nature of the DNA structure. The results presented here are shown in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. The results are shown in the same figure as the ones in Fig. 3. 
These results allowed us to draw important new conclusions concerning the DNA \(\psi\)-condensation process and the stabilization of the double-helix structure against force-induced melting due to the depletion interactions promoted by the presence of PEG in solution. Key quantities such as the average threshold concentration for DNA condensation to occur, the average force at which the melting plateau occurs and the average length of this plateau strongly depend on the PEG size (molecular weight) chosen: higher molecular weight PEGs (20k g/mol) promote a more gradual condensation and melting stabilization, while lower molecular weight PEGs (2k and 8k g/mol) tend to interact more directly with DNA via depletion interactions, rapidly condensing the biopolymer and stabilizing the double-helix structure against force-induced melting, although higher PEG concentrations are needed in this case. This occurs because lower size PEGs promote weaker volume-exclusion effects and, although such a feature decreases their ability to condense DNA (requiring higher concentrations), it allows a higher number of molecules to stay close to the double-helix in solution, stabilizing this structure against melting.
## 4 Conclusions
We performed single molecule force spectroscopy with optical tweezers to study the effects of the size (molecular weight) and concentration of depleting agents on the DNA \(\psi\)-condensation and on the stabilization of the double-helix structure under force-induced melting. To perform such a study we used the classic neutral polymer polyethylene-glycol (PEG) as the depleting agent and \(\lambda\)-DNA in a simple phosphate buffered saline solution with a fixed ionic strength of physiological relevance. The results achieved have shown that the intrinsic details related to the DNA condensation process (average threshold concentration for DNA condensation) and the stabilization of the double-helix structure (average force at which the melting plateau occurs, average length of this plateau) strongly depend on the PEG size chosen. In particular, higher molecular weight PEGs (20k g/mol) promote a more gradual condensation and melting stabilization, while lower molecular weight PEGs (2k and 8k g/mol) tend to interact more directly with DNA via depletion interactions, rapidly condensing DNA and stabilizing the double-helix structure against force-induced melting, although higher PEG concentrations are needed in this case.
## 5 Acknowledgements
This research was funded by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq); Fundacao de Amparo a Pesquisa do Estado de Minas Gerais (FAPEMIG); and Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES) - Finance Code 001.
|
2305.05434 | RAAD: LIGHT-1 CubeSat's Payload for the Detection of Terrestrial
Gamma-Ray Flashes | The Rapid Acquisition Atmospheric Detector (RAAD), onboard the LIGHT-1 3U
CubeSat, detects photons between hard X-rays and soft gamma-rays, in order to
identify and characterize Terrestrial Gamma Ray Flashes (TGFs). Three detector
configurations are tested, making use of Cerium Bromide and Lanthanum
BromoChloride scintillating crystals coupled to photomultiplier tubes or
Multi-Pixel Photon Counters, in order to identify the optimal combination for
TGF detection. High timing resolution, a short trigger window, and the short
decay time of its electronics allow RAAD to perform accurate measurements of
prompt, transient events. Here we describe the overview of the detection
concept, the development of the front-end acquisition electronics, as well as
the ground testing and simulation the payload underwent prior to its launch on
December 21st, 2021. We further present an analysis of the detector's in-orbit
system behavior and some preliminary results. | A. Di Giovanni, F. Arneodo, A. Al Qasim, H. Alblooshi, F. AlKhouri, L. Alkindi, A. AlMannei, M. L. Benabderrahmane, G. Bruno, V. Conicella, O. Fawwaz, G. Franchi, S. Kalos, P. Oikonomou, L. Perillo, C. Pittori, M. S. Roberts, R. Torres | 2023-05-09T13:23:56Z | http://arxiv.org/abs/2305.05434v2 | # RAAD: LIGHT-1 CubeSat's Payload for the Detection of Terrestrial Gamma-Ray Flashes
###### Abstract
The Rapid Acquisition Atmospheric Detector (RAAD), onboard the LIGHT-1 3U CubeSat, detects photons between hard X-rays and soft gamma-rays, in order to identify and characterize Terrestrial Gamma Ray Flashes (TGFs). Three detector configurations are tested, making use of Cerium Bromide and Lanthanum BromoChloride scintillating crystals coupled to photomultiplier tubes or Multi-Pixel Photon Counters, in order to identify the optimal combination for TGF detection. High timing resolution, a short trigger window, and the short decay time of its electronics allow RAAD to perform accurate measurements of prompt, transient events. Here we describe the overview of the detection concept, the development of the front-end acquisition electronics, as well as the ground testing and simulation the payload underwent prior to its launch on December 21st, 2021. We further present a preliminary analysis of the detector's housekeeping data collected in orbit to evaluate the health of the instrument in operating conditions.
Keywords: Gamma detectors, X-ray detectors, Scintillators, On-board space electronics, Space instrumentation, Particle detectors
## 1 Introduction
Terrestrial Gamma Ray Flashes (TGFs) are upward-directed, highly luminous bursts of photons, with durations of less than 1 ms and energies from 10 keV reaching up to 40 MeV [1; 2; 3; 4; 5; 6]. They are produced in the high electric fields naturally occurring within thunderstorms, at altitudes of 10 - 15 km, when electrons are accelerated to relativistic speeds and produce photons through bremsstrahlung [7; 8; 5; 9]. It is estimated that more than 400,000 TGFs are produced annually [1], primarily concentrated around the equator [3].
TGFs were first discovered by Fishman et al. using data obtained from the Burst and Transient Source Experiment (BATSE) on NASA's Compton Gamma Ray Observatory (CGRO) in 1994 [3], and were found to be associated with lightning activity. BATSE was built to detect gamma-rays from celestial sources and thus was sensitive to energies of \(\geq\)20 keV. However, BATSE's sampling rate of 64 ms was longer than the typical sub-millisecond duration of a TGF, resulting in very few detections [3; 5].
NASA's Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), originally built for studying solar flares, was able to detect TGFs as well. However, RHESSI's Germanium detectors
were sensitive to photons with energies up to only 18 MeV [10]. Regardless, RHESSI's detections proved that TGFs can be detected by satellites as high as 600 km in altitude [11; 12]. At a similar altitude, the "Astro-Rivelatore Gamma a Immagini Leggero" (AGILE) and the Gamma-Ray Burst Monitor (GBM) onboard NASA's Fermi Gamma-ray Space Telescope were able to detect the full energy spectrum of TGFs [13; 14; 15; 16; 17; 4; 18]. As the aforementioned missions were optimized to detect fainter phenomena of longer duration, their detectors suffered from pileup and saturation when observing highly luminous and energetic TGFs.
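To put these pileup figures in perspective, the short sketch below applies the standard non-paralyzable dead-time model, \(m=n/(1+n\tau)\), to the per-event dead times listed in Table 1; the assumed incident rate of a bright TGF is an illustrative choice, not a measured value.

```python
# Illustrative dead-time estimate with the non-paralyzable model m = n / (1 + n*tau).
# Per-event dead times are taken from Table 1; the incident rate is an assumption.

dead_time_s = {
    "BATSE": 6e-6,
    "RHESSI": 8e-6,
    "AGILE": 65e-6,
    "FERMI": 2.6e-6,
    "LIGHT-1": 0.04e-6,
}

incident_rate = 100 / 0.5e-3  # assume ~100 photons arriving within 0.5 ms

for mission, tau in dead_time_s.items():
    measured_rate = incident_rate / (1.0 + incident_rate * tau)
    lost_fraction = 1.0 - measured_rate / incident_rate
    print(f"{mission:8s} tau = {tau*1e6:5.2f} us  ->  {lost_fraction:6.1%} of counts lost")
```

Under these assumptions, a detector with tens of microseconds of dead time per event loses most of the counts of a bright TGF, while a sub-100 ns dead time keeps losses negligible.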
Thus far, TGFs have been primarily studied as by-products of missions designed to detect other phenomena, thus creating an opportunity for specialized missions. One such mission is the European Space Agency's Atmosphere-Space Interactions Monitor (ASIM), installed in 2018 on the ISS, which managed to confirm the hypothesized TGF models originating from previous studies by combining observations between 50 keV and 30 MeV with optical data [19; 20].
Even though TGFs' bright and brief nature made them hard to characterise with the non-specialised equipment of the initial missions, it also made their detection possible using detectors small enough to fit within CubeSats (i.e., pico-satellites developed at reduced costs). Thus we proposed the Rapid Acquisition Atmospheric Detector (RAAD), an instrument specifically designed for TGF detection, as the payload of the LIGHT-1 CubeSat mission. Table 1 shows the performance characteristics of LIGHT-1 compared to previous missions. The RAAD acronym was also chosen for its similarity to the Arabic word "Ra'ad", which means thunder.
In this paper, we provide a detailed description of the payload electronics, ground tests and simulations, as well as some preliminary flight data of RAAD. A prototype of RAAD was built and tested at NYUAD, providing proof of concept. Details on the prototype can be found in [21]. Details on the CubeSat's bus and subsystems can be found in [22].
## 2 The LIGHT-1 Mission
### Mission Concept
Absorption by the atmosphere can suppress TGFs; therefore, solely measuring the gamma-ray emission would not provide complete information about their characteristics [25]. As a result, a combined measurement is, in general, preferred over a single detection channel. Combining atmospheric gamma-ray detection with X and gamma-ray surveys or lightning radar measurements
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline \multirow{2}{*}{**Mission**} & **Trigger Window** & **Resolution** & **Dead Time Per Event** & **Sensitivity** \\ & **(ms)** & (\(\upmu\)s) & (\(\upmu\)s) & **(MeV)** \\ \hline \hline BATSE & 64 & 2 & 6 & 0.02 \(\div\) 2 \\ \hline RHESSI & None & 0.95 & 8 & 0.003 \(\div\) 17 \\ \hline AGILE & 0.293 & 1 & 65 & 0.3 \(\div\) 100 \\ \hline FERMI & 16 & 2 & 2.6 & 0.001 \(\div\) 40 \\ \hline LIGHT-1 & 0.5 & 0.5 & 0.04 & 0.02 \(\div\) 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Characteristics of LIGHT-1 compared with previous missions: BATSE [3], RHESSI [23], AGILE [13; 24], and FERMI [16].
could provide insightful information on the production mechanisms of TGFs [26, 27]. Under this premise, a Low Earth Orbit (LEO) operated gamma-ray detector, together with correlated ground and space observations and measurements, would be the ideal probe to pursue a TGF science program.
LIGHT-1 is a 3U CubeSat built primarily to study TGFs. Yet, LIGHT-1's mission extends to measuring orbital radiation and space-qualifying different technologies for the detection of prompt, highly energetic, and intense emissions typical of transient events. For size reference, one U of a CubeSat consists of a cube of about 10 cm \(\times\) 10 cm \(\times\) 10 cm. A little less than two units of LIGHT-1 is dedicated to RAAD, while the rest is dedicated to the subsystems of the CubeSat (reaction wheels, onboard computer, etc.). A review on operating gamma-ray detectors onboard CubeSats is available in [28].
RAAD utilizes three types of detectors, by coupling two types of photosensors to two types of scintillating crystals. In fact, the LIGHT-1 mission is also meant to conduct a direct comparison between different detector configurations to find which is best suited for TGF detection. LIGHT-1 is the first mission to conduct a direct comparison between Cerium Bromide and Lanthanum BromoChloride scintillating crystals, coupled to either photomultiplier tubes or silicon photomultipliers.
### Mission Schedule
The LIGHT-1 CubeSat was launched on the 21st of December, 2021, onboard a Space-X Falcon9/Dragon from the Kennedy Space Center, directed at the International Space Station (ISS). The handling of LIGHT-1 for the launch and subsequent deployment was taken care of by the Japan
Figure 1: The life of the LIGHT-1 mission. Apogee and perigee altitudes of the satellite are shown in red and blue respectively. The dotted black line starting on the SpaceX Launch date (2021-12-21) until the deployment date (2022-02-03) is the altitude of the International Space Station where LIGHT-1 was stowed prior to the start of the mission. Different colors denote the different operation regimes: Launch and Early Orbit Phase (LEOP), Payload Commissioning, Science Run, and de-orbit.
Aerospace Exploration Agency (JAXA). Once on the ISS, it was deployed via the Japanese Experiment Module (JEM) on the 3rd of February, 2022, in a 51.6\({}^{\circ}\) orbit at an initial altitude of 408 km.
Figure 1 shows the operation regimes of the LIGHT-1 mission as a function of time. During the Launch and Early Orbit Phase (LEOP), the satellite bus developed by NanoAvionics was fully tested and optimized for the science program operations. The payload was powered on for the completion of one full orbit on the 16th of March, 2022 to check its vital parameters. The commissioning of the payload began on the 6th of April, 2022. On the 25th of May, 2022 LIGHT-1 entered the Science Run mode. Communications with LIGHT-1 stopped on January 18th, 2023, marking the start of the de-orbiting phase and the end of the mission.
## 3 Scientific Payload Specifications
RAAD is designed to resolve events hundreds of ns apart, measure the energy deposited, and assign a timestamp for comparison with TGF and lightning catalogs generated by other experiments.
### Detector Structure
RAAD consists of two detectors, different in size, fitting 1 U and 0.7 U of the spacecraft, respectively. The detector structure is shown in Figure 2. The smaller one, the _MPPC payload_, is equipped with four S13361-6050AE-04 Multi-Pixel Photon Counters (MPPCs) manufactured by Hamamatsu Photonics and coupled to a 4-channel Low Background Cerium Bromide (CeBr\({}_{3}\)(LB)) crystal array, manufactured by Scionix and shown in Figure 3. The larger detector, the _PMT payload_, is equipped with four R11265-200 photomultiplier tubes (by Hamamatsu Photonics) and coupled to one array of four scintillating crystals organized in one pair of CeBr\({}_{3}\)(LB) and one pair of Lanthanum BromoChloride (LBC), both manufactured by Scionix. We refer to each photosensor array coupled with the corresponding scintillating crystals as the detection target of RAAD. The differences between the two photosensors are shown in Table 3.
The performances of the crystals at different energies, shown in Table 4, make them complementary. The intrinsic activity of LBC, while an obvious nuisance on the one hand, may on the other hand provide an embedded calibration tool.
Each scintillating crystal unit measures 23 mm \(\times\) 23 mm \(\times\) 45 mm, while the whole scintillating crystal arrays measure 60 mm \(\times\) 60 mm \(\times\) 48 mm and weigh 615 g and 595 g for the CeBr\({}_{3}\)(LB) and LBC arrays respectively.
\begin{table}
\begin{tabular}{|c||c|c|} \hline
**Parameter** & **Design value** & **Actual Value** \\ \hline \hline Mass [kg] & 2.05 \(\pm\) 0.07 & 1.981 \(\pm\) 0.001 \\ \hline Average Power Consumption [W] & \(<\) 5.9 W & \(<\) 4.8 W \\ \hline Data Downlink [MB/24 h] & 40 & \(\sim\) 40 \\ \hline Duty Cycle [\%] & 50 & \(\sim\) 50 \\ \hline Life Time [months] & \(\sim\) 6 & \(\sim\) 10 \\ \hline Temperature operative range [\({}^{\circ}\)C] & 0 \(\div\) 45 & 10\(\div\) 40 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Payload specification design in comparison with the values measured on the flight model.
The four scintillating crystals are separated by PTFE to avoid optical cross-talk. In order to prevent the effects of water vapor contamination during the assembling phase, the hygroscopic crystal array is housed in an airtight container. Five sides are covered with aluminum, while on the sixth side, a 2 mm thick layer of fused silica is used to optically connect the photosensor array to the crystals.
The front-end electronics are directly coupled to the back of the photosensor array as seen in Figure 4.
A detailed study and characterization of the detection concept is reported in [21].
\begin{table}
\begin{tabular}{|c||c|c|} \hline
**Parameter** & **R11265-200** & **S13361-6050AE-04** \\ \hline \hline Type of photosensor & PMT & MPPC \\ \hline Dimensions (L \(\cdot\) D \(\cdot\) H) [mm\({}^{3}\)] & \(26\cdot 26\cdot 19\) & \(25\cdot 25\cdot 1.4\) \\ \hline Weight [g] & 24 & 2 \\ \hline Max Sensitivity [nm] & 400 & 450 \\ \hline Quantum Efficiency - Photon Detection Efficiency [\%] & 43 & 40 \\ \hline Operating Voltage [V] & 900 & 55 \\ \hline Gain at working point & \(\sim 10^{6}\div 10^{7}\) & \(\sim 10^{6}\div 10^{7}\) \\ \hline Dark Counting Rate at working point, 300 K [Hz] & \(<1\) & \(>10^{7}\) \\ \hline Operating Temperature [K] & 240 - 320 & 250 - 330 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Characteristics of the photosensors used in RAAD
Figure 2: An exploded view of the MPPC CAD model, complete with the 4 photosensors, crystals, and veto, with the important parts annotated.
A VETO system surrounds the detection target on four sides. Its purpose is to partially identify and suppress the background induced by charged particles. It consists of eight independent units. Each is composed of one 5 mm thick plastic scintillator tile with an embedded wavelength shifter fiber and read out at one edge by an SMD Silicon photomultiplier (SiPM) manufactured by AdvanSiD srl (ASD-NUV1C-P-40). The CAD model of the VETO system is shown in Figure 2.
The detection target along with the photosensor array and related electronics are placed inside an aluminum enclosure designed to be mechanically coupled to the CubeSat spacecraft. The final assembly of the PMT payload is shown in Figure 5.
The effect of the aluminum walls has been evaluated by a Geant4 simulation, fully presented in Section 4.2, indicating a hardware detection threshold for gamma rays of about 20 keV.
An aerospace-grade silicone resin (Momentive RTV615) was used for filling the space in between the photomultipliers in the assembly in order to mitigate the effect of vibration on the structure
\begin{table}
\begin{tabular}{|c||c|c|} \hline
**Parameter** & **CeBr\({}_{3}\)(LB)** & **LBC** \\ \hline \hline Density [g \(\cdot\) cm\({}^{-3}\)] & 5.1 & 4.9 \\ \hline Hygroscopic & Yes & Yes \\ \hline Light Emission Peak [nm] & 370 & 380 \\ \hline Energy Resolution at 122 keV [\%] & 10 & 7 \\ \hline Energy Resolution at 662 keV [\%] & 4 & 3 \\ \hline Decay Time [ns] & 20 & 35 \\ \hline Intrinsic Activity [Bq \(\cdot\) cm\({}^{-3}\)] & \textless{} 0.01 & \(\sim\) 1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Characteristics of the scintillating crystals used in RAAD.
Figure 3: Top view of the CeBr\({}_{3}\)(LB) crystal array used in the MPPC payload, embedded in the VETO system and its readout electronics.
and to provide effective electrical insulation for the high voltage (-800 V \(\div\) -700 V) required to operate the payload. Overall, the multiple enclosing layers comply with the safety requirements of the mission, mitigating the risk of fragmentation.
Figure 4: The two photosensor arrays from the payload of the LIGHT-1 mission. (left) The four R11265-200 photomultiplier tubes used in the PMT payload. (right) The four S13361-6050AE-04 Multi-Pixel Photon Counters used in the MPPC payload. Both sensors have been manufactured by Hamamatsu Photonics.
Figure 5: The fully assembled PMT payload.
### Payload Readout and Control Electronics
RAAD electronics consist of three main components.
1. VETO readout electronics
2. Photosensor power supply
3. Front-end and Controller boards
A block diagram for the electronics is shown in Figure 6 and its main characteristics are summarized in Table 5.
Figure 6: Block diagram of the RAAD electronics
\begin{table}
\begin{tabular}{|c||c||} \hline
**Parameter** & **Design value** \\ \hline \hline Average Power Consumption [W] & \textless{} 5.9 W \\ \hline Weight [g] & 58 (MPPC), 66 (PMT) \\ \hline ADC resolution range [bits] & 10 \(\div\) 16 \\ \hline Components & SMD, all COTS, Automotive grade \\ \hline Operating modes & noise, default, science, custom, safe \\ \hline \hline \end{tabular}
\end{table}
Table 5: The main design characteristics of the RAAD electronics
The power supply electronics of the MPPC and PMT payloads are shown in Figure 7.
With the single exception of the photosensor sockets, no connectors have been used in the LIGHT-1 payload to maximize compactness and reliability.
### Firmware Overview
The onboard firmware is designed in order to assign all data collected to specific categories (i.e. gamma and charged particle-induced events, TGF events, and orbit monitoring data).
The detection strategy can be summarised in a series of steps listed below, shown in Figure 8.
1. Particle interaction and signal generation in the detector;
2. the charge amplifier integrates the signal. Its output is sampled by an ADC (16 bits, 10 MHz) and passed through a matched filter [28] to extract the peak charge value;
3. if the charge value is larger than the detection threshold, an event timestamp is assigned and the event is sent to the spacecraft bus.
The payload data generated are organized into four buffers, each catering to different types of events as shown in Table 6.
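A minimal offline model of this chain, from filtered ADC samples to the buffer assignment of Table 6, is sketched below; the pulse template, threshold value, and event-finding logic are simplified assumptions for illustration and do not reproduce the actual flight firmware.

```python
import numpy as np

# Toy model of the RAAD acquisition chain: matched filter on the ADC stream,
# threshold test with timestamping, and sorting of events into data buffers.
# Template shape, threshold and logic are illustrative assumptions only.

ADC_RATE_HZ = 10e6                       # 16-bit ADC sampled at 10 MHz
TEMPLATE = np.exp(-np.arange(32) / 8.0)  # assumed pulse template (arbitrary units)
THRESHOLD = 500.0                        # software-set threshold (ADC units)
TGF_WINDOW_S = 500e-6                    # coincidence window of the TGF buffer

def find_events(samples):
    """Return (timestamp_s, peak_charge) for each excursion above threshold."""
    filtered = np.convolve(samples, TEMPLATE[::-1], mode="same")
    above = filtered > THRESHOLD
    events, i = [], 0
    while i < len(filtered):
        if above[i]:
            j = i
            while j < len(filtered) and above[j]:
                j += 1
            peak = i + int(np.argmax(filtered[i:j]))
            events.append((peak / ADC_RATE_HZ, float(filtered[peak])))
            i = j
        else:
            i += 1
    return events

def assign_buffer(timestamp_s, veto_flags, previous_event_s):
    """Sort a triggered event into the Veto, TGF or NonVeto buffer."""
    if any(veto_flags):
        return "Veto"
    if previous_event_s is not None and timestamp_s - previous_event_s < TGF_WINDOW_S:
        return "TGF"
    return "NonVeto"
```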
The science data is transmitted to the ground using an S-Band antenna during daily passes above ground stations dedicated to the mission [22]. Each contact (up to 6 times per day) can last up to 10 minutes. Spacecraft telemetry, which includes housekeeping data, operational commands, and payload scripts, is transmitted using a UHF transceiver.
Figure 7: The custom made power supply electronics of the LIGHT-1 PMT payload which is used to bias the photomultiplier tubes at variable gains, depending on the science case.
## 4 Pre-Flight Simulations and Tests
In order to space qualify RAAD, all the payload components and the payload as a whole have been subjected to several environmental and functionality tests to assess their capability to survive the various regimes typical of the mission (launch phase, thermal stress, radiation, etc.).
### Stress Analysis
Prior to conducting vibration tests on the physical assembly of the payload, a stress analysis was conducted to study the effect of the launch conditions on the structure. In particular, a Finite Element Analysis was carried out in order to calculate the frequency and peak amplitude of the first 20 vibrational modes.
The large computational requirement for such analyses forced a geometric simplification of the 3D models used. Yet the simplification was kept within a reasonable range, affecting no more than 10% of the payload's total mass.
\begin{table}
\begin{tabular}{|c||p{284.5pt}||} \hline
**Data Buffer** & **Description** \\ \hline \hline NonVeto & All the events in which there is no VETO flag. The ADC resolution is lowered to 10 bits, timestamp at 1 ms resolution \\ \hline Veto & All the events in which there is at least one (out of eight) VETO flag active. The ADC resolution is lowered to 14 bits, timestamp at 100 \(\upmu\)s resolution \\ \hline TGF & All the events falling into a coincidence window of 500 \(\upmu\)s in which there is no VETO flag. The ADC resolution is nominal (16 bits), 500 ns timestamp resolution \\ \hline Orbit & Payload housekeeping events collected once every 20 s including the payload temperature, the particle rate of each channel and the VETO, the detector operating voltage, the payload operating configuration (e.g. noise, default, flight mode), and all the working parameters required to run diagnostic checks \\ \hline \hline \end{tabular}
\end{table}
Table 6: Data buffer protocol used for the scientific data of the LIGHT-1 mission.
Figure 8: A schematic description of the detection pipeline of LIGHT-1. A particle interaction triggers a signal on the photosensor that is then amplified and digitized. If the signal is above a software set threshold it is recorded and sorted to the corresponding buffer.
Figure 9 shows the MPPC payload deformation under the first and second natural modes. The general requirement for space qualifying such equipment to survive launch is for the natural modes to be above 100 Hz. Through this simulation, we verified that our design fulfills this requirement as all normal modes are between 1200 Hz and 2000 Hz.
### Geant4 Simulations
Using the Geant4 framework, a C++ particle physics simulation was developed to evaluate the efficiency of the detectors, as well as the background energy deposition on the crystals and the veto caused by trapped charged particles in the Earth's magnetic field. The detector geometry was simplified to increase performance. The simulation code and analysis can be found in [29].
We were able to estimate the hardware threshold of the detector by sending photons with energies ranging from 1 keV to 1 MeV and recording the energy deposited on each component of the detector. Fig. 10 shows the ratio of the energy deposited to the crystals over the incoming energy as a function of the incoming photon energy. We see that we start detecting photons at roughly 20 keV.
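The analysis behind Fig. 10 can be reproduced schematically with a two-dimensional histogram of the deposited-to-incoming energy ratio; in the sketch below the event arrays are random placeholders standing in for the actual Geant4 output, and the binning is an arbitrary choice.

```python
import numpy as np

# Schematic version of the Fig. 10 analysis: histogram the ratio of energy
# deposited in the crystals to the incoming photon energy versus the incoming
# energy. E_in and E_dep are placeholders standing in for Geant4 output.

rng = np.random.default_rng(0)
E_in = rng.uniform(1e-3, 1.0, 100_000)            # incoming photon energy [MeV]
detected = E_in > 0.020                           # toy 20 keV hardware threshold
E_dep = np.where(detected, E_in * rng.uniform(0.7, 1.0, E_in.size), 0.0)

e_bins = np.logspace(-3, 0, 60)                   # incoming energy bins [MeV]
r_bins = np.linspace(0.0, 1.0, 40)                # deposited/incoming ratio bins
hist, _, _ = np.histogram2d(E_in, E_dep / E_in, bins=(e_bins, r_bins))

# Lowest energy bin in which most photons deposit the bulk of their energy.
frac_full_dep = hist[:, r_bins[:-1] >= 0.5].sum(axis=1) / np.maximum(hist.sum(axis=1), 1)
threshold_bin = int(np.argmax(frac_full_dep > 0.5))
print(f"estimated detection threshold ~ {e_bins[threshold_bin] * 1e3:.0f} keV")
```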
Using the European Space Agency's SPace ENVironment Information System (SPENVIS) [30] the flux and energy spectrum for trapped protons and electrons along the orbit of LIGHT-1
Figure 9: External deformation of the MPPC payload of LIGHT-1 under the first two normal modes at 1.2 kHz and 1.9 kHz respectively. The top and bottom rows show the two extremal modes for the first and second normal modes respectively
were estimated, providing the input for a Geant4-based simulation of the detectors' background spectrum. The energy deposition on the crystals vs the original particle energy spectrum is shown in Fig. 11 while a more detailed comparison between the energy deposited on the veto vs the crystals is shown in Fig. 12.
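One standard way to turn such tabulated differential fluxes into primary particles for a Geant4 run is inverse-CDF sampling, sketched below; the energy grid and flux values are placeholders rather than the actual SPENVIS tables.

```python
import numpy as np

# Inverse-CDF sampling of primary energies from a tabulated differential flux
# (e.g. a trapped-proton spectrum exported from SPENVIS). The energy grid and
# flux values below are placeholders, not the real SPENVIS output.

energy_mev = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
diff_flux = np.array([1e4, 3e3, 8e2, 2e2, 4e1, 5e0, 3e-1])   # [cm^-2 s^-1 MeV^-1]

def sample_primary_energies(n, rng=None):
    """Draw n primary energies distributed according to the tabulated flux."""
    rng = rng or np.random.default_rng()
    # Cumulative distribution on the tabulated grid (trapezoidal integration).
    cdf = np.concatenate(([0.0], np.cumsum(
        0.5 * (diff_flux[1:] + diff_flux[:-1]) * np.diff(energy_mev))))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, energy_mev)   # invert CDF by interpolation

primaries = sample_primary_energies(100_000)
print(f"mean sampled primary energy: {primaries.mean():.2f} MeV")
```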
Figure 11: Simulated trapped charged particles energy spectra along the orbit of LIGHT-1. In pink and blue the incoming energy spectrum of electrons and protons is shown. In lighter colors, the energy spectrum deposited on the detector channels is shown.
Figure 10: Geant4 hardware threshold estimation. Single photons were sent to the detector with equiprobable energy ranging from 1 keV to 1 MeV. A histogram of the ratio of energy deposited on the crystals to incoming particle energy is plotted as a function of incoming energy. Brighter colors correspond to a higher probability of observing the ratio.
### Vibration Tests
Vibration tests are typically conducted on fully assembled CubeSats to test their ability to survive launch. However, since our payload made use of fragile quartz windows to protect the crystal arrays, JAXA required additional vibration tests to be carried out on these components in isolation, prior to their integration into the main assembly.
The vibration tests were conducted at New York University Abu Dhabi (NYUAD). A permanent magnet shaker was used to emulate the vibrational profile of the launch as well as other parameters specified by JAXA. The testing procedure is outlined below:
1. A 20 Hz - 2 kHz sweep to identify the component's normal modes.
2. A 7s test using a frequency sweep designed to simulate the launch conditions of a possible transport vehicle as provided by JAXA.
3. A 20 Hz - 2 kHz sweep to compare the component's normal modes after exposure to launch conditions.
This test was carried out to simulate three spacecraft candidates, namely H-II Transfer Vehicle (HTV), Cygnus NG, and SpaceX Dragon. The latter became the chosen transport vehicle for LIGHT-1. Both crystal arrays and a PMT unit were subject to these tests, all of which were successfully passed.
### Thermal-Vacuum Tests
Placing four crystals in the same case was a solution to the limited space problem that exists in CubeSat missions, but it was a concept that had not previously been tested in space missions. Thus, it was crucial to conduct thermal vacuum tests on the crystal arrays to determine their survivability in the space environment. Two types of thermal vacuum tests were carried out on the crystal arrays:
Figure 12: Energy of trapped charged particles deposited on the detector and the veto as a function of the incoming particle energy.
1. One Day Tests: The arrays were left in the thermal vacuum chamber for approximately 24 hours at a set temperature of 40 \({}^{\circ}\)C and a set pressure of 1 \(\upmu\)Torr.
2. Three Days Tests: The arrays were left for approximately 72 hours at a set pressure of 1 \(\upmu\)Torr with a thermal cycle where the temperature oscillates every five hours between -40 \({}^{\circ}\)C and 50 \({}^{\circ}\)C, simulating the conditions faced in space by satellites. An example of this cycle can be found in figure 13.
In both thermal vacuum tests the arrays were placed on top of an insulator material, to prevent the lower side of the arrays from heating up faster than the rest of the array due to their contact with the chamber's inner metal base. These tests helped in reevaluating the structure of the arrays so that they could be optimized for space flight. In particular, a manufacturing error was uncovered that led to condensation inside one of the arrays during the thermal cycle, necessitating its repair by the manufacturer. These tests were carried out both at YahSat Lab in Khalifa University and at NYUAD.
## 5 On-Flight Performance
The performance and health of the payload have been evaluated continuously throughout the mission. In this section, we relate some of the experimental data observed in orbit to the parameters we have used to evaluate the operating condition of the LIGHT-1 payload.
### Instrument Health
Throughout the Science Run (see Fig. 1) we have monitored the health of the hardware through onboard sensors. In particular, the Texas Instruments LM71 temperature sensor embedded in each of the readout electronics boards measured the payload temperature over the lifespan of the mission.
Figure 13: This figure shows the last four cycles of the thermal vacuum test conducted in Yah-Sat Lab’s thermal vacuum chamber during one of the CeBr\({}_{3}\) crystal array tests.
The resulting measurements from the commissioning of the payload until de-orbit are shown in Fig. 14.
The payload's temperature data during orbit is important to correct for detector gain drift [21], and for adjusting the operations duty cycle in flight to avoid overheating. As can be seen in the insets of Fig. 14, which represent the temperature recorded while the payload was operational, during a single orbit the temperature rises significantly when the payload is operating. This occurs due to the power dissipated by the electronics. Already in its design phase, the payload mission concept has been tailored around a reduced 50 % duty cycle in order to meet the mission requirements in terms of the power budget. From the commissioning to the de-orbit phase, the electronics have been subjected to 1861 power cycles. Under these conditions and for the entire mission, no emergency procedure has been triggered due to temperature failure.
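The per-cycle temperature rise visible in the insets of Fig. 14 can be extracted from the housekeeping stream with a grouping of the kind sketched below; the 20 s cadence follows the Orbit buffer description, while the input format and gap threshold are assumptions.

```python
import numpy as np

# Estimate the temperature rise during each payload power cycle from the
# housekeeping ("Orbit") records, assuming one sample every ~20 s and gaps
# in the time series where the payload is switched off (50% duty cycle).

def temperature_rise_per_cycle(times_s, temps_c, gap_threshold_s=120.0):
    """Split the telemetry at gaps longer than gap_threshold_s and return the
    start-to-peak temperature rise of each contiguous power-on segment."""
    times_s, temps_c = np.asarray(times_s), np.asarray(temps_c)
    cuts = np.flatnonzero(np.diff(times_s) > gap_threshold_s) + 1
    rises = []
    for segment in np.split(np.arange(times_s.size), cuts):
        if segment.size > 2:
            seg = temps_c[segment]
            rises.append(float(seg.max() - seg[0]))
    return rises
```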
Pulse Per Second Signal Loss. By design, a key feature of the RAAD electronics is the ability to assign a sub-microsecond timestamp to each acquired event. To be compliant with the power constraints of the mission, the use of fast electronics capable of coping with the typical time response of MPPCs and PMTs (order of \(\sim\) 1 GS/s) could not be considered from the design phase.
The solution implemented on the LIGHT-1 timing architecture utilizes a distributed disciplined clock, controlled by a Phase Locked Loop (PLL).
The PLL generates a high-frequency output (10 MHz) from a low-frequency reference (1 Hz). The RAAD timing circuit concept is based on a PLL using the 1 Hz Pulse-Per-Second (PPS) hardware signal obtained from the spacecraft GPS receiver, coupled to a 10 MHz oscillator.
However, an unidentified failure made the PPS signal unavailable. As a consequence, the capability to calculate and assign the timestamp to events on the payload data buffers was compromised.
To overcome this issue, a software patch has been implemented during the Payload Commis
Figure 14: Temperature of the PMT and MPPC Payloads from the Payload Commissioning until the de-orbit of LIGHT-1. The average temperatures of the PMT and MPPC detectors are shown in red and blue, respectively. The actual temperature is overlaid with lighter colors. Using two insets we highlight the temperature increase during two different duty cycles.
sioning phase (see Fig. 1). The patch uses the onboard computer's signal to provide a less accurate PPS signal digitally. We then used further reconstruction techniques on the ground, after obtaining the data, in order to retrieve the original accuracy.
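One possible ground-side reconstruction, sketched below, fits the free-running counter against the (less accurate) software PPS marks and then converts raw event counters into seconds; the function names and data layout are illustrative assumptions, not the actual flight or ground software.

```python
import numpy as np

# Sketch of ground reconstruction of event times when only a jittery,
# software-generated PPS is available: fit counter value vs. PPS epoch with a
# linear clock model and use the fit to convert raw counters into seconds.

def fit_clock_model(pps_counter, pps_epoch_s):
    """Least-squares fit counter = f0 * t + c0; returns (f0, c0)."""
    f0, c0 = np.polyfit(pps_epoch_s, pps_counter, deg=1)
    return f0, c0   # f0 should be close to 1e7 counts/s for the 10 MHz clock

def counter_to_seconds(event_counter, f0, c0):
    """Convert raw event counter values into seconds on the PPS time scale."""
    return (np.asarray(event_counter, dtype=float) - c0) / f0
```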
Seasonal Temperature Variation. We further verified the health of the onboard electronics by tracking the temperature of the satellite during October as a function of latitude. Fig. 15 shows the probability density of measuring a specific temperature as a function of latitude. In the plot, we observe the expected seasonal variation. Such results were used to verify the integrity of our reconstructed data.
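The ridge traced by the red dotted line in Fig. 15 is simply the most probable temperature in each latitude bin; a minimal version of that computation is sketched below, with the bin widths chosen arbitrarily.

```python
import numpy as np

# Most probable payload temperature as a function of latitude, as in Fig. 15.
# latitude_deg and temp_c are assumed 1D arrays taken from the Orbit buffer.

def temperature_mode_vs_latitude(latitude_deg, temp_c, n_lat=36, n_temp=40):
    lat_bins = np.linspace(-52.0, 52.0, n_lat + 1)   # orbit inclination is 51.6 deg
    t_bins = np.linspace(np.min(temp_c), np.max(temp_c), n_temp + 1)
    hist, _, _ = np.histogram2d(latitude_deg, temp_c, bins=(lat_bins, t_bins))
    t_centres = 0.5 * (t_bins[:-1] + t_bins[1:])
    mode = np.where(hist.sum(axis=1) > 0, t_centres[np.argmax(hist, axis=1)], np.nan)
    lat_centres = 0.5 * (lat_bins[:-1] + lat_bins[1:])
    return lat_centres, mode
```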
## 6 Conclusions
In this paper, we describe RAAD, the scientific payload of the LIGHT-1 CubeSat mission, designed to detect fast (< 1 ms) X and gamma-ray transients with two detectors fitting in 1.7 CubeSat units. The payload underwent rigorous physical testing and simulation in order to space-qualify the instrument. In particular, the thermal vacuum and vibration tests of the individual components and the assembly certified RAAD according to JAXA's specifications for surviving typical launch conditions for such instrumentation. This was further validated by carrying out an extensive modal analysis of the apparatus, uncovering vibrational modes within the approved range for withstanding launch and
Figure 15: Temperature of the PMT payload as a function of latitude during 1\({}^{\rm st}\) - 23\({}^{\rm rd}\) of October 2022. The shaded region shows the probability density of the temperature over the latitude, while the red, dotted line is the maximum of this distribution. We observe the characteristic hotter temperatures for the southern hemisphere during October.
orbit conditions. The expected trapped charged particle flux along the orbit of the detector was estimated using data from the European Space Agency's SPace ENVironment Information System (SPENVIS) [30], allowing us to carry out an exhaustive Geant4 particle physics simulation to predict the expected background and to optimize accordingly the detection threshold, as well as the geometry and optical insulation of the instrument.
The LIGHT-1 satellite was launched from the Kennedy Space Center on December 21st, 2021, on a SpaceX rocket, which docked at the International Space Station the day after. It was deployed on February 3rd, 2022. Contact was lost on January 14, 2023. While the analysis of the data is ongoing, the housekeeping data presented show that, with the exception of the failure that led to the PPS loss, RAAD withstood the launch stress as expected, and the detectors operated throughout the mission.
We gratefully acknowledge the support of the UAE Space Agency through the 2018 MiniSat Competition, and the NYUAD Kawader program for supporting one of the authors (L. AlKindi). Special thanks to Sebastien Celestin for providing TGF models. We also thank Khalifa University and NSSA for funding their master's students to work on the CubeSat's Bus system design. Finally, we express our gratitude to the NYUAD Core Technology Platforms for their invaluable assistance and particularly to the machine shop team for the realization of the aluminum enclosures of the detection targets.
|
2304.12261 | Fractional quantum anomalous Hall states in twisted bilayer MoTe$_2$ and
WSe$_2$ | We demonstrate via exact diagonalization that AA-stacked TMD homobilayers
host fractional quantum anomalous Hall (FQAH) states with fractionally
quantized Hall conductance at fractional fillings $n=\frac{1}{3},\,
\frac{2}{3}$ and zero magnetic field. While both states are most robust at
angles near $\theta\approx 2^{\circ}$, the $n=\frac{1}{3}$ state gives way to a
charge density wave with increasing twist angle whereas the $n=\frac{2}{3}$
state survives across a much broader range of twist angles. We show that the
competition between FQAH states and charge density wave or metallic phases is
primarily controlled by the wavefunctions and dispersion of the underlying
Chern band, respectively. Additionally, Ising ferromagnetism is found across a
broad range of fillings where the system is insulating or metallic alike. The
spin gap is enhanced at filling fractions where integer and fractional quantum
anomalous Hall states are formed. | Aidan P. Reddy, Faisal F. Alsallom, Yang Zhang, Trithep Devakul, Liang Fu | 2023-04-24T17:01:52Z | http://arxiv.org/abs/2304.12261v3 | # Fractional quantum anomalous Hall states in twisted bilayer MoTe\({}_{2}\) and WSe\({}_{2}\)
###### Abstract
We demonstrate via exact diagonalization that AA-stacked TMD homobilayers host fractional quantum anomalous Hall (FQAH) states that exhibit fractionally quantized Hall conductance at zero magnetic field at fractional fillings \(n=\frac{1}{3}\), \(\frac{2}{3}\). While both states are most robust at angles near \(\theta\approx 2^{\circ}\), the \(n=\frac{1}{3}\) state gives way to a charge density wave with increasing twist angle whereas the \(n=\frac{2}{3}\) state survives across a much broader range of twist angles. We show that the competition between FQAH states and charge density wave or metallic phases is primarily controlled by the wavefunctions and dispersion of the underlying Chern band, respectively. Additionally, Ising ferromagnetism is found across a broad range of fillings where the system is insulating or metallic alike. The spin gap is enhanced at filling fractions where integer and fractional quantum anomalous Hall states are formed.
The discovery of the integer and fractional quantum Hall effects (QHE) in two-dimensional electron systems under a magnetic field ushered in the paradigm of topological matter and electron fractionalization [1; 2] over forty years ago. It was recognized shortly thereafter that, while broken time reversal symmetry is a necessary condition for QHE, Landau levels are not: a Bloch band with a nonzero Chern number suffices [3; 4]. The possibility of quantum Hall analogs in which time reversal symmetry is broken _spontaneously_ at zero magnetic field is a subject of fundamental importance and long-standing interest. Advances in quantum materials have brought the search for such phases, known collectively as quantum anomalous Hall (QAH) states, to the forefront of condensed matter physics.
Following theoretical proposals, transport measurements have demonstrated the integer QAH effect in a variety of material systems [5], including magnetically doped topological insulators [6], the intrinsic magnetic topological insulator MnBi\({}_{2}\)Te\({}_{4}\)[7], magic-angle twisted bilayer graphene [8], and the transition metal dichalcogenide heterobilayer MoTe\({}_{2}\)/WSe\({}_{2}\)[9]. Beyond a proof-of-principle, the experimental demonstration of QHE at zero magnetic field opens a new path to microwave circulators, topological superconductivity, and Majorana fermions.
Even more exciting is the fractional quantum anomalous Hall (FQAH) state, a new phase of matter that exhibits fractionally quantized Hall conductance and hosts fractional quasiparticles (anyons) at zero magnetic field. Physical realization of FQAH states relies on the synergy between band topology, strong correlation, and spontaneous time reversal symmetry breaking. These states can host new types of fractionalization unseen before in Landau levels. Moreover, proximity coupling between FQAH states and superconductors at zero magnetic field may provide a promising route to topological quantum computing [10; 11].
Magic-angle twisted bilayer graphene is a theoretically interesting candidate platform for the FQAH state[12; 13; 14; 15; 16]. Local compressibility measurements demonstrate that it hosts fractional quantum Hall states at magnetic fields above 5 T [17]. However, at zero and small fields, the incompressible states at fractional fillings are observed to be topologically trivial.
Recently, a new moire system, AA-stacked twisted homobilayer of transition metal dichalcogenide (TMD) semiconductors WSe\({}_{2}\) or MoTe\({}_{2}\), has also been predicted to host FQAH [18; 19; 20]. Here, narrow moire bands are formed at small twist angles and acquire nontrivial topology from the layer pseudospin structure of their Bloch wavefunctions [21]. In addition to band topology and narrow bandwidth, strong atomic spin-orbit coupling in TMD results in the locking of electrons' spin to their valley degree of freedom. This makes spontaneous Ising ferromagnetism possible at finite temperature, which is a key requisite for the realization of FQAH states [20].
Very recently, signatures of integer and fractional QAH states in optical measurements of twisted MoTe\({}_{2}\) bilayers have been reported [22]. Photoluminescence measurements clearly show a reduction in intensity and a blue shift in peak energy at integer and fractional fillings of the moire unit cell \(n=-1\) and \(-\frac{2}{3}\), indicating the emergence of correlated insulators. Furthermore, magnetic circular dichroism measurements reveal robust ferromagnetism over a wide range of hole fillings \(0.4\lessapprox|n|\lessapprox 1.2\). The coercive field determined from magnetic hysteresis is distinctively enhanced at \(n=-1\) and \(-\frac{2}{3}\). Remarkably, a linear shift in the carrier densities of the optically detected \(n=-1\) and \(-\frac{2}{3}\) states with the applied magnetic field is found, with a slope \(\frac{\partial n}{\partial B}\) in units of \(\frac{e}{h}\) matching \(C=-1\) and \(-\frac{2}{3}\) respectively, as expected from the Streda formula for states with integer and fractionally quantized Hall conductance
\(\sigma_{xy}=C\frac{e^{2}}{h}\). Importantly, this linear dependence persists down to zero magnetic field. These observations taken altogether provide clear, strong evidence for integer and fractional QAH in hole-doped twisted bilayer MoTe\({}_{2}\).
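For orientation, the magnitude of this slope is straightforward to evaluate; the snippet below converts \(C\,e/h\) into a carrier-density shift per tesla for the two Chern numbers quoted above.

```python
# Streda-formula slope dn/dB = C * e/h, expressed in carriers per cm^2 per tesla.

e = 1.602176634e-19   # elementary charge [C]
h = 6.62607015e-34    # Planck constant [J s]

for C in (-1.0, -2.0 / 3.0):
    slope_cm2_T = C * e / h * 1e-4   # 1/(m^2 T) -> 1/(cm^2 T)
    print(f"C = {C:+.3f}: dn/dB = {slope_cm2_T:+.2e} cm^-2 T^-1")
```

For \(C=-\frac{2}{3}\) this gives a density shift of about \(-1.6\times 10^{10}\) cm\(^{-2}\) per tesla, i.e. the scale of the linear shift tracked in the experiment.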
In an independent experiment around the same time, integer QAH states were observed in twisted bilayer WSe\({}_{2}\) at hole fillings \(n=-1\) and \(n=-3\) by electronic compressibility measurements [23]. Here, the linear shift in the density of the incompressible state reveals states with quantized Hall conductance \(C=+1\), which persist down to zero magnetic field. The topological gap of the QAH state at \(n=-1\) is found to be around 1 meV.
The discovery of integer and fractional QAH states in twisted bilayer MoTe\({}_{2}\) and WSe\({}_{2}\) following theoretical prediction [18; 19; 20] demonstrates the extraordinary richness at the intersection of of band topology and electron correlation. Many open questions remain to be answered. While prior theoretical studies of FQAH in twisted TMD homobilayers have focused on the filling factor \(n=-\frac{1}{3}\) and at ultrasmall twist angles \(\theta\lessapprox 1.5^{\circ}\)[19; 20; 24], the newly observed FQAH state in twisted MoTe\({}_{2}\) occurs at the filling fraction \(n=-\frac{2}{3}\) in a device with a larger twist angle \(\theta=3.7^{\circ}\). In addition, a weak feature indicative of another FQAH state at \(n=-\frac{3}{5}\) was observed under a magnetic field of \(\sim\)1 T.
In this work, we study ferromagnetism, FQAH, and competing states in AA stacked TMD homobilayers. We begin with a detailed discussion of the system's single particle physics, which evolves dramatically with twist angle. We present original _ab initio_ calculations for tMoTe\({}_{2}\) band structure. Next, we demonstrate robust Ising type ferromagnetism across a broad range of carrier densities in the lowest moire band \(n\leq 1\), independent of whether the system is metallic or insulating at a given carrier density. For a range of twist angles and realistic Coulomb interaction, we demonstrate FQAH states at filling factors \(n=\frac{1}{3},\frac{2}{3}\). These states are fully spin/valley polarized and exhibit fractionally quantized Hall conductance at zero magnetic field. We find for both fillings that the FQAH gap is largest near \(\theta\approx 2^{\circ}\). As twist angle increases, the \(n=\frac{1}{3}\) state gives way to a charge density wave (CDW) while the \(n=\frac{2}{3}\) state survives to significantly higher twist angles, ultimately giving way to a metal. We find that the angle at which the FQAH-CDW transition for \(n=\frac{1}{3}\) occurs is only weakly dependent on interaction strength, whereas that at which the FQAH-metal transition occurs for \(n=\frac{2}{3}\) is strongly interaction-strength-dependent. This suggests that the former is controlled by a change in the wavefunctions of the lowest-band Bloch states whereas the latter is controlled primarily by a change in their dispersion.
## I Topological moire bands
_Continuum model_ - The valence band edges of a TMD monolayer are located at the \(K\) and \(K^{\prime}\) points of the Brillouin zone and have a large effective mass in the range of \(0.5-1\)\(m_{e}\). Due to strong atomic spin-orbit coupling, holes at \(K\) and \(K^{\prime}\) have opposite spin, so that spin and valley degrees of freedom lock into a single two-component "spin" degree of freedom. When two identical \(K\)-valley TMD layers are stacked with \(0^{\circ}\) alignment (AA stacking), holes at a given valley have the same spin in both layers and therefore direct spin-conserving, intra-valley, inter-layer tunneling is present.
Rotational misalignment between the two layers modifies the dispersion of low-energy holes by introducing an intra-layer superlattice potential and inter-layer tunneling that vary spatially with the superlattice periodicity. As shown by Wu _et al_[21], the continuum model Hamiltonian for the spin-\(\uparrow\) component takes the form of a \(2\times 2\) matrix in layer space:
\[\mathcal{H}_{\uparrow}=\begin{pmatrix}\frac{\hbar^{2}(-i\nabla- \kappa_{+})^{2}}{2m}+V_{1}(\mathbf{r})&t(\mathbf{r})\\ t^{\dagger}(\mathbf{r})&\frac{\hbar^{2}(-i\nabla-\kappa_{-})^{2}}{2m}+V_{2}(\mathbf{r })\end{pmatrix} \tag{1}\]
with \(\mathcal{H}_{\downarrow}\) its time reversal conjugate. Note that charge neutrality is the vacuum with respect to which this effective single-particle Hamiltonian is defined. Throughout this work, we consider the case of hole doping into the valence band. Accordingly, we have chosen a natural convention
Figure 1: (a) Schematic of AA-stacked homobilayer moiré superlattice and (b) Wannier diagrams showing quantum anomalous Hall states in tMoTe\({}_{2}\) and tWSe\({}_{2}\), which have opposite Chern numbers in a given valley.
for \(\mathcal{H}\) to write the continuum model Hamiltonian in terms of the hole operators directly:
\[H_{0}=\sum_{\sigma=\uparrow,\downarrow}\int d\mathbf{r}\;\psi_{\sigma}^{\dagger} \mathcal{H}_{\sigma}\psi_{\sigma}, \tag{2}\]
where \(\psi_{\sigma}^{\dagger}\) creates a hole in the valence band. As such, the single-particle energy spectrum of \(H_{0}\) is bounded from below. We also define \(n\) to be the number of holes per unit cell relative to charge neutrality so that \(n\) is positive, opposite to the definition commonly used in experiments.
Here, the kinetic energy of holes in a given layer follows a quadratic energy-momentum dispersion centered about its \(K\) point. The \(K\) points of the two layers are displaced due to the twist angle and fold into the corners of the moire Brillouin zone, \(\kappa_{+}\) and \(\kappa_{-}\). We choose our moire reciprocal lattice vectors to be \(\mathbf{g}_{i}=\frac{4\pi}{\sqrt{3}a_{M}}(\cos\frac{\pi(i-1)}{3},\sin\frac{\pi(i-1)}{3})\) and \(\kappa_{-}=\frac{\mathbf{g}_{1}+\mathbf{g}_{0}}{3}\), \(\kappa_{+}=\frac{\mathbf{g}_{1}+\mathbf{g}_{2}}{3}\). Here \(a_{M}=\frac{a_{0}}{2\sin(\theta/2)}\approx\frac{a_{0}}{\theta}\) where \(a_{0}\) is the atomic lattice constant.
The parameters of the moire potential \(V_{l}(\mathbf{r})\), \(t(\mathbf{r})\) can be fitted to first-principles density functional theory calculations given symmetry constraints that we now discuss. The most general Fourier expansion of the intra-layer potential to the lowest harmonic is \(V_{l}(\mathbf{r})=-\sum_{i=1}^{6}V_{\mathbf{g}_{i}l}e^{i\phi_{\mathbf{g}_{i}l}}e^{i\mathbf{g} _{i}\cdot\mathbf{r}}\) where \(V_{\mathbf{g}_{i}}\) is real and the reality of \(V_{l}(\mathbf{r})\) requires \(\phi_{\mathbf{g}_{i}l}=-\phi_{-\mathbf{g}_{i}l}\). It follows from \(C_{3z}\) symmetry that
\[V_{l}(\mathbf{r})=-2V\sum_{i=1,3,5}\cos(\mathbf{g}_{i}\cdot\mathbf{r}+\phi_{l}). \tag{3}\]
Here, the origin of \(\mathbf{r}\) is defined to be at the center of an MM stacking region. Additionally, symmetry under a twofold rotation that interchanges the two layers of a twisted homobilayer requires \(V_{l}(\mathbf{r})=V_{l}(-\mathbf{r})\) and, in turn, \(\phi_{2}=-\phi_{1}\equiv\phi\). The same symmetry consideration also applies to the inter-layer tunneling term, which must take the general form
\[t(\mathbf{r})=w(1+e^{i\mathbf{g}_{2}\cdot\mathbf{r}}+e^{i\mathbf{g}_{3}\cdot\mathbf{r}}). \tag{4}\]
This model Hamiltonian has spin \(U(1)\) symmetry (\([S_{z},\mathcal{H}]=0\)), but _not_ spin \(SU(2)\) symmetry, a property that we will see enables robust Ising type ferromagnetism.
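To make the model concrete, the sketch below diagonalizes the spin-\(\uparrow\) Hamiltonian of Eqs. (1)-(4) in a plane-wave basis, using the tMoTe\({}_{2}\) parameters of Table 1; the twist angle, plane-wave cutoff, and choice of k points are numerical choices of this illustration rather than part of the model, and the convention \(\phi_{1}=-\phi\), \(\phi_{2}=\phi\) follows the text.

```python
import numpy as np

# Plane-wave diagonalization of the spin-up continuum model, Eqs. (1)-(4),
# with tMoTe2 parameters from Table 1 (phi_1 = -phi, phi_2 = +phi). Twist
# angle, cutoff and k points are illustrative numerical choices.

hbar2_2me = 3.80998            # hbar^2 / (2 m_e) in eV * Angstrom^2
a0, m_eff = 3.52, 0.62         # lattice constant [Angstrom], effective mass [m_e]
V, w, phi = 11.2e-3, -13.3e-3, np.deg2rad(-91.0)   # [eV], [eV], [rad]
theta = np.deg2rad(3.7)
aM = a0 / (2.0 * np.sin(theta / 2.0))
b = 4.0 * np.pi / (np.sqrt(3.0) * aM)
g = [b * np.array([np.cos(np.pi * (i - 1) / 3), np.sin(np.pi * (i - 1) / 3)])
     for i in range(1, 7)]                      # g[0] = g_1, ..., g[5] = g_6
kappa_p = (g[0] + g[1]) / 3.0                   # kappa_+ = (g_1 + g_2)/3
kappa_m = (g[0] + g[5]) / 3.0                   # kappa_- (g_6 plays the role of g_0)

N = 4                                           # plane-wave cutoff
Gs = np.array([m * g[0] + n * g[1] for m in range(-N, N + 1) for n in range(-N, N + 1)])
nG = len(Gs)

def hamiltonian(k):
    H = np.zeros((2 * nG, 2 * nG), dtype=complex)
    for a in range(nG):
        # kinetic terms: layer 1 measured from kappa_+, layer 2 from kappa_-
        H[a, a] = hbar2_2me / m_eff * np.sum((k + Gs[a] - kappa_p) ** 2)
        H[nG + a, nG + a] = hbar2_2me / m_eff * np.sum((k + Gs[a] - kappa_m) ** 2)
        for c in range(nG):
            dG = Gs[a] - Gs[c]
            for i in (0, 2, 4):                 # harmonics g_1, g_3, g_5 of Eq. (3)
                if np.allclose(dG, g[i]):
                    H[a, c] += -V * np.exp(-1j * phi)            # layer 1, phi_1 = -phi
                    H[nG + a, nG + c] += -V * np.exp(+1j * phi)  # layer 2, phi_2 = +phi
                elif np.allclose(dG, -g[i]):
                    H[a, c] += -V * np.exp(+1j * phi)
                    H[nG + a, nG + c] += -V * np.exp(-1j * phi)
            # interlayer tunneling, Eq. (4): t(r) = w (1 + e^{i g_2 r} + e^{i g_3 r})
            if np.allclose(dG, 0.0) or np.allclose(dG, g[1]) or np.allclose(dG, g[2]):
                H[a, nG + c] += w
                H[nG + c, a] += np.conj(w)
    return H

for label, k in [("gamma", np.zeros(2)), ("kappa_+", kappa_p), ("kappa_-", kappa_m)]:
    E = np.linalg.eigvalsh(hamiltonian(k))
    print(label, np.round(1e3 * E[:3], 2), "meV")   # lowest moire (hole) bands
```

Sweeping k along a path through the moire Brillouin zone and collecting the lowest eigenvalues should reproduce band structures of the kind shown in Fig. 2 and Fig. 3, from which quantities such as \(W_{1}\) and \(\Delta_{12}\) can be read off.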
_First principles calculations_ - We now compare the moire band structure of the continuum model with first-principles calculations on twisted bilayer MoTe\({}_{2}\). We perform large-scale density functional theory (DFT) calculations with the SCAN density functional [25] and the dDsC dispersion correction method, which captures the intermediate-range vdW interaction through its semilocal exchange term. We find that lattice relaxation has a dramatic effect on moire bands. Our DFT calculations at \(\theta=4.4^{\circ}\) with 1014 atoms per unit cell show that the layer distance \(d\) varies significantly in different regions of the moire superlattice, as shown in Fig. 2(a). The distance \(d=7.0\) Å is smallest in the MX and XM stacking regions, where the metal atom on the top layer is aligned with the chalcogen atom on the bottom layer and vice versa, while \(d=7.8\) Å is largest in the MM region where the metal atoms of both layers are aligned. With the fully relaxed structure, the low-energy moire valence bands of twisted bilayer MoTe\({}_{2}\) are found to come from the \(\pm K\) valley (shown in Fig. 1b).
In Fig. 2c, we compare the band structures of twisted bilayer MoTe\({}_{2}\) at \(\theta=4.4^{\circ}\) computed by large-scale DFT and by the continuum model. Remarkably, the low-energy part of the DFT band structure is well fitted by the continuum model band structure with the parameters stated in Table 1. Importantly, our direct large-scale DFT calculation reveals a significantly narrower moire bandwidth than reported in the previous model study [21]. Correspondingly, the intralayer potential \(V\) and interlayer tunneling strength \(w\) are significantly larger than previously thought.
_Twist angle dependence_ - As we showed recently [18], the moire band structure of twisted TMD homobilayers is highly tunable by the twist angle \(\theta\), which controls the moire period \(a_{M}\) and thereby the ratio of kinetic to moire potential energy. As the twist angle decreases, the moire band structure evolves from nearly-free-electron-like to the tight-binding regime. In Fig. 3, we show continuum model band structures and the corresponding charge densities at several twist angles. At \(\theta=1.2^{\circ}\), the lowest two moire bands have narrow widths \(\sim 1\) meV, which are well described by the Kane-Mele type tight binding model on the honeycomb lattice as we will elaborate later. The charge density of the lowest band exhibits sharp maxima at MX and XM stacking sites, which are local extrema of the intra-layer moire potential and form a honeycomb lattice.
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline Materials & \(\phi\) (deg) & \(V\) (meV) & \(w\) (meV) & \(m\) (\(m_{e}\)) & \(a_{0}\) (Å) \\ \hline tMoTe\({}_{2}\) & -91 & 11.2 & -13.3 & 0.62 & 3.52 \\ tWSe\({}_{2}\) & 128 & 9 & -18 & 0.43 & 3.32 \\ \hline \end{tabular}
\end{table}
Table 1: Continuum model parameters extracted from density functional theory calculations. Parameters for tMoTe\({}_{2}\) are from this work and those for tWSe\({}_{2}\) are from [18].
Figure 2: \(a)\) The interlayer distance of the twisted MoTe\({}_{2}\) structure obtained from DFT is shown, demonstrating a large variation between the MM and XM/MX regions. \(b)\) The continuum band structure (blue lines) is plotted in comparison with large scale DFT calculations (black dots) at twist angle \(\theta=4.4^{\circ}\), showing excellent agreement. Note that additional bands in DFT calculations come from \(\Gamma\) valley states.
We note that the layer polarization of these charge density peaks is strong and opposite between the two sublattices.
In the large angle limit where the kinetic energy dominates, the charge density will be more uniform. It exhibits shallower peaks on a triangular lattice of MM stacking regions where inter-layer tunneling is of maximum amplitude. The marked difference in the moire band structures at low and high twist angles is evidenced by the lowest moire band minimum changing from \(\gamma\) to \(\kappa_{+}/\kappa_{-}\). As we showed recently [18], the crossover between the two regimes dictates the existence of a "magic angle" at which the lowest moire band becomes extremely flat.
In Fig. 4, we also show the evolution of the width of the first band \(W_{1}\) as well as the difference between the average energies of the first two bands \(\Delta_{12}\equiv\sum_{\mathbf{k}}(\varepsilon_{2\mathbf{k}\uparrow}-\varepsilon_{1\bm {k}\uparrow})/\sum_{\mathbf{k}}1\) as a function of twist-angle for both WSe\({}_{2}\) and MoTe\({}_{2}\). For angles \(\gtrapprox 3^{\circ}\), the lowest moire bands acquire significant dispersion \(>10\) meV. The bands of MoTe\({}_{2}\) are narrower than those of WSe\({}_{2}\). As we will later elaborate, as long as this bandwidth is small compared to the system's characteristic interaction energy, it plays an insignificant role in determining the many-body ground state. \(\Delta_{12}\) in both cases monotonically increases with twist angle.
_Band topology_ - As first pointed out in the seminal work of Wu _et al_[21], moire bands of a given spin component in the continuum model for twisted bilayer TMD possess nonzero Berry curvature and have finite Chern numbers that satisfy \(C_{\uparrow}=-C_{\downarrow}\equiv C\) due to time reversal symmetry. As we will show later, the existence of topological bands in combination with small bandwidth at small twist angles is crucial for ferromagnetism and (integer and fractional) QAH states in this system.
Remarkably, the topological character of moire bands depends on the twist angle and shows two distinct regimes (see Fig. 3). Previous theoretical studies have mostly focused on the ultra-small twist angle regime (\(\theta<1.5^{\circ}\)), where low-energy states are localized on a honeycomb lattice. Correspondingly, the lowest two moire bands are isolated from higher bands and well described by a honeycomb lattice tight-binding model with Kane-Mele spin-orbit coupling [26]. The first and second band of spin \(\uparrow\) states in valley \(K\) has Chern numbers \(C_{\uparrow}=(-1,+1)\) respectively for MoTe\({}_{2}\)[21].
We now show that topological bands at larger twist angles have a different origin. This regime can be understood from a nearly-free-electron analysis. We treat the spatially varying intralayer moire potential \(V(\mathbf{r})\) and interlayer tunneling \(t(\mathbf{r})\) as perturbations to the free particle gas. The leading effect of these perturbations is to induce superlattice gaps where free particle states with momenta \(\mathbf{k}\) and \(\mathbf{k}+\mathbf{g}\) are degenerate and coupled by \(V(\mathbf{r})\), \(t(\mathbf{r})\), thereby leading to the formation of moire bands. The superlattice gap as well as the Bloch wavefunction of moire bands at high symmetry points can be calculated using degenerate perturbation theory. This approach, first introduced in Ref. [27], enables us to determine the Chern number of moire bands in a given valley in terms of the superlattice parameters \(V,w,\phi\).
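As a minimal illustration of this nearly-free-electron picture, the snippet below diagonalizes the \(2\times 2\) degenerate-perturbation Hamiltonian coupling two free-particle states \(|\mathbf{k}\rangle\) and \(|\mathbf{k}+\mathbf{g}\rangle\) through a single Fourier component \(U\) of the moire potential, which opens a superlattice gap of \(2|U|\). The kinetic energy is a placeholder value; the full analysis of Ref. [27] additionally tracks the resulting eigenvectors around the moire Brillouin zone to extract the Chern number.

```python
import numpy as np

def superlattice_gap(E0, U):
    """Two degenerate free-particle states |k> and |k+g> (energy E0) coupled by a
    Fourier component U of the moire potential: eigenvalues are E0 +/- |U|, so a
    superlattice gap 2|U| opens where the two states cross."""
    H = np.array([[E0, U], [np.conj(U), E0]])
    return np.linalg.eigvalsh(H)  # real eigenvalues, sorted ascending

# Placeholder: E0 = 100 meV, U = V * exp(i*phi) with the tMoTe2 intralayer
# parameters V = 11.2 meV and phi = -91 degrees from Table 1.
V, phi = 11.2, np.deg2rad(-91.0)
low, high = superlattice_gap(100.0, V * np.exp(1j * phi))
print("superlattice gap = %.1f meV" % (high - low))  # 2V = 22.4 meV
```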
Fig. 4 shows the Chern number \(C\) thus obtained as
Figure 3: (a) Continuum model bands of tMoTe\({}_{2}\) in valley \(K\) at several twist angles. Chern numbers of the first and second lowest bands are labeled in blue and black respectively. (b) Corresponding particle number density associated with the lowest band, \(\Lambda n(\mathbf{r})=A\sum_{\mathbf{k}}|\psi_{1\mathbf{k}\uparrow}(\mathbf{r})|^{2}\) where \(A\) is the moiré unit cell area. The lowest band, onto which we project the continuum model Hilbert space in the exact diagonalization calculation, is highlighted in blue.
Figure 4: (a) Bandwidth of the lowest moiré band (\(W_{1}\)) and difference in the average energy of two lowest moiré bands (\(\Delta_{12}\)) as a function of inter-layer twist angle \(\theta\) for MoTe\({}_{2}\) and WSe\({}_{2}\). Also shown is a characteristic interaction energy scale \(\frac{e^{2}}{\epsilon a_{M}}\) for \(\epsilon=10\). (b) Chern number of the lowest moiré band as a function of the dimensionless ratio of the inter- and intra-layer moiré potential strengths \(w/V\) as well as the phase parameter \(\phi\) demonstrating that WSe\({}_{2}\) and MoTe\({}_{2}\) have opposite Chern numbers in a given valley.
a function of continuum model parameters \(w/V\) and \(\phi\) (without loss of generality \(V\) is chosen to be positive). We further confirm by numerical calculation that the energy gap between the first and second moire band remains finite over the entire range of twist angles from large to small, despite that the moire band structure changes dramatically. Trivial bands (\(C=0\)) are found when the minima of the moire potential \(V(\mathbf{r})\) are located at MM sites (which form a triangular lattice), whereas topological bands (\(|C|=1\)) are formed when (1) the intra-layer potential minima are located at MX/XM sites (which form a honeycomb lattice) and (2) the interlayer tunneling \(w\) is not too large compared to the moire potential \(V\).
For a given valley/spin, the Chern numbers of the lowest moire band at \(\phi\) and \(-\phi\) are opposite. Our large-scale DFT calculations find \(\phi=-91^{\circ}\) for twisted bilayer MoTe\({}_{2}\) as presented above, and \(\phi=128^{\circ}\) for twisted bilayer WSe\({}_{2}\) as shown in Ref.[18]. Therefore, our theory predicts that the lowest moire bands in twisted MoTe\({}_{2}\) and WSe\({}_{2}\) homobilayers have opposite Chern numbers in a given valley.
Our conclusion about band topology is further confirmed by examining the Bloch wavefunction of moire bands in large-scale DFT calculation. The Chern number \(\mod 3\) can be computed from the symmetry eigenvalues of spin-\(\frac{1}{2}\) Bloch states under \(C_{3z}\) rotation [28]. \(C\mod 3=\frac{3}{2\pi}\arg(-\lambda_{\kappa_{+}}\lambda_{\kappa_{-}}\lambda_{\gamma})\), where \(\mathcal{R}_{\theta}\) is a rotation operator about the \(z\)-axis, and \(\lambda_{\mathbf{k}}=\langle u_{\mathbf{k}}|\,\mathcal{R}_{2\pi/3}\,|u_{\mathbf{k}}\rangle\) at three high symmetry points \(\mathbf{k}=\kappa_{+},\kappa_{-},\gamma\) are \(C_{3z}\) symmetry eigenvalues. The values for the symmetry eigenvalues \(\lambda\) for twisted bilayer MoTe\({}_{2}\) and WSe\({}_{2}\) are determined from DFT calculations (see Table 2). The Chern number is then determined from the symmetry eigenvalues to be \(C=-1\) and \(1\) respectively.
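The symmetry-eigenvalue formula is simple to evaluate; as a consistency check, the sketch below applies it to the valley-\(K\) eigenvalues listed in Table 2 and reproduces \(C\bmod 3\) for both materials.

```python
import numpy as np

def chern_mod3(lam_kappa_plus, lam_kappa_minus, lam_gamma):
    """C mod 3 = (3 / 2pi) * arg(-lam_{kappa+} * lam_{kappa-} * lam_{gamma})."""
    c = 3.0 / (2.0 * np.pi) * np.angle(-lam_kappa_plus * lam_kappa_minus * lam_gamma)
    return int(round(c)) % 3

w = np.exp(1j * np.pi / 3)
# Valley K eigenvalues from Table 2:
print("tMoTe2:", chern_mod3(w, w, np.conj(w)))          # 2, consistent with C = -1
print("tWSe2 :", chern_mod3(w, w, np.exp(1j * np.pi)))  # 1, consistent with C = +1
```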
Importantly, this difference in Chern number has observable consequences. According to the Streda formula, the Chern number determines the slope of the linear shift in carrier density with an applied magnetic field: \(\frac{\partial n}{\partial B}=\frac{e}{h}C\). Our theory predicts opposite slopes in the \(n\)-\(B\) dispersion of QAH states in twisted bilayer MoTe\({}_{2}\) and WSe\({}_{2}\), see Fig. 1.
Finally, we note that, as twist angle increases, the second band's Chern number changes sign from \(-C\) to \(C\) (\(C\) is the Chern number of first band) due to inversion with the third band at \(\gamma\) (see Fig. 3) [21]. This applies to both tMoTe\({}_{2}\) and tWSe\({}_{2}\).
## II Ising ferromagnetism and spin gap
We now turn to the many-body problem of a finite density of doped holes in twisted TMD homobilayers that interact with each other through Coulomb repulsion. The many-body continuum model Hamiltonian is given by
\[\begin{split} H&=H_{0}+V\\ V&=\frac{1}{2}\sum_{\sigma,\sigma^{\prime}}\int d\mathbf{r }d\mathbf{r}^{\prime}\psi_{\sigma}^{\dagger}(\mathbf{r})\psi_{\sigma^{\prime}}^{ \dagger}(\mathbf{r}^{\prime})V(\mathbf{r}-\mathbf{r}^{\prime})\psi_{\sigma^{\prime}}(\mathbf{ r}^{\prime})\psi_{\sigma}(\mathbf{r}).\end{split} \tag{5}\]
Here we use a long-range Coulomb interaction \(V(\mathbf{r})=\frac{e^{2}}{\epsilon r}\) in contrast to previous studies of FQAH in TMD moire superlattices using a dual-gate screened Coulomb interaction [19; 20; 24]. We perform an exact diagonalization calculation within the Fock space of the lowest Bloch band. Upon band projection, the many-body continuum model Hamiltonian is most conveniently written in momentum space as
\[\begin{split}\hat{H}&=\sum_{\mathbf{k},\sigma}\varepsilon _{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}^{\dagger}c_{\mathbf{k}\sigma}\\ &+\frac{1}{2}\sum_{\mathbf{k}^{\prime}\mathbf{p}^{\prime}\mathbf{k}\mathbf{p}, \sigma\sigma^{\prime}}V_{\mathbf{k}^{\prime}\mathbf{p}^{\prime}\mathbf{k}\mathbf{p};\sigma \sigma^{\prime}}c_{\mathbf{k}^{\prime}\sigma}^{\dagger}c_{\mathbf{p}^{\prime}\sigma^{ \prime}}^{\dagger}c_{\mathbf{p}\sigma^{\prime}}c_{\mathbf{k}\sigma}\end{split} \tag{6}\]
where \(c_{\mathbf{k}\sigma}^{\dagger}\) creates a Bloch state in the lowest band at crystal momentum \(\mathbf{k}\) and spin/valley \(\sigma\) with corresponding single-particle energy \(\varepsilon_{\mathbf{k}\sigma}\). \(V_{\mathbf{k}^{\prime}\mathbf{p}^{\prime}\mathbf{k}\mathbf{p};\sigma\sigma^{\prime}}\equiv\bra{\mathbf{k}^{\prime}\sigma;\mathbf{p}^{\prime}\sigma^{\prime}}\hat{V}\ket{\mathbf{k}\sigma;\mathbf{p}\sigma^{\prime}}\) are the corresponding matrix elements of the Coulomb interaction.
Projection to the lowest band neglects band mixing and therefore is quantitatively accurate only when the ratio of the characteristic Coulomb energy \(\frac{e^{2}}{\epsilon a_{M}}\) to the moire band gap is small. However, band projection is known to
\begin{table}
\begin{tabular}{l|l|l|l|l|} \hline Materials & Band, Valley & \(\kappa_{+}\) & \(\kappa_{-}\) & \(\gamma\) \\ \hline tMoTe\({}_{2}\) & 1, \(K\) & \(e^{i\pi/3}\) & \(e^{i\pi/3}\) & \(e^{-i\pi/3}\) \\ & 1, \(K^{\prime}\) & \(e^{-i\pi/3}\) & \(e^{-i\pi/3}\) & \(e^{i\pi/3}\) \\ tWSe\({}_{2}\) & 1, \(K\) & \(e^{i\pi/3}\) & \(e^{i\pi/3}\) & \(e^{i\pi}\) \\ & 1, \(K^{\prime}\) & \(e^{-i\pi/3}\) & \(e^{-i\pi/3}\) & \(e^{i\pi}\) \\ \hline \end{tabular}
\end{table}
Table 2: \(C_{3z}\) eigenvalues of the topmost moire bands from each valley, computed from large-scale DFT wavefunctions at high symmetry momentum points.
Figure 5: Finite sized clusters used in the exact diagonalization calculations in real space (a) and momentum space (b). The 27-unit-cell cluster can be viewed as a 9-unit-cell cluster with a tripled unit cell.
be qualitatively correct in the study of fractional quantum Hall states in lowest Landau level, even when this dimensionless parameter is not small [29; 30]. A follow up study of twisted TMD bilayers addressing band mixing is being prepared and will be presented elsewhere.
In performing the calculation, we take advantage of the model's charge-\(U(1)\), spin-\(U(1)\), and translation symmetries to diagonalize within common eigenspaces of \(N\), \(S_{z}\), and crystal momentum. We perform exact diagonalization (ED) study on three clusters of different sizes and geometries illustrated in Fig. 5 using periodic boundary conditions.
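A minimal sketch of the bookkeeping behind such a symmetry-resolved calculation is shown below: for a small cluster with an \(N_{1}\times N_{2}\) momentum grid and two spin/valley flavors, it enumerates the band-projected Fock states in each total-crystal-momentum sector at fixed \(N\) and \(S_{z}\). The cluster geometry, orbital ordering, and the Hamiltonian matrix elements themselves are omitted; the point is only how the Hilbert space is blocked before diagonalization.

```python
from itertools import combinations

def momentum_sectors(N1, N2, N, two_Sz):
    """Group band-projected Fock states by total crystal momentum (K1, K2), defined
    modulo the N1 x N2 momentum grid, at fixed particle number N and 2*Sz = n_up - n_dn.
    Single-particle orbitals are indexed k = k1 + N1 * k2."""
    nk = N1 * N2
    kvec = [(k % N1, k // N1) for k in range(nk)]
    n_up = (N + two_Sz) // 2
    n_dn = N - n_up
    sectors = {}
    for occ_up in combinations(range(nk), n_up):
        for occ_dn in combinations(range(nk), n_dn):
            occ = occ_up + occ_dn
            K = (sum(kvec[k][0] for k in occ) % N1,
                 sum(kvec[k][1] for k in occ) % N2)
            sectors.setdefault(K, []).append((occ_up, occ_dn))
    return sectors

# Fully polarized sector at n = 2/3 on a hypothetical 3x3 cluster (6 holes):
for K, states in sorted(momentum_sectors(3, 3, N=6, two_Sz=6).items()):
    print(f"K = {K}: {len(states)} basis states")
```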
We begin with an analysis of magnetism in tMoTe\({}_{2}\) across a broad range of filling factors \(n\leq 1\). In Fig. 6, we plot \(\Delta_{S}\equiv E_{\text{min}}(S_{z})-E_{\text{min}}(S_{z\text{max}})\) for \(S_{z}\geq 0\) as a function of the filling factor \(n=N_{h}/N_{uc}\) on the 15-unit-cell torus with a fixed value of \(\epsilon^{-1}=0.1\) and several twist angles. Here \(E_{\text{min}}(S_{z})\) is the minimum energy within a given \(S_{z}\) sector, \(N_{h}\) is the number of holes, and \(N_{uc}\) is the number of moire unit cells.
At \(\theta=2^{\circ}\), the lowest energy state is fully spin polarized for all filling factors \(0.27\leq n\leq 1\) (the lower bound precision is limited by system size), showing robust spin/valley ferromagnetism in \(t\)MoTe\({}_{2}\). The spin gap (defined as the minimum of \(\Delta_{S}\) with \(S_{z}\neq S_{z\text{max}}\)), which controls the Curie temperature and coercive field, is maximum at \(n=1\) and generally decreases with decreasing \(n\). Notably, we find the spin gap at \(n=1\) exceeds 10 meV. A similar conclusion was reached for twisted TMD bilayers at smaller twist angles [20; 18]. Here, we find that Ising ferromagnetism and a large spin gap (\(>10\) meV) at \(n=1\) persist to much larger twist angles, as shown for \(\theta=2.5^{\circ}\) and \(3.5^{\circ}\). On the other hand, ferromagnetism at low filling \(n<0.4\) is less robust and disappears at \(\theta=3.5^{\circ}\) for \(\epsilon^{-1}=0.1\).
At \(\theta=2^{\circ}\), the spin gap clearly exhibits local maxima at filling factors \(n=\frac{1}{3}\) and \(\frac{2}{3}\), where FQAH states are formed as we show later. Notably, the spin gap at \(n=\frac{2}{3}\) is much larger than at \(n=\frac{1}{3}\). For \(\epsilon^{-1}=0.1\), at \(\theta=2.5^{\circ}\), the FQAH state only appears at \(n=\frac{2}{3}\) where the spin gap is still weakly enhanced, but not at \(n=\frac{1}{3}\). At \(\theta=3.5^{\circ}\), the ground state at \(n=\frac{2}{3}\) is a fully polarized Fermi liquid whose spin gap does not show any prominent feature, while the state at \(n=\frac{1}{3}\) is non-magnetic.
Consistent with our numerical findings, magnetic circular dichroism measurements on twisted bilayer MoTe\({}_{2}\)[22] observed robust Ising ferromagnetism over a broad range of hole fillings between \(n\sim 0.4\) and \(1\), with a maximum Curie temperature of 15 K at \(n=1\). Moreover, the coercive field is enhanced at \(n=\frac{2}{3}\), in agreement with the enhanced spin gap as shown in Fig.6, which is due to the formation of FQAH state as we demonstrate below.
Our calculation shows that Ising ferromagnetism in \(t\)MoTe\({}_{2}\) appears not only at \(n=1,\frac{2}{3}\) and \(\frac{1}{3}\), but throughout a broad range of filling factors below \(n=1\) where the system is insulating or metallic alike. As a consequence of Ising ferromagnetism and Berry flux in moire bands, we predict an anomalous Hall effect over a broad range of fillings at and below \(n=1\) (as found in \(t\)WSe\({}_{2}\)[20]). In particular, a quantized anomalous Hall effect is expected at \(n=1\) and certain fractional filling factors that support FQAH insulators.
From now on, we systematically study the many-body spectrum in the fully spin polarized sector at \(n=\frac{2}{3}\) and \(\frac{1}{3}\), for various twist angles and interaction strengths \(\epsilon^{-1}\). We note that, at \(n=\frac{1}{3}\) and large twist angles, the ground state may not be fully spin-polarized at zero field (see Fig. 6). We leave further investigation of spin physics at \(n=\frac{1}{3}\) to future study.
## III FQAH and competing phases
In Fig. 7 (a), we show the many body spectra obtained for tMoTe\({}_{2}\) on the 30-unit-cell cluster at \(\theta=2^{\circ}\) and \(n=\frac{1}{3}\), \(\frac{2}{3}\) as a function of crystal momentum. We assign each crystal momentum \(\mathbf{k}=k_{1}\mathbf{T}_{1}+k_{2}\mathbf{T}_{2}\) an integer index \(k=k_{1}+N_{1}k_{2}\) where \(N_{i}\) is the number of crystal momenta along axis \(i\). Here \(\mathbf{T}_{i}=\frac{2\pi\epsilon_{ij}\mathbf{L}_{j}\times\hat{z}}{A}\) is a basis vector of crystal momentum, \(\mathbf{L}_{i}\) defines the periodic boundary condition in real space, and \(A=|\mathbf{L}_{1}\times\mathbf{L}_{2}|\) is the system area. At both fillings, we find 3 nearly degenerate ground states separated by a sizable energy gap \(\sim 2\)
Figure 6: \(\Delta_{S}\equiv E_{\text{min}}(S_{z})-E_{\text{min}}(S_{z\text{max}})\) across all possible \(S_{z}\) values within 15 meV cutoff as a function of the filling factor \(n\equiv N_{h}/N_{uc}\) on the 15-unit-cell cluster at fixed \(\epsilon^{-1}=0.1\) and several \(\theta\).
meV from excited states. The approximate ground state degeneracy matches the expectation of a fractional quantum Hall state on a torus. We note that imperfect ground state degeneracy is expected in a finite system. We have tested several cluster sizes with all other parameters fixed and find that the gap remains \(\sim 2\) meV, indicating its presence in the thermodynamic limit. The differences in crystal momenta among states in the ground state manifold respect the "generalized Pauli principle" rules for FQAH states developed in Ref. [31].
In addition to the threefold ground state degeneracy, a necessary property of an \(n=\frac{p}{q}\) fractional quantum Hall state is that its ground states on a torus permute upon insertion of \(2\pi\) magnetic flux such that each state returns to itself only after insertion of \(q\) flux quanta. Flux insertion induces a shift in one component of the kinetic momentum \(\mathbf{\pi}=\mathbf{p}+\frac{\Phi}{2\pi}\mathbf{T}_{i}\) where \(\Phi\equiv\frac{\phi}{\phi_{0}}\), \(\phi\) is the inserted flux, \(\phi_{0}=\frac{hc}{e}\) is the flux quantum. In Fig. 7(a), we show that both \(n=\frac{1}{3}\) and \(\frac{2}{3}\) exhibit this spectral flow, providing definitive evidence of their FQAH nature.
A change in the twist angle \(\theta\) induces a change in (1) the Bloch wavefunctions of the lowest band, (2) the band dispersion and bandwidth, and (3) the system's characteristic interaction energy scale \(\frac{e^{2}}{\epsilon a_{M}}\). The band dispersion governs the kinetic energy \(H_{0}\). At large twist angles where the lowest moire band is highly dispersive, the ground state at fractional fillings is expected to be a Fermi liquid. The Bloch wavefunctions determine the form of the band-projected interaction \(V\) through the Coulomb matrix elements. Therefore, at a given filling factor, the system can exhibit distinct many-body ground states as a function of twist angle even when the band dispersion is neglected altogether. Thus, the influence of twist angle is multifold and needs systematic study.
An obvious candidate ground state in the presence of strong, long-range Coulomb repulsion is a charge density wave (CDW). Such states are experimentally observed in TMD moire hetero-bilayers where they are known as generalized Wigner crystals [32; 33; 34; 35]. To address the possible competition between FQAH and CDW with exact diagonalization, it is essential to choose a cluster that accommodates a tripled unit cell or, equivalently, samples \(\gamma,\kappa_{+}\), and \(\kappa_{-}\). The 27-unit-cell cluster depicted in Fig. 5 satisfies this criterion. In Fig.7(b) we show spectra obtained at a larger twist angle \(\theta=2.5^{\circ}\) using this cluster. At \(n=\frac{2}{3}\), we find three nearly degenerate ground states at \(\gamma\), indicative of FQAH. On the other hand, at \(n=\frac{1}{3}\), we find three nearly degenerate states with one at each of \(\gamma,\kappa_{+}\) and \(\kappa_{-}\). These are the momenta appropriate to a charge density wave with a tripled unit cell because they fold back to \(\gamma\) in the symmetry-broken Brillouin zone.
To reveal the influence of twist angle on many-body ground states at \(n=\frac{1}{3}\) and \(\frac{2}{3}\), in Fig. 8 we plot the energy gap \(E_{\text{gap}}=E_{4}-E_{3}\) as a function of \(\theta\), where \(E_{i}\) is the \(i^{th}\) lowest energy with maximum spin \(S_{z}\). Two values of the dielectric constant, \(\epsilon^{-1}=0.1\), \(0.2\) are used. When the system is a correlated insulator with threefold ground state degeneracy such as FQAH or CDW, \(E_{\text{gap}}\) is indicative of its robustness.
For \(\epsilon^{-1}=0.1\), we see that both the \(n=\frac{1}{3}\), \(\frac{2}{3}\) states exhibit maxima in \(E_{\text{gap}}\) near \(\theta=1.8^{\circ}\). Beyond \(\theta\approx 1.8^{\circ}\), \(E_{\text{gap}}\) decreases at both fractions, but more rapidly so at \(n=\frac{1}{3}\) where it reaches zero near \(\theta\approx 2.3^{\circ}\) and then increases again. The many-body spectra on both sides of this gap-closing transition (not shown) have three nearly degenerate ground states. However, the ground states at \(\theta<2.3^{\circ}\) have the crystal momenta of the FQAH state as shown in Fig. 7(a), whereas those at \(\theta>2.3^{\circ}\) have the crystal momenta of the CDW state as shown in Fig. 7(b). Thus, we conclude that at the fractional filling \(n=\frac{1}{3}\), a quantum phase transition between FQAH and CDW occurs around \(\theta\approx 2.3^{\circ}\).
The situation is markedly different at \(n=\frac{2}{3}\). In this case, \(E_{\text{gap}}\) remains finite until \(\theta\approx 3.0^{\circ}\), beyond which it is very small. The many-body spectrum shows a continuum of states at low energy, indicating a metallic phase in the thermodynamic limit. These results clearly show that the FQAH state at \(n=\frac{2}{3}\), previously overlooked in theoretical studies of twisted homobilayer TMD [19; 20],
Figure 7: (a) Many-body spectra of tMoTe\({}_{2}\) within the fully polarized sector (\(S_{z}=S_{\text{max}}\)) on the 30-unit-cell cluster at \(n=\frac{1}{3}\), \(\frac{2}{3}\). We use \(\theta=2.0^{\circ}\), \(\epsilon=10\). At the top we show the ground state manifold’s spectral flow under flux insertion demonstrating its FQAH nature. The four lowest states within each crystal momentum sector are shown. (b) Same as (a) except at a larger twist angle \(\theta=2.5^{\circ}\) and on the 27-unit-cell cluster. The lowest 6 states within each momentum sector are shown. The spectrum at \(n=\frac{2}{3}\) indicates FQAH whereas at \(n=\frac{1}{3}\) indicates a CDW.
persists to a substantially higher twist angle than that at \(n=\frac{1}{3}\).
When \(\epsilon^{-1}=0.2\), the dependence of the \(n=\frac{1}{3}\) state on \(\theta\) is largely similar to when \(\epsilon^{-1}=0.1\), save for an expected increase of \(E_{\rm gap}\) due to the increased Coulomb interaction. On the other hand, in the case of \(n=\frac{2}{3}\), the increased interaction pushes the FQAH-metal transition to \(\theta\approx 3.8^{\circ}\), thereby significantly expanding the twist angle range of the FQAH state.
These numerical results provide valuable insight into the competition between FQAH, CDW, and metallic phases. At small twist angles, the bands are narrow enough (see Fig. 4) that for both \(\epsilon^{-1}=0.1\) and \(0.2\) the system is in its flat band limit \(\frac{e^{2}}{\epsilon a_{M}}/W\gg 1\). The many-body ground state is thus determined primarily by the projected interaction term which is in turn determined by the Bloch wavefunctions. For \(\theta\lessapprox 2.3^{\circ}\), the FQAH state is preferred at both fillings. On the other hand, at large twist angles, the bandwidth becomes sizable and is crucial in the competition between the FQAH and metallic phases at \(n=\frac{2}{3}\).
To disentangle the effect of bandwidth from that of Bloch wavefunction, we study the FQAH-metal transition at \(n=\frac{2}{3}\) for a fixed \(\theta=3.5^{\circ}\), tuned by the interaction strength \(\epsilon^{-1}\). Changing \(\epsilon^{-1}\) does not affect the band wavefunction, but tunes the ratio of bandwidth and interaction energy. Fig. 9 shows the many-body spectra at \(\epsilon^{-1}=0.4,0.2,0.05\). While \(\epsilon^{-1}=0.4\) is likely larger than experimental values, it provides useful insight into the strong coupling limit in a similar spirit to the band-flattening approach [36].
Starting from the strong interaction limit \(\epsilon^{-1}=0.4\), it is clear that \(n=\frac{2}{3}\) exhibits FQAH with 3 well isolated, nearly degenerate states at \(\gamma\) as expected from the generalized Pauli principle rules. As the interaction decreases, the energy gap at momenta \(\kappa_{+}\) and \(\kappa_{-}\) softens (\(\epsilon^{-1}=0.2\)), before the metallic state with a continuum of low-lying states appears (\(\epsilon^{-1}=0.05\)). The nature of this FQAH-metal transition at \(n=\frac{2}{3}\) is an interesting and important question that calls for further study.
## IV \(n=\frac{1}{3}\) versus \(\frac{2}{3}\)
We have shown by exact diagonalization study that AA-stacked TMD moire homobilayers exhibit robust Ising-type spin/valley ferromagnetism across a wide range of carrier densities within the lowest moire band. Since the valley-polarized moire bands carry finite Chern number, anomalous Hall effect is expected throughout. At particular fractional filling factors \(n=\frac{p}{q}=\frac{1}{3}\), \(\frac{2}{3}\) we predict fractional quantum anomalous Hall states with corresponding quantized Hall conductances \(\sigma_{H}=\frac{p}{q}\frac{e^{2}}{h}\). Using continuum model parameters obtained from our first-principles band structure calculations, our study finds that the topological gap of FQAH states in twisted bilayer MoTe\({}_{2}\) is largest near \(\theta\approx 2^{\circ}\).
At larger twist angles, the \(n=\frac{1}{3}\) state gives way to a CDW near \(\theta\approx 2.3^{\circ}\) whereas the \(n=\frac{2}{3}\) state persists to a larger angle, eventually becoming metallic. As interaction strength increases, the FQAH regime at \(n=\frac{2}{3}\) extends to higher angles, suggesting that the FQAH-metal transition is primarily bandwidth controlled. On the other hand, the critical angle of the FQAH-CDW transition at \(n=\frac{1}{3}\) is weakly dependent on interaction strength, indicating that it is instead controlled by a change in the Bloch wavefunctions.
The difference between \(n=\frac{1}{3}\) and \(\frac{2}{3}\) filling states is noteworthy and interesting. Recall that the ground states of a Landau level at filling factors \(n\) and \(1-n\) are simply related by a particle-hole transformation that leaves the projected Hamiltonian invariant. This is not the case in our system. In particular, at large twist angles, the \(\frac{1}{3}\)- and \(\frac{2}{3}\)-filling ground states are distinct phases of matter: CDW and Fermi liquid respectively.
To understand the contrast between \(n=\frac{1}{3}\) and \(\frac{2}{3}\), we note that the band-projected Hamiltonian within the
Figure 9: Low-lying spectra at \(\theta=3.5^{\circ}\) for several values of the inverse dielectric constant \(\epsilon^{-1}\) at \(n=\frac{2}{3}\). For small \(\epsilon^{-1}=0.1\) (weak interaction) the system is not in a FQAH state whereas for \(\epsilon^{-1}\gtrapprox 0.2\), it is. All energy levels in the ferromagnetic sector and the given window are shown.
fully spin-polarized sector \(S_{z}=S_{z\text{max}}\) is _not_ symmetric under particle-hole transformation \(c_{\uparrow}^{\dagger}(\mathbf{r})\to d_{\uparrow}(\mathbf{r})\). In particular, unlike a single particle added to an otherwise empty moire band, a single particle removed from an otherwise full moire band has an interaction-induced dispersion present even when the bare bandwidth \(W\) vanishes [37; 38; 39].
We now show that the interaction-induced asymmetry between particle and hole dispersion provides a natural explanation of the difference between \(n=\frac{1}{3}\) and \(\frac{2}{3}\) filling states found in our exact diagonalization study. Notably, we find that at large twist angles, a single hole at \(n=1\) is more dispersive than a single particle at \(n=0\). Therefore, in the presence of Coulomb interaction, the system at low filling \(\delta\ll 1\) is more susceptible to Wigner crystallization into a CDW state than at the filling \(1-\delta\). This explains our finding of CDW at \(n=\frac{1}{3}\) and Fermi liquid at \(\frac{2}{3}\).
## V Discussion
We have also obtained strong numerical evidence for FQAH states at filling factors \(n=\frac{2}{5},\frac{3}{5}\). For \(\theta=2^{\circ}\), \(\epsilon^{-1}=0.1\) and at both fillings, calculations on the 30-unit cell-cluster show fivefold nearly degenerate ground states separated from the continuum by an energy gap \(>1\) meV.
As noted above, previous exact diagonalization studies of FQAH states in AA-stacked TMD homo-bilayers have focused on the ultra-low twist-angle regime \(\theta<1.5^{\circ}\). This is where the lowest moire band satisfies various conditions purported in the literature to support FQAH states, including nearly vanishing bandwidth and quantum geometric properties--Berry curvature uniformity and "trace condition" [40; 41].
Our work has clearly shown that FQAH states in twisted bilayer MoTe\({}_{2}\) extend to significantly larger twist angles, where the lowest moire band has a much larger bandwidth and thus is far from the flat band limit. Nonetheless, the Coulomb energy scale \(e^{2}/(\epsilon a_{M})\) also increases with \(\theta\), driving the formation of FQAH and CDW phases. Remarkably, at large twist angles, FQAH state is found at \(\frac{2}{3}\) filling but not \(\frac{1}{3}\). This contrast is explained naturally in terms of interaction induced renormalization of kinetic dispersion at \(n=1-\delta\). Our findings point to the surprising richness and robustness of FQAH physics beyond flat band and ideal quantum geometry.
Consistent with recent experimental observations [22; 23], our band-projected exact diagonalization study also shows a robust integer quantum anomalous Hall effect at \(n=1\) protected by a large spin gap. Here, we also note the possibility of topologically trivial, layer-polarized states at \(n=1\) as well as fractional fillings, especially at small twist angles where the band gap is small. To faithfully describe such states requires the inclusion of at least two lowest bands [42], which goes beyond our single-band calculation. As mentioned above, an investigation of band mixing effects is currently underway and results will be presented elsewhere.
A straightforward extension of our work is to understand the influence of displacement field on the QAH states. Generally speaking, stronger displacement field should drive the system into topologically trivial, layer-polarized states [21]. Very recently, the ability to tune a topological phase transition from the integer QAH state to a topologically trivial state at \(n=1\) has been demonstrated experimentally in twisted bilayer WSe\({}_{2}\)[23].
Finally, we highlight the prospect of QAH beyond the lowest moire band. Indeed, integer QAH at \(n=3\) has already been observed in tWSe\({}_{2}\)[23], and the possibility of \(n\geq 1\) fractional states is enticing.
## VI Acknowledgement
We thank Xiaodong Xu for collaboration on a closely related experiment on \(t\)MoTe\({}_{2}\)[22], Ben Foutty and Ben Feldman for collaboration on a closely related experiment on \(t\)WSe\({}_{2}\)[23], Valentin Crepel for previous collaboration on related theoretical works [18; 20], as well as Di Luo and Patrick Ledwith for helpful discussions. This work was supported by the Air Force Office of Scientific Research (AFOSR) under award FA9550-22-1-0432 and the David and Lucile Packard Foundation. Y.Z. acknowledges support from the start-up fund at the University of Tennessee. F. A acknowledges support from the KAUST Gifted Students Program and the Undergraduate Research Opportunities Program at MIT.
_Note added_: we recently became aware of independent work on similar topics [43].
|
2305.13256 | TaskWeb: Selecting Better Source Tasks for Multi-task NLP | Recent work in NLP has shown promising results in training models on large
amounts of tasks to achieve better generalization. However, it is not
well-understood how tasks are related, and how helpful training tasks can be
chosen for a new task. In this work, we investigate whether knowing task
relationships via pairwise task transfer improves choosing one or more source
tasks that help to learn a new target task. We provide TaskWeb, a large-scale
benchmark of pairwise task transfers for 22 NLP tasks using three different
model types, sizes, and adaptation methods, spanning about 25,000 experiments.
Then, we design a new method TaskShop based on our analysis of TaskWeb.
TaskShop uses TaskWeb to estimate the benefit of using a source task for
learning a new target task, and to choose a subset of helpful training tasks
for multi-task training. Our method improves overall rankings and top-k
precision of source tasks by 10% and 38%, respectively. We also use TaskShop to
build much smaller multi-task training sets that improve zero-shot performances
across 11 different target tasks by at least 4.3%. | Joongwon Kim, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi | 2023-05-22T17:27:57Z | http://arxiv.org/abs/2305.13256v2 | # TaskWeb: Selecting Better Source Tasks for Multi-task NLP
###### Abstract
Recent work in NLP has shown promising results in training models on large amounts of tasks to achieve better generalization. However, it is not well-understood how tasks are related, and how helpful training tasks can be chosen for a new task. In this work, we investigate whether knowing task relationships via pairwise task transfer improves choosing one or more source tasks that help to learn a new target task. We provide TaskWeb, a large-scale benchmark of pairwise task transfers for 22 NLP tasks using three different model types, sizes, and adaptation methods, spanning about 25,000 experiments. Then, we design a new method TaskShop based on our analysis of TaskWeb. TaskShop uses TaskWeb to estimate the benefit of using a source task for learning a new target, and to choose a subset of helpful training tasks for multi-task learning. Our method improves overall rankings and top-\(k\) precision of source tasks by 12% and 29%, respectively. We also use TaskShop to build smaller multi-task training sets that improve zero-shot performances across 11 different target tasks by at least 4.3%.
## 1 Introduction
Recent studies have revealed that large language models are able to generalize to unseen tasks when jointly trained on many different tasks, with their performance scaling to the size and diversity of the training data Sanh et al. (2022); Wang et al. (2022); Wei et al. (2022); Chung et al. (2022); Longpre et al. (2023). As more and more tasks are added to build general-purpose models, it has been noted that knowing inter-task relationships may be helpful but that it remains unclear how to select helpful tasks for multi-task learning Ye et al. (2021); Min et al. (2022); Asai et al. (2022); Chan et al. (2022).
In this work, we investigate whether quantifying the relationship between different NLP tasks via pairwise task transfer helps _task selection_, which we define as choosing one or more source tasks that better initialize a model for an unseen target task as shown in Figure 1. We begin from a pairwise setup as it is often used to quantify task relationships Zamir et al. (2019); Vu et al. (2020) and is more tractable than larger combinations of tasks.
First, we construct TaskWeb, a large-scale benchmark for pairwise task transfer experiments which span different model architectures (encoder-only, decoder-only, encoder-decoder), parameter count (60M to 770M) and three adaptation methods including finetuning, Adapter-tuning Houlsby et al. (2019) and BitFit Zaken et al. (2022) for every pair of tasks in our setup, resulting in 25,000 transfers. From our results, we discover a _transitive_ property where having strong, positive transfers \(\text{A}\rightarrow\text{B}\) and \(\text{B}\rightarrow\text{C}\) for tasks A, B and C makes it more likely that \(\text{A}\rightarrow\text{C}\) is also a positive transfer.
Then, we introduce a new method TaskShop that predicts the transferability from a source task to an unseen target task only associated with a few examples. TaskShop builds upon the transi
Figure 1: We use pairwise transfer scores in TaskWeb to score (source, target) pairs where the source task is in TaskWeb and the target task is unseen (i.e., access to only a few examples). Then, we select helpful tasks and perform multi-task learning for the target task.
tive behavior described above to construct different paths with "pivot" tasks between the source and target tasks. It combines pairwise transfer scores between the source and pivot in TaskWeb and textual similarity scores between the pivot and target to estimate (source\(\rightarrow\)target) transfer scores.
We evaluate our methods in both single-task and multi-task settings. First, we show that TaskShop assigns better transferability scores both in terms of the overall ranking and identifying top helpful tasks. Then, we demonstrate that models trained on a small multi-task set built with TaskShop outperform models trained on larger sets of tasks. We perform a detailed analysis on the size and contents of the multi-task training sets. Here, we find that choosing five tasks with TaskShop results in the best performance overall, and that the proportion of helpful tasks in the training set affects performance.
To summarize, the contributions of our work are as follows:
1. We build and analyze TaskWeb, a benchmark of pairwise transfer experiments across various tasks, models and adaptation methods.
2. We define task selection for single-task and multi-task setups and propose TaskShop which uses pairwise transfer scores to predict transfer to an unseen target task.
3. We use TaskShop and TaskWeb to choose helpful source tasks and build small multi-task training sets that result in better zero-shot performance for unseen targets.
## 2 Background and Overview
### Background
This work aims to conduct an extensive analysis of pairwise task transfer to discover task similarities and select better source tasks for unseen tasks.
Pairwise Task Transfer.We define pairwise task transfer as a process of sequentially learning one task--the _source task_--and then another task--the _target task_. Given a source task \(s\) and a target task \(t\), we quantify the benefit of initializing a model on \(s\) for learning \(t\) compared to directly learning \(t\). We hypothesize that evaluating this transfer provides a measure of how knowledge contained in the source helps with learning the target.
Pairwise task transfer, also known as intermediate task transfer, is used to explore and quantify relationships between different tasks in computer vision Zamir et al. (2019); Achille et al. (2019). In NLP, it is used to measure task similarity Vu et al. (2020); Poth et al. (2021), analyze factors that impact task transfer Pruksachatkun et al. (2020); Albalak et al. (2022) and identify helpful source tasks for parameter-efficient methods used on target tasks Vu et al. (2022); Su et al. (2022); Asai et al. (2022). Building upon previous work, we address more diverse task categories, models, and adaptation methods to investigate pairwise task transfer.
Task Selection and Multi-Task Learning.In this work, task selection refers to identifying tasks that are helpful for learning a new task. Given a target task \(t\) with \(n\) examples \(x_{1},\dots,x_{n}\) and a set of source tasks \(S\), we select a task \(s\in S\) for \(t\). Here, we assume that the target task is _unseen_, that is, we only have access to a small number of examples from \(t\) (\(n\leq 32\)), but not the full training set or a model finetuned on \(t\). On the other hand, we assume that each source task is _seen_, where we have access to the full training set, associated models and results including our pairwise transfer scores.
Many studies have used task selection to better initialize models for learning new tasks. Some methods assume access to the entire training set and model Vu et al. (2020); Poth et al. (2021); Vu et al. (2022); Su et al. (2022), while other methods only use a small portion of the training data Jang et al. (2023); Paranjape et al. (2023). We build upon the second case and restrict access to a small number of target examples to select helpful source tasks.
A potential but unexplored application of task selection is multi-task learning. By selecting tasks \(k>1\) times, we obtain a set of \(k\) source tasks \(S=\{s_{1},\dots,s_{k}\}\) as a training set for multi-task learning. We use task selection to build small multi-task training sets and quickly train models for \(t\).
Multi-task learning has been used to train models that perform well across many different tasks Khashabi et al. (2020); Mishra et al. (2022); Sanh et al. (2022). While studies report that adding more tasks generally improves performance Aghajanyan et al. (2021); Wei et al. (2022); Wang et al. (2022), others report that using a subset of tasks provides better performance Padmakumar et al. (2022); Chan et al. (2022) but that it is not clear how to identify such a subset Aribandi et al. (2022). Previous work retrieves the nearest-\(k\) source examples similar to target examples Ivison et al. (2022). However, we take a simpler approach by selecting helpful _tasks_ to assemble multi-task training sets.
### Overview
Our workflow is visualized in Figure 2, where we show that probing task relationships via pairwise transfer helps us select beneficial source tasks and improve multi-task learning for specific targets with much less training data.
We first introduce TaskWeb, a collection of 22 diverse, high-resource tasks in NLP and their pairwise task transfer scores across seven different training setups (Section 3.2). From our analysis, we discover that pairwise task transfer does not show strong commutativity but indicates _transitive_ behavior between positive transfers (Section 3.3).
Next, we propose a new method TaskShop that uses the transitive behavior to select the best source task to transfer to an unseen target task, even without access to pairwise transfer scores for the target (Section 4.1). We extend task selection from a single-task to a multi-task setting and build small, target-specific multi-task training sets (Section 4.2). We report experiment results in both settings (Section 5.1, 5.2) with detailed analysis (Section 5.3).
## 3 TaskWeb: A Benchmark for Pairwise Task Transfer
While previous studies explore pairwise task transfer, they use a single model and adaptation method, or use a limited number of tasks in a specific task domain (Vu et al., 2020; Poth et al., 2021; Albalak et al., 2022). We introduce TaskWeb, which consists of pairwise task transfer experiments that span a wide variety of tasks, models, and adaptation methods. TaskWeb can be used as a benchmark to evaluate task transferability, and as a repository for selecting helpful source tasks (Section 4).
### Focus and Experimental Setup
Tasks.To build TaskWeb, we choose a set of 22 representative tasks in NLP that span diverse categories and require various forms of knowledge, as shown in Table 1. We perform transfer between every pair of tasks using different models and adaptation settings, leading to about 25,000 transfers.1
Footnote 1: We use SQuAD2.0 only as a source task due to difficulties associated with running SQuAD evaluation for all transfers.
Training Procedure.We finetune a pre-trained language model on the full dataset associated with a source task \(s\), and further finetune the model on a set of 1,000 random examples of the target task \(t\).2
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Category** & **Tasks** \\ \hline NLI/Entailment & ANLI, CB, QNLI, RTE, SciTail, SNLI \\ Paraphrase & MRPC, QQP, STSB \\ Sentiment & IMDB, Rotten Tomatoes \\ Commonsense & COPA, CosmosQA, HellaSwag, PIQA, Quartz, SocialIQA, Winogrande \\ Semantics & WiC, WSC \\ QA & BoolQ, SQuAD2.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: All tasks used in our pairwise transfer experiments, grouped by high-level task categories. Citations for all datasets are provided in Table 7 in the appendix.
Figure 2: Overview of single and multi-task selection using TaskShop and TaskWeb. Section 3 describes the pairwise task transfer involved in TaskWeb as well as its analysis. Section 4 details the TaskShop method we propose for assigning scores to source tasks for a target task, and defines task selection in single task and multi-task setups. Section 5 presents our experiments both in single task and multi-task settings and shares in-depth analyses.
Then, we evaluate the performance gain from using \(s\) as the source task by comparing its performance to a model trained on the same subset of \(t\) without using \(s\). We repeat this process over eight random seeds to reduce variability (Dodge et al., 2020).
Models.We employ three different categories of models--T5 (encoder-decoder; Raffel et al., 2020), GPT-2 (decoder-only; Radford et al., 2019) and RoBERTa (encoder-only; Liu et al., 2019)--to study the impacts of model architecture on task transfer. For T5, we use the LM-adapted version3(Lester et al., 2021) with small/base/large sizes, and we use GPT-2 medium and RoBERTa-base.
Footnote 3: The original T5 checkpoints have been trained on various datasets. We aim to separate the influences of multi-task supervised pretraining in our pairwise transfer analysis.
Adaptation Settings.We investigate pairwise task transfer in different adaptation settings. Hence we experiment with three widely-adopted adaptation methods--full fine-tuning, Adapters (Houlsby et al., 2019) and BitFit (Zaken et al., 2022)--while fixing T5-base as the base model.
Metrics for Task Transferability.We follow Vu et al. (2020) and use the average percentage change in the evaluation metric, labeled as PC, to measure task transfer. Also, we introduce a new metric which accounts for the proportion of models that result in positive transfer across all random seeds, labeled as PM. For a given source \(s\) and target \(t\), evaluation function \(p\), model \(m_{t}\) tuned on \(t\) and a model \(m_{s\to t}\) tuned from \(s\) to \(t\),
\[\text{PC}(s,t)=\frac{1}{|M|}\sum_{m\in M}\frac{p(m_{s\to t})-p(m_{t})}{p(m_{t})}\] \[\text{PM}(s,t)=\frac{1}{|M|}\sum_{m\in M}\mathds{1}\left(p(m_{s\to t})>p(m_{t})\right)\] where \(M\) is the set of models trained across the random seeds.
This is to ensure that the evaluation results accurately reflect the magnitude and consistency of transfers across all random seeds. We then perform linear interpolation to combine the two metrics and use the resulting score for all subsequent analyses.
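Read concretely, PC is the mean relative improvement over the random seeds and PM is the fraction of seeds that show any improvement. The sketch below computes both and a linearly interpolated combination; the interpolation weight (and any rescaling applied before combining) is an assumption, since only the interpolation itself is stated here.

```python
import numpy as np

def transfer_score(p_target_only, p_transferred, alpha=0.5):
    """PC: mean relative change of the evaluation metric over models/seeds.
    PM: fraction of models for which the transferred model beats the baseline.
    Returns a linear interpolation of the two; alpha = 0.5 is a placeholder."""
    base = np.asarray(p_target_only, dtype=float)
    xfer = np.asarray(p_transferred, dtype=float)
    pc = np.mean((xfer - base) / base)
    pm = np.mean(xfer > base)
    return alpha * pc + (1.0 - alpha) * pm

# Hypothetical accuracies for one (source, target) pair over eight seeds:
p_t = [71.2, 70.5, 72.0, 69.8, 71.5, 70.9, 71.1, 70.2]
p_s_to_t = [73.0, 71.8, 72.5, 71.0, 72.9, 71.4, 72.2, 70.0]
print(transfer_score(p_t, p_s_to_t))
```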
### Observations from TaskWeb
Results.The results of our pairwise task transfer experiments are summarized in Figure 3, where the figure on the left visualizes the strength of all pairwise transfers, and the figure on the right provides examples of actual transfer scores between tasks. All pairwise transfer scores are averaged over seven training configurations. Refer to Figures 7 to 14 in the appendix for the full results.
As one can observe, positive transfers (blue) are found between intuitively similar tasks such as CosmosQA to SocialIQA (+0.15), both of which are multiple-choice questions for commonsense reasoning. Meanwhile, negative transfers (red) are observed for tasks that seem to require unrelated skills, such as from QQP to CosmosQA (-0.12).
Figure 3: Visualization of TaskWeb, our collection of pairwise transfer between 22 different NLP tasks, averaged over seven training setups. We depict positive transfers in blue and negative transfers in red. For the leftmost figure, we depict all pairwise transfers as arrows pointing from the source to the target, with its width denoting the strength of the transfer. For the rightmost figure, we display actual scores between a subset of source tasks (six more helpful/six less helpful) and target tasks. The full set of scores is given in Figure 7 in the appendix.
Surprisingly, positive transfers also exist between tasks that do not seem similar, such as a positive transfer from SocialIQA to RTE (+0.10). Out of all 441 pairwise transfers, 246 (55.8%) are positive and 136 (30.8%) are negative.
Effects of Training Setup.We investigate how the choice of models and adaptation methods affects pairwise task transfer. To this end, we build matrices of pairwise transfer scores from source to target tasks for each training setup and compute similarities between every pair of training setups. A subset of such a matrix is shown on the right of Figure 3, and the full sets are shown in Figures 7 to 14 in the appendix.
Figure 4 visualizes the results of our analysis. We observe our pairwise transfers return similar scores when 1) the same adaptation method is applied to models of the same class but with different sizes, or 2) different adaptation methods are applied to the same base model. For example, T5-base fine-tune exhibits more similar transfer behavior with T5-small/large finetune or T5-base adapter/bitfit than with GPT-2 or RoBERTa finetune.
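One simple way to quantify how alike two training setups are, used here purely as an illustration since the text does not pin down the exact measure, is to correlate their flattened source-to-target score matrices:

```python
import numpy as np

def setup_similarity(scores_a, scores_b):
    """Pearson correlation between two pairwise-transfer score matrices
    (rows: source tasks, columns: target tasks), excluding the diagonal.
    Correlation as the similarity measure is an assumption."""
    mask = ~np.eye(scores_a.shape[0], dtype=bool)
    return np.corrcoef(scores_a[mask], scores_b[mask])[0, 1]

# Hypothetical score matrices for two setups (e.g. t5b-ft vs t5l-ft):
rng = np.random.default_rng(1)
t5b_ft = rng.normal(0.0, 0.05, size=(22, 22))
t5l_ft = t5b_ft + rng.normal(0.0, 0.02, size=(22, 22))
print(setup_similarity(t5b_ft, t5l_ft))
```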
### Analysis of Mathematical Properties
Computing pairwise transfer scores can become costly as more tasks are added. Would it be possible to predict transferability beforehand using existing scores? We formulate pairwise task transfer as a mathematical relationship and investigate two properties--_commutativity_ and _transitivity_.
We define _commutativity_ in our setup as whether \(A\to B\) being a positive/negative transfer implies that \(B\to A\) is also a positive/negative transfer. If \(A\to B\) is known, the commutativity would help us predict \(B\to A\) before performing the transfer.
Meanwhile, we define _transitivity_ in our setup as whether knowing the transfer scores of \(A\to B\) and \(B\to C\) allows us to infer about \(A\to C\). This property would also provide us more flexibility to predict pairwise transfer in advance.
Commutativity often does not hold.Based on the pairwise transfer scores shown in Figure 3, we compute the proportion of transfer pairs that exhibit commutativity. Of the 210 unique transfer pairs available in our setup, we find that 97 demonstrate commutativity and 113 do not. The results are fully visualized in Figure 15 in the appendix. We uniquely observe from our experiments that pairwise transfer does not display strong signs of commutativity. One explanation for this result is that while knowledge acquired from task A may be helpful for task \(B\), the reverse may not be true.
Transitivity holds for positive transfers.We perform a small experiment where we predict transfer \(A\to C\) to be positive if and only if both \(A\to B\) and \(B\to C\) score above a threshold. Here, we call A the source task, C the target task, and B the intermediate or "pivot" task.
Results are shown in Figure 5. We observe that as a stricter criterion is imposed on the pair of source \(\to\) pivot and pivot \(\to\) target transfers, the likelihood of observing a positive transfer steadily increases across all training setups. For example, the probability of observing a positive source \(\to\) target transfer increases from 88% to 97% when
Figure 4: Similarities between pairwise transfer results in our experiment of 22 tasks obtained for seven different training setups. The abbreviations stand for t5s/b/l: T5-small/base/large, ft: finetuning, ad: adapter-tuning, bf: BitFit, gpt2: GPT-2 medium, rob: RoBERTa-base.
Figure 5: Probability of identifying positive source \(\to\) target transfers as the minimum threshold for intermediate transfers (source \(\to\) pivot, pivot \(\to\) target) is increased. The results with all seven setups can be found in Figure 16 in the appendix.
the intermediate transfer score thresholds increase from 0.01 to 0.04. From these results, we observe a transitive behavior between positive transfers.
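The transitivity check itself only needs a table of pairwise scores: for each threshold, gather all ordered (source, pivot, target) triples whose two intermediate transfers clear it, and measure how often the direct transfer is positive. The dictionary `scores` below is a hypothetical stand-in for the TaskWeb scores of one training setup.

```python
from itertools import permutations

def positive_transfer_rate(scores, threshold):
    """P(score(source -> target) > 0) over ordered triples (source, pivot, target)
    whose intermediate transfers source -> pivot and pivot -> target both exceed
    `threshold`. `scores` maps ordered task pairs (a, b) to transfer scores."""
    tasks = sorted({task for pair in scores for task in pair})
    hits = total = 0
    for s, p, t in permutations(tasks, 3):
        if (s, p) in scores and (p, t) in scores and (s, t) in scores:
            if scores[(s, p)] > threshold and scores[(p, t)] > threshold:
                total += 1
                hits += scores[(s, t)] > 0
    return hits / total if total else float("nan")

# Sweep the threshold as in Figure 5 (with a real `scores` table):
# for th in (0.01, 0.02, 0.03, 0.04):
#     print(th, positive_transfer_rate(scores, th))
```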
## 4 Task Selection for Unseen Target Tasks
For a new target task, pairwise transfer scores from known tasks are not available. We introduce TaskShop to estimate transfer from a source task in TaskWeb to an unseen target task with only a small number of examples (Figure 2). Then, we perform task selection in two settings: a single-task setup where we seek to identify a helpful source task, and a multi-task setup where we seek to locate a set of helpful source tasks for a target task.
### TaskShop: Selecting Helpful Tasks
The objective of task selection in a single-task setup is to predict the benefit of initializing a model on a source task \(s\) for learning a target task \(t\). Here, we introduce a new method TaskShop which uses pairwise transfer scores in TaskWeb to assign scores from source tasks in TaskWeb to a target task that has no pairwise transfer scores. We use the scores to select helpful sources for the target.
Setup.Given an observed source task \(s\in S\) and an unobserved target task \(t\), our objective is to predict the transferability of \(s\) to \(t\). We assume that we have access to pairwise transfer scores between \(s\) and other source tasks \(S\backslash\{s\}\). Meanwhile, we assume that there is a small number of \(n\) examples (\(n\leq 32\)) but no pairwise transfer scores for \(t\).
Overview.On a high level, our method searches over paths from \(s\) to \(t\) via a set of pivot tasks in TaskWeb where each pivot \(p\) forms a path \(s\to p\to t\), and averages the scores assigned to those paths to estimate \(s\to t\). It takes inspiration from our observation that the strengths of transfers \(s\to p\) and \(p\to t\) allow us to reason about the strength of transfer \(s\to t\). We name our method TaskShop.
Method.The process for computing the transfer from \(s\) to \(t\) is as follows (summarized in Algorithm 1). Assume a task \(p\in S\backslash\{s\}\) in the observed tasks whose transferability from \(s\) is already known. We first use an off-the-shelf task selection method \(F\) to predict the transfer \(F(p\to t)\). Note that \(F\) can be any task selection method that requires a small number of examples from each task and not the full training set or model. Then, we find the pairwise transfer score \(T(s\to p)\) from TaskWeb and combine it with \(F(p\to t)\) to score the path \(s\to p\to t\). Finally, we average the path scores over all pivot tasks \(p\) to estimate the transfer from \(s\) to \(t\).
TaskShop is directional.One interesting feature of our method is its _directionality_ due to the usage of pairwise transfer scores. This implies that our transfer predictions for \(A\to B\) differ from those for \(B\to A\). Our method deviates from conventional techniques that represent tasks as embeddings and retrieve tasks using cosine similarities, which results in predictions that are agnostic to the transfer directions. Hence our method is more aligned with the observation from Section 3.3 that pairwise transfer is often non-commutative.
TaskShop is modular.Another feature of our method is its _modularity_ since any task selection method that only requires a small number of target examples can be used for \(F\). Our method allows us to use both the information captured by \(F\) and the pairwise transfer scores \(T\). We focus on recent methods that only use a small portion of the target task's _dataset_, thereby excluding methods that require access to the fine-tuned model or the full training set. This is to ensure that our method is able to address settings involving new tasks that are associated with a small number of examples. In this work, we use Retrieval-of-Experts (RoE) from Jang et al. (2023) and the LLM similarity method from Paranjape et al. (2023) for \(F\).
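Although Algorithm 1 is not reproduced here, the path-averaging idea behind TaskShop can be sketched as follows. The per-path combination of the two scores (a plain average below) and the helper names are assumptions; `taskweb` stands for the TaskWeb lookup table and `F` for the off-the-shelf estimator (e.g. RoE or LLM-similarity) applied to the few available target examples.

```python
def taskshop_score(source, target_examples, pivots, taskweb, F):
    """Estimate the transfer source -> target for an unseen target by averaging
    scores of paths source -> pivot -> target over all pivot tasks.

    taskweb: dict mapping (source, pivot) to a known pairwise transfer score T.
    F: callable F(pivot, target_examples) estimating the pivot -> target transfer
       from a handful of target examples (e.g. Retrieval-of-Experts).
    Averaging T and F within each path is an assumed combination rule."""
    path_scores = [0.5 * (taskweb[(source, p)] + F(p, target_examples))
                   for p in pivots if p != source]
    return sum(path_scores) / len(path_scores)

def select_top_k(sources, target_examples, taskweb, F, k=5):
    """Rank every seen source task for the unseen target and keep the top k."""
    scored = {s: taskshop_score(s, target_examples, sources, taskweb, F)
              for s in sources}
    return sorted(scored, key=scored.get, reverse=True)[:k]
```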
### Extension to Multi-Task Selection
While choosing a single, appropriate source task is beneficial for learning a target task (Vu et al., 2020,
2022), it has also been observed that using multiple source tasks provides additional benefits for the target Asai et al. (2022). Hence we extend task selection from a single-task to a multi-task setup, where we select the top-\(k\) helpful source tasks for the target according to the task selection scores.
Given a target task \(t\) and a task selection method, we first select the top-\(k\) highest scoring source tasks \(s_{1},...,s_{k}\) for \(t\). We then randomly sample \(n\) prompted examples from each task, resulting in a small training set of \(kn\) examples total. We perform multi-task learning with a pre-trained model on this training set and evaluate the zero-shot capability of this model on the target task. Here, the task selection method can be replaced by TaskShop or other existing task selection methods.
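Once the top-\(k\) tasks are chosen, assembling the training set is plain sampling; in the sketch below, `datasets` is a hypothetical mapping from each selected source task to its pool of prompted examples.

```python
import random

def build_multitask_set(selected_tasks, datasets, n_per_task, seed=0):
    """Pool n randomly sampled prompted examples from each selected source task
    into a single multi-task training set of k * n examples."""
    rng = random.Random(seed)
    train_set = []
    for task in selected_tasks:
        train_set.extend(rng.sample(datasets[task], n_per_task))
    rng.shuffle(train_set)
    return train_set
```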
Table 2 shows examples of top-\(k\) tasks selected by our method with \(k\)=5 and \(F\)=RoE Jang et al. (2023), treating each target as unseen and the source tasks as seen. Our method tends to select source tasks that are overall helpful such as CosmosQA and SocialIQA. At the same time, it also selects tasks within the same category or domain, such as SNLI for ANLI and PIQA for HellaSwag. We combine samples from these source tasks together to form a multi-task training set for each target.
## 5 Experiments and Results
### Single-Task Selection
Comparisons.We compare to **Retrieval-of-Experts (RoE)** from Jang et al. (2023) and **LLM-similarity** described in Paranjape et al. (2023). For Retrieval-of-Experts, we use a similar implementation by taking 100 examples of the source task and 32 examples of the target task and computing the similarity between text embeddings of the prompts. We use PromptSource Bach et al. (2022) to extract prompts and Sentence Transformers Reimers and Gurevych (2019) to obtain text embeddings. For the LLM-similarity method, we write a prompt that contains several pairs of tasks not used in our experiments, where each pair has 1) an example of each task, and 2) an answer noting whether the two tasks are similar or not. Then, for each source-target pair, we pass the prompt concatenated with source and target examples to text-davinci-003 Ouyang et al. (2022). We use the ratio of the log probabilities of the answers "yes" and "no" to assign a score between the source and target tasks.
For TaskShop, we incorporate RoE and LLM-similarity into \(F\) as noted in Algorithm 1.
Metrics.To evaluate the performance of the task selection methods, we use two metrics: the normalized discounted cumulative gain (NDCG, higher is better) and Regret@\(k\) (lower is better), following Poth et al. (2021). We use NDCG to evaluate the overall ranking, while we use Regret@\(k\) to measure the performance drop of the predicted top-\(k\) source tasks from the actual top-\(k\) source tasks. For the evaluation set, we use all target tasks in our setup, grouped by the task category described in Section 3.1. We use TaskWeb scores to evaluate scores assigned to the source tasks.
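Both metrics can be computed directly from the predicted scores and the reference TaskWeb transfer scores of the candidate source tasks. The sketch below follows the standard definitions; the exact gain and normalization choices of Poth et al. (2021) may differ in detail.

```python
import numpy as np

def ndcg(pred_scores, true_scores):
    """NDCG of the source ranking induced by pred_scores, with the reference
    transfer scores used as relevance (higher is better)."""
    pred = np.asarray(pred_scores, dtype=float)
    true = np.asarray(true_scores, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(true) + 2))
    dcg = np.sum(true[np.argsort(-pred)] * discounts)
    idcg = np.sum(np.sort(true)[::-1] * discounts)
    return dcg / idcg

def regret_at_k(pred_scores, true_scores, k=5):
    """Relative drop of the best reference score within the predicted top-k
    from the best reference score overall (lower is better)."""
    pred = np.asarray(pred_scores, dtype=float)
    true = np.asarray(true_scores, dtype=float)
    top_k = np.argsort(-pred)[:k]
    return (true.max() - true[top_k].max()) / true.max()
```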
Experimental Setup.While we use target tasks from TaskWeb to use their transfer scores as labels, we wish to simulate a scenario in which there are only 32 examples for each target. Therefore we perform our experiments in a leave-one-out setup, where for each experiment we assume access to pairwise scores amongst our set of tasks except for the given target task. In this way, we maintain the assumption that only a small number of examples of the target task are available during evaluation. Finally, we evaluate the scores assigned to the source tasks with pairwise transfer scores in TaskWeb.
Results.Table 3 reports the performances of the task selection methods. Combining pairwise transfer scores with both LLM and RoE improves both NDCG and Regret@5 compared to their base methods, with the best gains from RoE. We hypothesize that the improvement occurs because the pairwise transfer scores capture the transferability between each source task and the set of tasks textually similar to the target task. Due to transitive behavior between positive task transfers, these transfer scores would provide additional information about the transferability from the helpful source tasks to the target. Moreover, our method is sensitive to the direction of the pairwise transfer unlike the other methods, thereby better accounting for the non-commutative property as observed in Section 3.3.
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Target** & **Selected Tasks** \\ \hline ANLI & SNLI, CB, Cos.QA, Hellasw.,Soc.IQA \\ COPA & Cos.QA, Sc.IQA, Winogr., Hellasw., PIQA \\ Hellasw. & PIQA, Cos.QA, Sc.IQA, COPA, Winogr. \\ RTE & ANLI, QNLI, SQuAD2, MRPC, CB \\ StoryC. & Cos.QA, COPA, Hellasw., Sc.IQA, Winogr. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of top-5 tasks (left to right) selected for a subset of target tasks using TaskShop.
### Multi-Task Selection
Having demonstrated the efficacy of TaskShop for selecting individual source tasks, we now investigate whether selecting multiple source tasks to build a multi-task training set (Section 4.2) helps to improve target task performance.
**Comparisons.** We use the following baselines. **T0-3B** has the same architecture and size as T5-3B but is trained on millions of examples spanning 35 different tasks Sanh et al. (2022). **T5-3B + all tasks** consists of LM-adapted T5-3B Lester et al. (2021) trained with samples from all 22 tasks from TaskWeb except each target task in a leave-one-out setup, representing approaches which indiscriminately train models on large numbers of tasks.
We then train T5-3B models on small training sets with samples from the five highest-scoring source tasks based on the following multi-task selection methods: **Retrieval-of-Experts** from Jang et al. (2023), **LLM-similarity** from Paranjape et al. (2023), and **TaskShop\({}_{\text{RoE}}\)**, which uses \(F=\text{RoE}\) in Algorithm 1.
Finally, we consider the case where TaskWeb scores for the target task are available. **TaskWeb** selects the five highest scoring source tasks for each target in terms of pairwise transfer. We similarly train T5-3B on samples from these tasks.
**Training setup.** Given a target task \(t\) and a task selection method, we first select the five highest-scoring source tasks \(s_{1},...,s_{5}\) for \(t\). We then randomly sample 2,000 prompted examples from each task, resulting in a small training set of 10,000 examples total. For our T5 baseline, we select 21 tasks except the target and similarly sample 2,000 examples from each task. We finetune our models on this multi-task training set for five epochs.
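The sampling step can be summarized as follows; task and prompt loading are abstracted away, and the sampling sizes simply mirror the setup described above.

```python
import random

def build_training_set(selected_tasks, prompted_examples, per_task=2000, seed=0):
    """Build a small multi-task training set from the top-scoring source tasks.

    prompted_examples[task] is a list of already-prompted (input, target) pairs,
    e.g. produced with PromptSource templates (loading is omitted in this sketch).
    """
    rng = random.Random(seed)
    train = []
    for task in selected_tasks:             # e.g. the five selected source tasks
        pool = prompted_examples[task]
        n = min(per_task, len(pool))
        train.extend(rng.sample(pool, n))   # 2,000 examples per source task
    rng.shuffle(train)                      # ~10,000 examples in total
    return train
```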
Noting that it is costly to compute pairwise transfer scores as language models get larger, we use scores from T5-large whenever pairwise transfer scores are needed. This is based on our previous observation that models with similar architectures and adaptation methods share more similar transferabilities (Section 3.2). We hypothesize that T5-large is big enough to learn the complexities of the source tasks in our setup and to represent their transferabilities to the target tasks; this is supported by the fact that both our T5-large transfers and the T5-3B expert models in Jang et al. (2023) find CosmosQA and SocialIQA to be greatly beneficial source tasks.
**Evaluation setup.** We use the same set of evaluation tasks used by Jang et al. (2023). For ANLI-R1/R2, which are not included in TaskWeb, we apply the subset of tasks chosen for ANLI-R3 for multi-task training. Meanwhile, for the Story Cloze task, which is not included in TaskWeb due to its lack of a training set, we use a subset of five tasks with the best transfer scores for our upper baseline. For each target task, we perform the evaluation in a leave-one-out setup by removing the target task from TaskWeb along with its scores. This is to maximize the number of available source tasks while ensuring that the target task is unseen in our setup. By doing so, we simulate using TaskShop and TaskWeb across various categories of target tasks with access only to their examples (\(n\leq 32\)). We perform all evaluations in a zero-shot setting.
\begin{table}
\begin{tabular}{l|l|c c c c c c|c} \hline \hline \multicolumn{2}{c}{**Method**} & \multicolumn{1}{c}{NLI/Entailment} & \multicolumn{1}{c}{Paraphrase} & \multicolumn{1}{c}{Commonsense} & Sentiment & QA & Semantics & **Mean** \\ \hline \multirow{8}{*}{T5-3B} & LLM similarity & 54.75 & 47.01 & 66.85 & 65.71 & 41.96 & 56.07 & 58.24 \\ & Retrieval-of-Experts & 66.53 & 49.19 & 65.79 & 78.21 & 84.46 & 54.33 & 64.61 \\ & Ours: TaskShop\({}_{\text{LLM}}\) & 54.11 & **52.9** & 70.24 & 71.38 & 51.12 & **56.48** & 61.04 \\ & Ours: TaskShop\({}_{\text{roE}}\) & **75.14** & 49.29 & **80.21** & **80.53** & **85.74** & 54.22 & **72.16** (\(\uparrow\)) \\ \hline \multirow{8}{*}{T5-3B} & LLM similarity & 3.31 & 1.84 & 6.3 & 0.56 & 3.79 & **0.78** & 3.61 \\ & Retrieval-of-Experts & 4.79 & 1.38 & 6.19 & 0.14 & 4.26 & 1.84 & 4.02 \\ \cline{1-1} & Ours: TaskShop\({}_{\text{LLM}}\) & **3.31** & **0.85** & 4.15 & 0.22 & 3.79 & 0.86 & 2.73 \\ \cline{1-1} & Ours: TaskShop\({}_{\text{roE}}\) & 3.51 & 1.35 & **3.29** & **0.04** & **2.22** & 1.67 & **2.56** (\(\downarrow\)) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of task selection experiments. We use pairwise transfer scores from TaskWeb as a benchmark to evaluate TaskShop, along with two existing task selection methods that only use target task examples: LLM similarity Paranjape et al. (2023) and RoE Jang et al. (2023). Note that TaskShop\({}_{\text{LLM}}\) refers to our method with \(F=\text{LLM}\) and TaskShop\({}_{\text{RoE}}\) refers to our method with \(F=\text{RoE}\). Using pairwise transfer scores with RoE results in the best performance both in terms of the overall ranking (NDCG) and the precision of the top-chosen tasks (Regret@5). Note that a higher score is better for NDCG and a lower score is better for Regret@5.
**Results.** Table 4 summarizes the results of our experiments. The middle section details the performances of task selection methods that assume no access to pairwise transfer scores to the target. Two out of three methods choose source tasks that improve target task performance even compared to the stronger baseline. Most notably, TaskShop outperforms both baselines as well as other task selection methods in terms of overall zero-shot evaluation, improving the score by 14.7% from T0-3B and by 4.3% from our strongest baseline while using a small portion of the training set.
Finally, we observe that using the top-5 source tasks for each target according to our pairwise transfer scores consistently improves zero-shot performance on the targets. Our results support previous observations that it can be useful to build smaller multi-task training sets with a more careful task selection strategy Pruksachatkun et al. (2020); Chan et al. (2022). According to our results, the training set can be built with a small number of helpful source tasks for each target task.
### Discussion
The results of our experiments indicate that scores associated with single-task transfer are useful for multi-task transfers. We perform further experiments to support this hypothesis and address three questions regarding our experiments.
**How many source tasks do we need?** We investigate how adding different numbers of source tasks to the multi-task training set affects the target task performance. To this end, we repeat the aforementioned procedure for building the training set with different numbers of tasks, resulting in training sets with 1, 3, 10 and 21 source tasks in addition to our default setup of five tasks.
Table 5 shows the results of our experiment. We first observe that different target tasks achieve their best performance with different numbers of source tasks taken from TaskShop, with most performance improvements occurring between 3 and 5 source tasks. Our setup of using five source tasks results in the overall highest performance and ranks first or second across most setups. Surprisingly, using ten or more source tasks results in overall worse performance than using five source tasks: most targets score better with five source tasks than with ten. The performance drops considerably when 21 tasks (all tasks except the target) are used in the training set.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c|c} \hline \hline
**Method** & ANLI-R1 & ANLI-R2 & ANLI-R3 & CB & COPA & Hellasw. & RTE & StoryC. & WiC & Winogr. & WSC & **Mean** \\ \hline T0-3B & 35.62 & 33.36 & 33.10 & 62.20 & 75.50 & 27.30 & 61.87 & 85.13 & 50.88 & 50.65 & **66.02** & 52.88 \\ T5-3B + all tasks & 41.49 & 35.32 & 39.61 & 79.96 & 82.08 & 39.73 & 74.95 & 91.93 & 52.93 & **57.35** & 44.44 & 58.16 \\ Retrieval-of-Experts\({}^{*}\) & 38.38 & 35.44 & 41.24 & 75.2 & 83.17 & 41.86 & 65.08 & 94.04 & **53.22** & 50.09 & 44.76 & 56.59 \\ LLM-similarity\({}^{\diamond}\) & 39.91 & 34.74 & 38.84 & 81.65 & 80.91 & 40.85 & **78.2** & 93.96 & 51.35 & 52.26 & 55.02 & 58.88 \\ Ours: \(\texttt{TaskShop}_{\texttt{ROE}}\) & **42.86** & **36.15** & **41.41** & **84.52** & **86.08** & **41.94** & 76.73 & **94.04** & 51.49 & 53.0 & 59.4 & **60.69** \\ \hline Ours: TaskWeb\({}^{\dagger}\) & 40.16 & 36.15 & 42.15 & 82.24 & 85.25 & 43.73 & 77.71 & 92.69 & 50.75 & 55.84 & 62.82 & 60.86 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of multi-task learning experiments. All evaluations are done in zero-shot settings, with results averaged over multiple prompts. The first group corresponds to our baselines, the second group corresponds to two existing task selection methods, as well as TaskShop without access to TaskWeb for the target task (but access to TaskWeb between other tasks), and the third group uses TaskWeb scores for the target to select source tasks. \(\star\) is from Jang et al. (2023) and \(\diamond\) is from Paranjape et al. (2023). \(\dagger\) has access to TaskWeb scores directly to the target task. All methods below the dotted line use the top-5 scoring source tasks to build multi-task training sets.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c|c} \hline \hline
**Method** & ANLI-R1 & ANLI-R2 & ANLI-R3 & CB & COPA & Hellasw. & RTE & StoryC & WiC & Winogr. & WSC & **Mean** \\ \hline Top-1 & 40.83 & 34.53 & 38.08 & 75.0 & 80.08 & 28.56 & 70.49 & 89.68 & 50.74 & 52.6 & 36.54 & 54.28 \\ Top-3 & 41.78 & **36.54** & 40.86 & 79.46 & **86.16** & **45.54** & 70.54 & 89.66 & 51.32 & 52.61 & 54.81 & 59.03 \\ Top-5 & **42.86** & 36.15 & **41.41** & **84.52** & 86.08 & 41.94 & 76.73 & **94.04** & 51.49 & 53.0 & **59.4** & **60.69** \\ Top-10 & 40.58 & 35.17 & 38.88 & 75.6 & 84.92 & 42.24 & **78.65** & 93.99 & 51.41 & 52.54 & 58.97 & 59.36 \\ Top-21 & 41.49 & 35.32 & 39.61 & 79.96 & 82.08 & 39.73 & 74.95 & 91.93 & **52.93** & **57.35** & 44.44 & 58.16 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of choosing different numbers of source tasks for multi-task learning with TaskShop ROE. For each target task, the highest scoring setup is **bolded**. Results for top-5 are taken from TaskShop ROE in Table 4.
According to our results, most targets only require a careful selection of three to five source tasks to achieve high performance, with the exception of several tasks such as Winogrande that demonstrate better performance as more tasks are indiscriminately added. Our findings differ from previous work which finds that performance scales with the number of tasks Sanh et al. (2022); Wei et al. (2022); Wang et al. (2022), because while their setups add tasks in a target-agnostic manner, our setup adds helpful tasks based on the specific target.
**Do our methods identify both helpful and unhelpful source tasks?** While we have shown that our methods identify a set of helpful tasks, we also seek to demonstrate that they are able to identify _unhelpful_ tasks in multi-task settings. In other words, we seek to show that our methods return well-calibrated scores for selecting source tasks in the multi-task setting. To this end, we pick the bottom-5 source tasks for each target with TaskShop, and with actual pairwise transfer scores from TaskWeb. We additionally choose five random source tasks for each target and repeat the same process for multi-task learning.
Table 6 summarizes the results. While choosing a random set of source tasks underperforms the T0-3B baseline, choosing the bottom-5 tasks based on TaskShop results in a further decrease of 3.4 accuracy points on average. Finally, choosing the bottom-5 tasks based on TaskWeb results in similarly low target performances. These results demonstrate that our methods help us build both helpful and unhelpful multi-task training sets. Moreover, they indicate that negative pairwise transfers between source and target impact multi-task learning.
**What happens if we mix helpful and unhelpful source tasks?** While grouping helpful sources together increases target performance and grouping unhelpful sources together decreases target performance, it is unclear what happens in between. To address this question, we mix helpful and unhelpful source tasks in different proportions in the multi-task training set and observe the change in target task performance. We repeat this process over four target tasks including ANLI (R3), COPA, HellaSwag and RTE in our evaluation setup. For each task, we start with the top-5 tasks according to the pairwise transfer scores in TaskWeb and replace one task with one of the bottom-5 tasks until all top-5 tasks have been replaced. We perform the same evaluations as Tables 4, 5 and 6.
Figure 6 visualizes the results. As each helpful source task is replaced with an unhelpful source task, the target performance consistently decreases across all four tasks, while some drops are more significant than others. However, there are several instances where replacing a helpful task with an unhelpful task _increases_ performance, as can be seen from 0\(\rightarrow\)1 in HellaSwag and 4\(\rightarrow\)5 in ANLI. These results indicate that while pairwise transferability between the source and target plays heavily impacts the target performance during multi-task learning, other factors such as negative interference between the source tasks may also be involved, which is an interesting direction for future work.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c|c} \hline \hline
**Method** & ANLI-R1 & ANLI-R2 & ANLI-R3 & CB & COPA & Hellasw. & RTE & StoryC & WiC & Winogr. & WSC & **Mean** \\ \hline Random & 34.35 & 35.29 & 36.12 & 65.67 & 72.58 & 29.64 & 73.69 & 55.84 & 49.53 & 51.03 & 51.92 & 50.51 \\ Bottom-5 w/ TaskShop & 33.39 & 34.21 & 35.63 & 67.76 & 55.92 & 25.01 & 62.57 & 59.42 & 50.33 & 50.45 & 43.69 & 47.13 \\ Bottom-5 w/ TaskWeb & 34.33 & 33.56 & 36.28 & 47.02 & 52.92 & 25.37 & 67.2 & 57.3 & 50.05 & 50.1 & 54.59 & 46.25 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of choosing random and worst sets of tasks according to TaskShop and TaskWeb. Refer to the third row in Table 5 for target task performances with the top-5 source tasks selected by TaskShop.
Figure 6: Variations in the zero-shot target performance as the top-5 source tasks for each target are incrementally replaced by the bottom-5 source tasks according to TaskWeb while maintaining the size of the training set.
## 6 Conclusion
In this work, we investigate the efficacy of using prior knowledge of task relationships quantified via pairwise task transfer in selecting helpful source tasks for multi-task NLP. We build TaskWeb, a benchmark of pairwise task transfers across different tasks, models and adaptation methods in NLP. Based on our analysis of TaskWeb, we propose TaskShop, our new method for selecting helpful source tasks for a new target task. We show that TaskShop outperforms existing methods for choosing helpful source tasks across different categories of target tasks. Moreover, we use TaskShop and TaskWeb to build small multi-task training sets and outperform other methods that use much larger training sets.
## Acknowledgements
We thank the UW NLP group members for their helpful discussions. This research was supported by DARPA MCS program through NIWC Pacific (N66001-19-2- 4031), and NSF IIS-2044660. JK is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2140004. AA is funded by the IBM PhD Fellowship.
|
2306.05164 | Energy Efficient Skyrmion based Oscillator on Thermocoupled Nanotrack | The magnetic skyrmion-based spin transfer nano-oscillators (STNO) are the
potential candidates for next-generation microwave signal generator and has
gained popularity due to their performance, integrability and compatibility
with existing CMOS technology. However, these devices suffer from the Joule
heating problem that neglects their non-volatility advantage in spintronic
devices. Therefore, it is necessary to investigate the alternative driving
mechanisms for the development of energy-efficient skyrmion based
nano-oscillators. In this paper, a skyrmion-based nano-oscillator has been
designed that utilizes thermal power to drive skyrmion on a thermocoupled
nanotrack. The thermocoupled nanotrack is designed in such a way that both the
upper and lower nanotracks have different values of damping constants and a
temperature difference is maintained between the extreme ends, in order to
create a temperature gradient in the two nanotracks. By employing this
technique, skyrmion is able to exhibit the periodic motion on the nanotrack
with the maximum achievable frequency of 2.5GHz without any external stimuli.
Moreover, the proposed device offers low thermal energy consumption of
0.84fJ/oscillation. Hence, this work provides the pathway for the development
of energy-efficient future spintronic devices. | Ravish Kumar Raj, Namita Bindal, Brajesh Kumar Kaushik | 2023-06-08T12:55:11Z | http://arxiv.org/abs/2306.05164v1 | # Energy Efficient Skyrmion based Oscillator on Thermocoupled Nanotrack
###### Abstract
Magnetic skyrmion-based spin transfer nano-oscillators (STNOs) are potential candidates for next-generation microwave signal generators and have gained popularity due to their performance, integrability and compatibility with existing CMOS technology. However, these devices suffer from the Joule heating problem, which negates their non-volatility advantage in spintronic devices. Therefore, it is necessary to investigate alternative driving mechanisms for the development of energy-efficient skyrmion-based nano-oscillators. In this paper, a skyrmion-based nano-oscillator has been designed that utilizes thermal power to drive a skyrmion on a thermocoupled nanotrack. The thermocoupled nanotrack is designed in such a way that the upper and lower nanotracks have different values of the damping constant, and a temperature difference is maintained between the extreme ends in order to create a temperature gradient in the two nanotracks. By employing this technique, the skyrmion is able to exhibit periodic motion on the nanotrack with a maximum achievable frequency of 2.5GHz without any external stimuli. Moreover, the proposed device offers a low thermal energy consumption of 0.84fJ/oscillation. Hence, this work provides a pathway for the development of energy-efficient future spintronic devices.
## I. Introduction
A magnetic skyrmion is a topologically stable chiral spin texture that consists of a localized excitation of magnetic moments and exists in magnetic materials as a result of the Dzyaloshinskii-Moriya interaction (DMI), which is associated with broken inversion symmetry and strong spin-orbit coupling [1-4]. Recently, ultra-small (100nm to 1nm) skyrmions with dimensions of a few nanometers have been reported in ultrathin ferromagnetic (FM) and antiferromagnetic (AFM) materials [5]. Later, skyrmions were observed at room temperature (RT) in magnetic multilayers like Ta/CoFeB/TaOx and Pt/CoFeB/MgO. Owing to their low current density requirements and nanoscale size, magnetic skyrmions are attracting much interest both in fundamental science and in applications in novel devices, especially non-volatile magnetic memories [6], logic [7], transistors [8], diodes [9], oscillators [10], and neuromorphic computing [11].
In the recent past, very small spin-based oscillators such as the spin transfer nano-oscillator (STNO) and the spin Hall nano-oscillator (SHNO) have been designed; however, these devices have limited scalability and tunability [12-13]. Hence, in view of these shortcomings, skyrmion-based oscillators that employ vortex-like polarization at the fixed layer of a magnetic tunnel junction (MTJ) were proposed [14-15]. Moreover, these are driven by spin-polarized currents through spin transfer torque (STT) and spin-orbit torque (SOT) mechanisms, which require a large current and subsequently translate into a large energy dissipation (\(\sim\)100fJ). This is at least 3 orders of magnitude larger than the energy dissipation in CMOS-based oscillators (\(\sim\)100aJ) [16]. Therefore, these devices suffer from the problem of Joule heating, which negates the advantage of non-volatility offered by spintronic devices. Hence, there is an urgent need for an alternative driving mechanism for designing skyrmion-based nano-oscillators. The alternatives include the magnetic anisotropy gradient, DMI gradient, strain gradient, as well as the temperature gradient (TG). Out of these mechanisms, for practical applications, the TG is the best choice for driving skyrmions due to its low energy consumption; hence, TG-driven skyrmion motion is the basis of a new research direction named skyrmion caloritronics [17]. In addition, it contributes to the design of waste heat recovery that can be further utilized for nucleation and manipulation of skyrmions. However, the investigation of skyrmion dynamics under a TG is still in its early stage.
In recent studies, it has been found that the dynamics of a skyrmion in the presence of a TG can be attributed to the competition among different torques and fields, including the magnonic torque, thermal torque, entropy difference, and thermally induced dipolar field [18-19]. However, the experimental results have shown contradictory outcomes, with skyrmions moving towards either colder or hotter regions [20]. The magnonic torque, which is a result of the propagation of magnons from hotter to colder regions, exerts a spin torque on the skyrmion, pushing it towards the hotter region to minimize the system's free energy [21]. |
2310.07095 | Interesting Clues to Detect Hidden Tidal Disruption Events in Active
Galactic Nuclei | In the manuscript, effects of Tidal Disruption Events (TDEs) are estimated on
long-term AGN variability, to provide interesting clues to detect probable
hidden TDEs in normal broad line AGN with apparent intrinsic variability which
overwhelm the TDEs expected variability features, after considering the unique
TDEs expected variability patterns. Based on theoretical TDEs expected
variability plus AGN intrinsic variability randomly simulated by Continuous
AutoRegressive process, long-term variability properties with and without TDEs
contributions are well analyzed in AGN. Then, interesting effects of TDEs can
be determined on long-term observed variability of AGN. First, more massive
BHs, especially masses larger than $10^7{\rm M_\odot}$, can lead to more
sensitive and positive dependence of $\tau_{TN}$ on $R_{TN}$, with $\tau_{TN}$
as variability timescale ratio of light curves with TDEs contributions to
intrinsic light curves without TDEs contributions, and $R_{TN}$ as ratio of
peak intensity of TDEs expected variability to the mean intensity of intrinsic
AGN variability without TDEs contributions. Second, stronger TDEs contributions
$R_{TN}$ can lead to $\tau_{TN}$ quite larger than 5. Third, for intrinsic AGN
variability having longer variability timescales, TDEs contributions will lead
$\tau_{TN}$ to be increased more slowly. The results actually provide an
interesting forward-looking method to detect probable hidden TDEs in normal
broad line AGN, due to quite different variability properties, especially
different DRW/CAR process expected variability timescales, in different epochs,
especially in normal broad line AGN with shorter intrinsic variability
timescales and with BH masses larger than $10^7{\rm M_\odot}$. | XueGuang Zhang | 2023-10-11T00:22:21Z | http://arxiv.org/abs/2310.07095v1 | # Interesting Clues to Detect Hidden Tidal Disruption Events in Active Galactic Nuclei
###### Abstract
In the manuscript, effects of Tidal Disruption Events (TDEs) are estimated on long-term AGN variability, to provide interesting clues to detect probable hidden TDEs in normal broad line AGN with apparent intrinsic variability which overwhelm the TDEs expected variability features, after considering the unique TDEs expected variability patterns. Based on theoretical TDEs expected variability plus AGN intrinsic variability randomly simulated by Continuous AutoRegressive process, long-term variability properties with and without TDEs contributions are well analyzed in AGN. Then, interesting effects of TDEs can be determined on long-term observed variability of AGN. First, more massive BHs, especially masses larger than \(10^{7}\mathrm{M}_{\odot}\), can lead to more sensitive and positive dependence of \(\tau_{TN}\) on \(R_{TN}\), with \(\tau_{TN}\) as variability timescale ratio of light curves with TDEs contributions to intrinsic light curves without TDEs contributions, and \(R_{TN}\) as ratio of peak intensity of TDEs expected variability to the mean intensity of intrinsic AGN variability without TDEs contributions. Second, stronger TDEs contributions \(R_{TN}\) can lead to \(\tau_{TN}\) quite larger than 5. Third, for intrinsic AGN variability having longer variability timescales, TDEs contributions will lead \(\tau_{TN}\) to be increased more slowly. The results actually provide an interesting forward-looking method to detect probable hidden TDEs in normal broad line AGN, due to quite different variability properties, especially different DRW/CAR process expected variability timescales, in different epochs, especially in normal broad line AGN with shorter intrinsic variability timescales and with BH masses larger than \(10^{7}\mathrm{M}_{\odot}\).
keywords: active galaxies - active galactic nuclei - transient events - tidal disruption event
## 1 Introduction
Variability is one of the fundamental characteristics of active galactic nuclei (AGN) (Rees, 1984; Wagner & Witzel, 1995; Ulrich, Maraschi & Urry, 1997; Madejski & Sikora, 2016; Dexter & Begelman, 2019; Baldassare, Geha & Greene, 2020; Burke et al., 2021). Although the physical origin of AGN variability on different timescales is uncertain, with different models proposed in Torricelli-Ciamponi et al. (2000); Hawkins (2002); Favre, Courousiier & Paltani (2005); Li & Cao (2008); Pechacek et al. (2013); Sartori et al. (2018); Panagiota et al. (2022), etc., there is a preferred mathematical process to describe long-term AGN variability: the damped random walk (DRW) process, or Continuous AutoRegressive (CAR) process, with two basic process parameters, the intrinsic variability timescale \(\tau\) and amplitude \(\sigma\). Kelly, Bechtold & Siemiginowska (2009) first proposed the CAR process (Brockwell & Davis, 2002) to describe long-term AGN variability. And then, Kozlowski et al. (2010); Zu, Kochanek & Peterson (2011); Bailer-Jones (2012); Kelly et al. (2014); Simm et al. (2016); Takata, Mukuta & Mizumoto (2018); Moreno et al. (2019); Sheng, Ross & Nicholl (2022) have provided improved methods to estimate the process parameters.
There are many reported studies on the AGN variability through the DRW process. MacLeod et al. (2010) have modeled the variability of about 9000 spectroscopically confirmed quasars covered in the SDSS Stripe82 region, and found correlations between the AGN parameters and the DRW process determined parameters. Bailer-Jones (2012) proposed another fully probabilistic method for modeling AGN variability by the DRW process. Andrae, Kim & Bailer-Jones (2013) have shown that the DRW process is preferred to model AGN variability, rather than several other stochastic and deterministic models, by fitted results of long-term variability of 6304 quasars. Zu et al. (2013) have checked that the DRW process provided an adequate description of AGN optical variability across all timescales. Zhang & Feng (2017) have checked long-term variability properties of AGN with double-peaked broad emission lines, and found the difference in intrinsic variability timescales between normal broad line AGN and the AGN with double-peaked broad emission lines. Sanchez-Saez et al. (2018) have modeled variability by DRW process and reported statistical analysis of the connection between AGN variability and physical properties of the central AGN activities, through the 2345 sources detected in both SDSS (Sloan Digital Sky Survey) and QUEST-La Silla. Burke et al. (2020) have modeled the month-long, 30 minute-cadence, high-precision TESS (Transiting Exoplanet Survey Satellite) light curve by the DRW process in the well-known archetypical dwarf AGN NGC 4395. More recently, Suberlak, Ivezic & MacLeod (2021) have modeled 15years-long variability of 9248 quasars covered in SDSS stripe 82 region by combining the Pan-STARRS1 PS1 (Panoramic Survey Telescope and Rapid Response System 1 Survey) and SDSS light curves. Zhang et al. (2021) have modeled long-term variability of a composite galaxy to provide clues to support a true Type-2 AGN.
Therefore, long-term AGN variability has been widely accepted to be mathematically modeled by the DRW/CAR process.
Meanwhile, as discussed in Mushotzky et al. (2011); Kasliwal et al. (2015); Guo et al. (2017); Tachibana et al. (2020); Stone et al. (2022), intrinsic AGN variability deviates from the simple DRW description on short timescales, and the estimated intrinsic variability timescale in the DRW process probably rises with increased baseline. However, in the manuscript, long-term variability not on short timescales but with the same length of time duration is mainly considered; therefore, neither variability on short timescales nor effects of different lengths of baseline are discussed in the manuscript. Besides the long-term intrinsic AGN variability well described by the CAR/DRW process, there is a unique kind of variability related to tidal disruption events (TDEs), which cannot intrinsically follow the CAR process expected variability properties, due to unique TDEs variability patterns. The well-known pioneer work on TDEs can be found in Rees (1988) and then followed in Loeb & Ulmer (1997); Komossa et al. (2004); Lodato, King & Pringle (2009); Cenko et al. (2012); Guillochon & Ramirez-Ruiz (2013); Guillochon, Manukian & Ramirez-Ruiz (2014); Wang et al. (2018); Mockler, Guillochon & Ramirez-Ruiz (2019); Stone et al. (2019); Parkinson et al. (2020); Lynch & Ogilvie (2021); Zhou et al. (2021); Zhang (2022), etc. The basic picture of a TDE is as follows. A star can be tidally disrupted by the gravitational tidal force of a central massive black hole (BH), when it passes close to the central BH at a distance larger than the event horizon of the BH but smaller than the tidal disruption radius \(R_{\rm T}=R_{\star}\times(\frac{M_{\rm BH}}{M_{\star}})^{1/3}\), with \(R_{\star}\), \(M_{\star}\) and \(M_{\rm BH}\) as the radius and mass of the being disrupted star and the mass of the central BH, respectively. The fallback materials can be accreted by the central massive BH, leading to time dependent TDEs variability roughly proportional to \(t^{-5/3}\) at late times.
More recent reviews on theoretical simulations and/or observational results on TDEs can be found in Komossa (2015); Lodato et al. (2015); Stone et al. (2019). There are more than 100 TDE candidates reported in the literature, see details at [https://tde.space/](https://tde.space/). Meanwhile, the well-known public sky survey projects have led to more and more TDE candidates being detected, such as the TDE candidates discovered through the known SDSS Stripe82 database in van Velzen et al. (2011), through the known Catalina Sky Survey (CSS, Drake et al. (2009)) in Drake et al. (2011), through the PanSTARRS (panoramic survey telescope and rapid response system) in Gezari et al. (2012); Chornock et al. (2014), through the PTF (palomar transient factory) in Blagorodnova et al. (2017); van Velzen et al. (2019), through the Optical Gravitational Lensing Experiment (OGLE) in Wyrzykowski et al. (2017); Gromadzki et al. (2019), through the ASAS-SN (all-sky automated survey for supernovae) in Holoien et al. (2014, 2016); Hinkle et al. (2021), through the CNSS (Caltech-NRAO Stripe 82 Survey) in Anderson et al. (2020), and through the ZTF (Zwicky Transient Facility) in van Velzen et al. (2019); Lee et al. (2020); Stein et al. (2021), etc. More recently, two large samples of dozens of new TDE candidates can be found in van Velzen et al. (2021) from the First Half of ZTF (Zwicky Transient Facility) Survey observations along with Swift UV and X-ray follow-up observations, and in Sazonov et al. (2021) from the SRG all-sky survey observations, then confirmed by optical follow-up observations. A more recent review on observational properties of reported TDEs can be found in Gezari (2021). However, among the reported TDE candidates, there are few TDEs detected in normal broad line AGN with both apparent and strong intrinsic AGN variability.
Among the reported TDE candidates, especially optical TDE candidates, strong broad Balmer and Helium emission lines are fundamental spectroscopic characteristics; however, the detected broad emission lines are not expected to be tightly related to normal broad line regions in normal broad line AGN, but to be related to disk-like structures from TDE debris. The known cases with broad emission lines in TDE candidates can be found in SDSS J0159 as discussed in Merloni et al. (2015); Zhang (2021), ASASSN-14li as discussed in Holoien et al. (2016), PTF90djl as discussed in Liu et al. (2017), PS18kh as discussed in Holoien et al. (2019), AT2018hyz as discussed in Short et al. (2020); Hung et al. (2020), etc., indicating that the reported broad emission lines in the TDE candidates are not related to normal BLRs in normal broad line AGN, but are tightly related to TDE debris. Moreover, there are several TDE candidates, such as PS18kh, ASASSN-15lh, ASASSN-14li, etc., whose UV-band spectra have been well checked, and there are no broad Mg II \(\lambda\)2800Å emission lines. And moreover, in the TDE candidates with detected optical broad emission lines, there are no clues of DRW process expected variability, except the TDEs expected variability patterns. In other words, there is no confirmed evidence to support central TDEs in normal broad line AGN with apparent intrinsic AGN variability.
Certainly, unlike in quiescent galaxies, where a moving star can be tidally disrupted by the central supermassive BH without a pre-existing accretion disk, there is an existing accretion disk around the central supermassive BH in AGN; therefore, effects of the existing accretion disk should be considered when the fallback TDE debris is accreted in normal broad line AGN. Kathirgamaraju et al. (2017) have discussed effects of a pre-existing accretion disc on TDEs expected variability, finding that the TDE expected variability patterns are retained but with a probable cut-off. Chan et al. (2019); Chan, Piran & Krolik (2020) have modeled TDEs variability in AGN with a pre-existing accretion disc, and discussed evolutions of the fallback bound debris being modified by collisions with the pre-existing disk, indicating that the expected variability should not be totally similar to the TDEs expected variability patterns. However, so far several TDE candidates have been detected and reported in AGN. Blanchard et al. (2017) have reported a TDE candidate in a narrow line Seyfert 1 galaxy whose light curves can be roughly described by the theoretical TDE model, and discussed that out-of-plane TDEs have quite weak interactions between the TDE debris and the pre-existing disk because the debris only intersects a small region of the disk. Yan & Xie (2018) have shown the TDE expected variability pattern in the low-luminosity AGN NGC 7213. Liu et al. (2020) have reported a TDE candidate in the AGN SDSS J0227 with probable broad Balmer emission lines, and shown a sudden rise followed by a smooth decline trend in the long-term variability of SDSS J0227. Zhang et al. (2022) have shown the TDE expected variability patterns in a narrow line Seyfert 1 galaxy. More recently, Zhang (2022) has shown TDE expected long-term variability in the high redshift quasar SDSS J014124+010306, and Zhang (2022) has discussed and shown TDE expected long-term variability of the broad H\(\alpha\) line luminosity in the low luminosity broad line AGN NGC 1097. Therefore, totally similar TDE variability can be expected in normal AGN with pre-existing accretion disks.
The rarity of TDEs reported in normal AGN is mainly due to intrinsic AGN variability being stronger than the TDE variability. However, there are enough probabilities and feasibilities to expect TDEs in normal AGN with supermassive BHs, even if there are no detected TDE expected variability features, which are probably overwhelmed by strong intrinsic AGN variability in the observed light curves. For intrinsic long-term AGN variability, the expected timescales are simply consistent with accretion disk orbital timescales or thermal timescales of about hundreds of days, as shown
in Kelly, Bechtold & Siemiginowska (2009) for normal AGN (including 55 AGN from the MACHO survey, 37 Palomar Green quasars, and eight Seyfert galaxies from the AGN Watch project), in Kozlowski et al. (2010) for about 2700 OGLE quasars, in MacLeod et al. (2010) for about 9000 quasars covered in the SDSS Stripe82 region, and in Rumbaugh et al. (2018) for extreme variability quasars. Meanwhile, for variability from probable TDEs around supermassive BHs with masses around \(10^{7-8}\rm M_{\odot}\) in AGN, the expected years-long timescales can be compared to the timescales of intrinsic long-term AGN variability. Therefore, it is interesting to check effects of TDEs on long-term AGN variability, which could provide interesting clues to expect probable hidden TDEs in normal broad line AGN with CAR process described intrinsic variability, through the long-term light curves from the public sky survey projects. Section 2 and Section 3 present our main hypotheses and main results. Section 4 gives the discussions and further applications. Section 5 gives our final conclusions. And in the manuscript, the cosmological parameters of \(H_{0}=70\rm km\cdot s^{-1}Mpc^{-1}\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{\rm m}=0.3\) have been adopted.
## 2 Main Hypotheses
### Time Dependent Bolometric Luminosities from TDEs
In the manuscript, the well discussed theoretical TDEs model in Guillochon & Ramirez-Ruiz (2013); Guillochon, Manukian & Ramirez-Ruiz (2014); Mockler, Guillochon & Ramirez-Ruiz (2019) is mainly considered and accepted, combining with the mass-radius relation in Tout (1996) accepted for main-sequence stars. The time dependent bolometric luminosities from TDEs are simulated by the following four steps, similar as what we have done in Zhang (2022) to model X-ray variability of the relativistic TDE candidate Swift J2058.4+0516 and in Zhang (2022b,c) to model optical variability in quasar SDSS J014124+010306 and in low luminosity broad line AGN NGC 1097.
First, standard templates of viscous-delayed accretion rates in TDEs are created. Based on the \(dm/de\) (\(m\) as debris mass and \(e\) the specific binding energy) provided by both the TDEFIT ([https://tde.space/tdefit/](https://tde.space/tdefit/)) code (Guillochon, Manukian & Ramirez-Ruiz, 2014) and the MOSFIT (Modular Open Source Fitter for Transients, [https://mosfit.readthedocs.io](https://mosfit.readthedocs.io)) code (Guillochon et al., 2018), templates of the fallback material rate \(\dot{M}_{fbt}=dm/de\times de/dt\) can be created for standard cases with the central BH mass \(M_{\rm BH}=10^{6}\rm M_{\odot}\) and a being disrupted main-sequence star of \(M_{*}=1\rm M_{\odot}\), and with a grid of the impact parameters \(\beta_{t}\) listed in Guillochon & Ramirez-Ruiz (2013).
\[\dot{M}_{at}\ =\ \frac{\exp(-t/T_{vis})}{T_{vis}}\int_{0}^{t}\exp(t^{\prime}/T_{vis})\,\dot{M}_{fbt}\,dt^{\prime} \tag{1}\]
A grid of 31 evenly distributed values of \(\log(T_{vis,\ t}\)/years) ranging from -3 to 0 is applied to create templates \(\dot{M}_{at}\) for each impact parameter \(\beta_{t}\). Therefore, the created templates \(\dot{M}_{at}\) include 736 (640) time-dependent viscous-delayed accretion rates for 31 different \(T_{vis}\) values for each of the 23 (20) impact parameters \(\beta_{t}\) for a main-sequence star with polytropic index \(\gamma\) of 4/3 (5/3).
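A minimal numerical sketch of Equation (1) is given below, assuming the fallback-rate template is tabulated on a (not necessarily uniform) time grid; the variable names and the simple trapezoid quadrature are ours and only illustrative.

```python
import numpy as np

def viscous_delayed_rate(t, mdot_fb, T_vis):
    """Viscous-delayed accretion rate of Eq. (1), evaluated with a numerically
    stable recursion (equivalent to the convolution, but avoiding exp(t/T_vis) overflow).

    t       : template times (same units as T_vis)
    mdot_fb : fallback rate dM/dt sampled at t
    T_vis   : viscous delay timescale
    """
    mdot_a = np.zeros_like(mdot_fb, dtype=float)
    acc = 0.0  # running value of  integral_0^t exp(-(t - t')/T_vis) * mdot_fb(t') dt'
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        decay = np.exp(-dt / T_vis)
        # history decays by exp(-dt/T_vis); trapezoid step over the new slice
        acc = acc * decay + 0.5 * dt * (mdot_fb[i] + mdot_fb[i - 1] * decay)
        mdot_a[i] = acc / T_vis
    return mdot_a
```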
Second, for TDEs with model parameters \(\beta\) and \(T_{vis}\) different from the listed values in \(\beta_{t}\) and in \(T_{vis,\ t}\), the corresponding viscous-delayed accretion rates \(\dot{M}_{a}\) are created by the following two linear interpolations. Assuming that \(\beta_{1}\), \(\beta_{2}\) in \(\beta_{t}\) are the two values nearest to the input \(\beta\), and that \(T_{vis1}\), \(T_{vis2}\) in \(T_{vis,\ t}\) are the two values nearest to the input \(T_{vis}\), the first linear interpolation is applied to find the viscous-delayed accretion rates with the input \(T_{vis}\) but with \(\beta\ =\ \beta_{1}\) and \(\beta\ =\ \beta_{2}\) by
\[\dot{M}_{a}(T_{vis},\ \beta_{1})\ =\ \dot{M}_{at}(T_{vis1},\ \beta_{1})+\frac{T_{vis}-T_{vis1}}{T_{vis2}-T_{vis1}}(\dot{M}_{at}(T_{vis2},\ \beta_{1})-\dot{M}_{at}(T_{vis1},\ \beta_{1})) \tag{2}\] \[\dot{M}_{a}(T_{vis},\ \beta_{2})\ =\ \dot{M}_{at}(T_{vis1},\ \beta_{2})+\frac{T_{vis}-T_{vis1}}{T_{vis2}-T_{vis1}}(\dot{M}_{at}(T_{vis2},\ \beta_{2})-\dot{M}_{at}(T_{vis1},\ \beta_{2}))\]
The second linear interpolation is applied to find the viscous-delayed accretion rates with the input \(T_{vis}\) and the input \(\beta\) by
\[\dot{M}_{a}(T_{vis},\ \beta)\ =\ \dot{M}_{a}(T_{vis},\ \beta_{1})+ \tag{3}\] \[\frac{\beta-\beta_{1}}{\beta_{2}-\beta_{1}}(\dot{M}_{a}(T_{vis},\ \beta_{2})- \dot{M}_{a}(T_{vis},\ \beta_{1}))\]
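In practice, Equations (2) and (3) amount to bilinear interpolation over the (\(T_{vis}\), \(\beta\)) template grid; a sketch is given below, with the template array layout assumed for illustration.

```python
import numpy as np

def interp_template(T_vis, beta, T_grid, beta_grid, templates):
    """Two successive linear interpolations (Eqs. 2-3).

    templates[i, j, :] is the accretion-rate template for T_grid[i], beta_grid[j]
    (this array layout is an assumption of the sketch).
    """
    i = np.clip(np.searchsorted(T_grid, T_vis) - 1, 0, len(T_grid) - 2)
    j = np.clip(np.searchsorted(beta_grid, beta) - 1, 0, len(beta_grid) - 2)
    wT = (T_vis - T_grid[i]) / (T_grid[i + 1] - T_grid[i])          # Eq. (2)
    at_b1 = templates[i, j] + wT * (templates[i + 1, j] - templates[i, j])
    at_b2 = templates[i, j + 1] + wT * (templates[i + 1, j + 1] - templates[i, j + 1])
    wb = (beta - beta_grid[j]) / (beta_grid[j + 1] - beta_grid[j])  # Eq. (3)
    return at_b1 + wb * (at_b2 - at_b1)
```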
Third, for TDEs with input parameters of \(M_{\rm BH}\) and \(M_{*}\) different from \(10^{6}\rm M_{\odot}\) and \(1\rm M_{\odot}\), the viscous-delayed accretion rates \(\dot{M}\) and the corresponding time information \(t\) in observer frame are created by the following scaling relations as shown in Guillochon, Manukian & Ramirez-Ruiz (2014); Mockler, Guillochon & Ramirez-Ruiz (2019),
\[\dot{M}\ =\ M_{\rm BH,\ 6}^{-0.5}\ \times\ M_{\star}^{2}\ \times\ R_{\star}^{-1.5}\ \times\ \dot{M}_{a}(T_{vis},\ \beta) \tag{4}\] \[t\ =\ (1+z)\times M_{\rm BH,\ 6}^{0.5}\ \times\ M_{\star}^{-1}\times R_{\star}^{1.5}\ \times\ t_{a}(T_{vis},\ \beta)\]
where \(M_{\rm BH,\ 6}\), \(M_{\star}\), \(R_{*}\) and \(z\) represent central BH mass in unit of \(10^{6}\rm M_{\odot}\), stellar mass in unit of \(\rm M_{\odot}\), mass-radius relation determined stellar radius in unit of \(\rm R_{\odot}\), and redshift of host galaxy, respectively.
Fourth, the time dependent bolometric luminosities \(L_{\rm bol,\ t,\ TDE}\) from TDEs can finally be calculated by
\[L_{\rm bol,\ t,\ TDE}\ =\ \eta\ \times\ \dot{M}(t)\ c^{2} \tag{5}\]
where \(c\) and \(\eta\) are the light speed and the energy transfer efficiency around central BH. The value \(\eta\) will be further discussed in the following subsections. Therefore, for a TDE with given model parameters of central BH mass \(M_{BH}\), stellar mass \(M_{\star}\) and polytropic index \(\gamma\) of the central being disrupted main-sequence star, the impact parameter \(\beta\), the viscous timescale \(T_{vis}\), redshift \(z\) and energy transfer efficiency \(\eta\), time dependent \(L_{\rm bol,\ t,\ TDE}\) can be well simulated by the theoretical TDEs model.
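Putting the third and fourth steps together, a hedged sketch of the rescaling and the conversion to bolometric luminosity is shown below; the template units (time in years, accretion rate in \(\rm M_{\odot}\,yr^{-1}\)), the constants, and the function layout are assumptions made for the sketch rather than a definitive implementation.

```python
import numpy as np

M_SUN_G = 1.989e33          # solar mass in grams
C_CM_S  = 2.998e10          # speed of light in cm/s
YEAR_S  = 3.156e7           # one year in seconds

def rescale_and_luminosity(t_a_yr, mdot_a, M_bh_6, M_star, R_star, z, eta):
    """Apply the scaling relations of Eq. (4) and the luminosity of Eq. (5).

    t_a_yr, mdot_a : template time [yr] and accretion rate [M_sun/yr] for the
                     standard case (1e6 M_sun BH, 1 M_sun star)
    M_bh_6         : BH mass in units of 1e6 M_sun
    M_star, R_star : stellar mass [M_sun] and radius [R_sun]
    """
    mdot = M_bh_6**-0.5 * M_star**2 * R_star**-1.5 * mdot_a            # Eq. (4)
    t = (1.0 + z) * M_bh_6**0.5 * M_star**-1 * R_star**1.5 * t_a_yr
    # Eq. (5): bolometric luminosity in erg/s, with mdot converted to g/s
    L_bol = eta * (mdot * M_SUN_G / YEAR_S) * C_CM_S**2
    return t, L_bol
```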
Based on the four steps, TDE expected time dependent bolometric luminosities can be simulated, accepting the only criterion that the tidal radius determined by the TDE model parameters is larger than the event horizon of the central BH.
Before the end of the subsection, two points are noted. First, the circularizations in TDEs as discussed in Kochanek (1994); Bonnerot et al. (2016); Hayasaki, Stone & Loeb (2016); Zanazzi & Ogilvie (2020); Lynch & Ogilvie (2021) are not considered in the manuscript. Circularization emissions in TDEs have probably been detected in the TDE candidate ASASSN-15lh in Leloudas et al. (2016) and in the TDE candidate AT 2019wd in Chen, Dou & Shen (2022), due to the two clear peaks (or two clear phases) detected in the NUV and/or optical band light curves. However, among the more than 100 reported TDE candidates, there are few candidates whose optical light curves show re-brightened peaks, indicating that the fraction of TDEs with clear circularization emissions is very low. Therefore, we mainly consider the simple case
that the fallback timescales of the circularizations are significantly smaller than the viscous timescales of the accretion processes, and the fallback materials will circularize into a disk as soon as possible. Second, the plateau phase expected in TDE light curves when the pre-existing accretion disk of the AGN is considered is not included in the manuscript, because the plateau phase has a small time duration and/or is absent in some AGN (such as the results in Yan & Xie (2018); Zhang et al. (2022); Zhang (2022b,c), etc.), due to the low surface density of the pre-existing accretion disk of the AGN.
### Time Dependent Bolometric Luminosities from the well-known AGN NGC5548
In the manuscript, the observed long-term light curve \(L_{\rm c,~{}t,~{}N5548}\) of continuum luminosity at 5100A over 13 years of the well-known broad line AGN NGC5548 (\(z=0.01717\)) in Peterson et al. (2002) and in the AGNWATCH project ([https://www.asc.ohio-state.edu/astromony/agmwatch/n5548/1c/](https://www.asc.ohio-state.edu/astromony/agmwatch/n5548/1c/)) is collected as the AGN variability template. Then, the time dependent bolometric luminosity from NGC5548 \(L_{\rm bol,~{}t,~{}N5548}=10~{}\times~{}L_{\rm c,~{}t,~{}N5548}\) is calculated by the bolometric corrections. The bolometric correction factor 10 is accepted, based on the statistical properties of spectral energy distributions of broad line AGN discussed in Richards et al. (2006); Duras et al. (2020) and also on the more recent discussed results in Netzer (2020).
Moreover, based on the well discussed results in Peterson et al. (2004); Bentz et al. (2010); Pancoast et al. (2014), the central BH mass can be accepted as \(M_{BH}\sim 6.7\times 10^{7}\rm M_{\odot}\)1 in the well-known reverberation mapped broad line AGN NGC5548 in the AGNWATCH project and in the LAMP (Lick AGN Monitoring Project) project ([https://www.physics.uci.edu/~barth/lamp.html](https://www.physics.uci.edu/~barth/lamp.html)). And Lu et al. (2016) have reported similar BH mass of NGC5548 by the reverberation mapped results through Lijiang 2.4m telescope at Yunnan Observatory. More recently, Williams et al. (2020); Horne et al. (2021) have reported similar BH mass of NGC5548, through the space telescope and optical reverberation mapping project. Then, based on the well discussed results in Davis & Laor (2011), the energy transfer efficiency around the central BH in NGC5548 can be well estimated as
Footnote 1: BH mass values varying from \(2\times 10^{7}\rm M_{\odot}\) to \(8\times 10^{7}\rm M_{\odot}\) in NGC5548 have little effect on our final results.
\[\eta~{}=~{}0.089\times(\frac{M_{BH}}{10^{8}\rm M_{\odot}})^{0.52}~{}=~{}7.2\% \tag{6}\]
which will be applied in Equation (5) above.
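As a quick numerical check of Equation (6) for the adopted BH mass of NGC5548 (a minimal sketch, with the constant and exponent taken directly from Eq. 6):

```python
M_bh = 6.7e7                              # adopted BH mass of NGC5548 [M_sun]
eta = 0.089 * (M_bh / 1e8) ** 0.52        # Eq. (6), Davis & Laor (2011) relation
print(f"eta = {eta:.3f}")                 # ~0.072, i.e. about 7.2%
```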
### Time Dependent Bolometric Luminosities with considerations of both AGN and TDE
There are three kinds of mock light curves \(L_{\rm bol,~{}t}\) created by AGN intrinsic variability plus TDEs contributions, from simplicity to complexity. The first kind is to simply add mock light curve \(L_{\rm bol,~{}t,~{}TDE}\) to the light curve \(L_{\rm bol,~{}t,~{}N5548}\). The second kind is to add mock light curve \(L_{\rm bol,~{}t,~{}TDE}\) to a randomly modified light curve \(L_{\rm bol,~{}t,~{}AGN}\) which is created by \(L_{\rm bol,~{}t,~{}N5548}\) plus a CAR process randomly created long-term variability. The third kind is created by CAR process randomly simulated long-term variability with different central physical properties.
The first kind of \(L_{\rm bol,~t}\) is simply created as follows. Mock light curves \(L_{\rm bol,~t,~TDE}\) are created with randomly selected TDE model parameters. The BH mass \(M_{BH}\) and \(\eta\) are fixed to \(6.7\times 10^{7}\rm M_{\odot}\) and 0.072 (the values of NGC5548). The stellar mass \(M_{\star}\) is randomly selected from \(-2<\log(M_{\star}/\rm M_{\odot})<1\). The polytropic index \(\gamma\) is selected to be 4/3 or 5/3. The impact parameter is randomly selected from the minimum \(\beta_{t}\) to the maximum \(\beta_{t}\). The viscous timescale \(T_{vis}\) is randomly selected from the minimum \(T_{vis,~t}\) to the maximum \(T_{vis,~t}\). Here, there is a criterion that the expected tidal radius
\[\frac{R_{\rm TDE}}{R_{\rm s}}~=~5.06\,(M_{\star})^{-1/3}\,(\frac{M_{\rm BH,\,6}}{10})^{-2/3}~R_{\star}~>~1 \tag{7}\]
larger than the event horizon of the central BH (\(R_{\rm s}~=~2GM_{\rm BH}/c^{2}\)). Then, with \(\gamma=4/3~(\gamma~=~5/3)\), 1200 (1200) mock light curves \(L_{\rm bol,~t,~TDE}\) are randomly created. Considering TDEs with different starting times \(t_{\rm s}\) randomly from 0 to 3000 days, the mock \(L_{\rm bol,~t}\) are created by
\[L_{\rm bol,\,t}=L_{\rm bol,\,t-t_{\rm s},\,TDE}+L_{\rm bol,\,t,\,N5548} \tag{8}\]
Then, different white noises defined by signal-to-noise ratio (SNR) randomly from 30 to 80 are added to the mock light curves \(L_{\rm bol,~{}t}\). And the observational uncertainties of \(L_{\rm bol,~{}t,~{}N5548}\) are accepted as the uncertainties of \(L_{\rm bol,~{}t}\).
Before proceeding further, simple discussions are given to describe why values of SNRs for white noises are randomly selected from 30 to 80. As the collected information of the long-term light curve of NGC5548, the mean ratio of continuum emissions to uncertainties of continuum emissions is about 32. Meanwhile, to our knowledge, among our collected low-redshift (\(z<0.35\)) SDSS (Sloan Digital Sky Survey) quasars, such as the sample discussed in Zhang (2023), the highest signal-to-noise ratio of SDSS spectra is about 74. Therefore, when adding white noises to the created mock light curves in the manuscript, corresponding SNRs are randomly selected from 30 to 80. Meanwhile, accepted SNRs from 30 to 80, corresponding photometric magnitude uncertainty can be simply estimated to be from 0.036mag to 0.013mag, which are similar as the magnitude uncertainties of light curves of quasars provided by SDSS Stripe82 database (MacLeod et al., 2010).
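For concreteness, the way an SNR-defined white noise term and the corresponding photometric uncertainty can be attached to a mock light curve is sketched below; the Gaussian noise model and the magnitude-error conversion \(\sigma_{m}\simeq 2.5/(\ln 10\,\times\,{\rm SNR})\) are the usual assumptions, and the function is ours.

```python
import numpy as np

def add_snr_noise(L_bol, snr_min=30, snr_max=80, rng=None):
    """Add white noise defined by a randomly drawn signal-to-noise ratio."""
    rng = np.random.default_rng() if rng is None else rng
    snr = rng.uniform(snr_min, snr_max)
    noisy = L_bol + rng.normal(0.0, L_bol / snr)   # Gaussian white noise at the given SNR
    mag_err = 2.5 / np.log(10) / snr               # ~0.036 mag at SNR=30, ~0.014 mag at SNR=80
    return noisy, snr, mag_err
```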
The second kind of \(L_{\rm bol,~{}t}\) are created as follows. The mock light curves \(L_{\rm bol,~{}t,~{}TDE}\) are similarly created, but the AGN variability template \(L_{\rm bol,~{}t,~{}AGN}\) is created by
\[L_{\rm bol,~{}t,~{}AGN}~{}=~{}L_{\rm bol,~{}t,~{}N5548}~{}+L(CAR) \tag{9}\]
where \(L(CAR)\) is a randomly created light curve with mean of zero. And the \(L(CAR)\) (with expected variance around 0.012) is randomly created through the CAR process described in
Figure 1: Dependence of bolometric luminosity (10 times of the continuum luminosity at 5100Å) on redshift for all the collected SDSS quasars with reliable measurements of continuum luminosity from Shen et al. (2011). Solid red line shows the best description \(L_{\rm bol}=44.96~{}+~{}1.22~{}\times~{}z\).
Figure 3: Dependence of \(\tau_{TN}\) on \(R_{TN}\) and simple linear description in solid red line, based on the mock light curves \(L_{bol,\ t}\) created by \(L_{bol,\ t}\), NS54s plus contributions of TDEs with \(\gamma\ =\ 4/3\) (in the left panel), and with \(\gamma\ =\ 5/3\) (in the right panel). In left panel, solid red circle shows the results for the mock light curve \(L_{bol,\ t}\) shown in the top right panel of Fig. 2. In each panel, top corner shows the results for all the 1200 mock light curves \(L_{bol,\ t}\), however the contour is plotted for the cases with \(R_{TN}\ >\ 0.5\). In each panel, from top to bottom, dashed red lines show \(\tau_{TN}\ =\ 5,\ 2,\ 1\), respectively. And in each top corner, symbols in red and in dark green show the cases with SNR larger than 55 and smaller than 55, respectively. Meanwhile, in each top corner, due to dense data points, the error bars with uncertainties about 20% are not plotted.
Figure 2: Top left panel shows \(L_{bol,\ t}\), NS54s of NGC5548 (in dark green) and the kbs09 method determined best descriptions (solid red line). Bottom left panel shows the MCMC technique determined two-dimensional posterior distributions in contour of \(\sigma\) and \(\tau\) of \(L_{bol,\ t}\), NS54s. Top middle panel shows an example of mock TDEs light curve \(L_{bol,\ t}\), theft with model parameters marked in the panel. And due to small SNR, the light curve \(L_{bol,\ t}\), TDE is not smooth. Top right panel shows an example of mock light curve \(L_{bol,\ t}\) (solid circles plus error bars in dark green) by \(L_{bol,\ t}\), NS54s shown in the top left panel plus the \(L_{bol,\ t}\), theft shown in the top middle panel, and the kbs09 method determined best descriptions (solid red line). Bottom right panel shows the MCMC technique determined two-dimensional posterior distributions in contour of \(\sigma\) and \(\tau\) of \(L_{bol,\ t}\) shown in the top-right panel.
Kelly, Bechtold & Siemiginowska (2009),
\[{\rm d}L(CAR)\,=\,\frac{-1}{\tau}L(CAR)\,{\rm d}t\,+\,\sigma\sqrt{{\rm d}t}\,\epsilon(t) \tag{10}\]
where \(\epsilon(t)\) a white noise process with zero mean and variance equal to 1. Here, the parameter \(\tau\) is randomly selected from 100days to 1000days, as the shown results in MacLeod et al. (2010) for normal quasars. Then, the mock \(L_{\rm bol,\ t}\) are similarly created by
\[L_{\rm bol,\ t}\,=\,L_{\rm bol,\ t-t_{\rm s},\ TDE}\,+\,L_{\rm bol,\ t,\ AGN} \tag{11}\]
And different white noises defined by SNRs randomly from 30 to 80 are added to the mock light curves \(L_{\rm bol,\ t}\). Here, the light curve \(L_{\rm bol,\ t,\ AGN}\) has a different intrinsic variability timescale and amplitude from those of \(L_{\rm bol,\ t,\ N5548}\), which provides further considerations of the effects of TDEs on long-term variability of AGN. And the observational uncertainties of \(L_{\rm bol,\ t,\ N5548}\) are accepted as the uncertainties of \(L_{\rm bol,\ t}\).
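A minimal Euler-Maruyama sketch of the CAR(1) process of Equation (10), used to generate \(L(CAR)\), is given below; the day-sized time step and the example parameter values are only illustrative.

```python
import numpy as np

def simulate_car(n_days, tau, sigma, dt=1.0, rng=None):
    """Generate a zero-mean L(CAR) series via Eq. (10):
    dL = -(1/tau) L dt + sigma sqrt(dt) eps(t),  with eps ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.zeros(n_days)
    for i in range(1, n_days):
        L[i] = L[i - 1] - L[i - 1] / tau * dt + sigma * np.sqrt(dt) * rng.normal()
    return L

# The long-run variance of this process is sigma**2 * tau / 2 (~0.012 in the text),
# so for example tau = 400 days corresponds to sigma = (2 * 0.012 / 400) ** 0.5 ~ 0.0077.
```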
The third kind of \(L_{\rm bol,\ t}\) is mainly created as follows after considering different parameters of BH mass, redshift, energy transfer efficiency, etc. The AGN variability template \(L_{\rm bol,\ t,\ CAR}\) is created by the CAR process determined \(L(CAR)\) plus an expected bolometric luminosity \(L_{b0}\) (\(\log(L_{bol}/{\rm erg/s})\)) depending on redshift,
\[L_{\rm bol,\ t,\ CAR}\,=\,L_{b0}\,+\,L(CAR)\]
\[{\rm d}L(CAR)\,=\,\frac{-1}{\tau_{0}}L(CAR)\,{\rm d}t\,+\,\sigma\sqrt{{\rm d}t}\,\epsilon(t) \tag{12}\]
\[L_{b0}\,=\,44.96\,+\,1.22\,\times\,z\]
where \(\tau_{0}\) is selected to be 200days or 600days (a common value and a large value of the intrinsic variability timescale in quasars, see results in MacLeod et al. (2010); Kelly, Bechtold & Siemiginowska (2009); Kozlowski et al. (2010); Rumbaugh et al. (2018)), and \(\frac{\sigma^{2}\tau}{2}\) is selected to be around 0.012 (leading to a similar variance as that of NGC5548). The selected parameters of \(\tau\) and \(\sigma\) give \(L(CAR)\) a mean of zero and a variance similar to that of \(L_{\rm bol,\ t,\ N5548}\). The dependence of bolometric luminosity on redshift, \(L_{b0}\,\propto\,1.22\,\times\,z\), shown in Fig. 1, is well determined from all the 23093 SDSS quasars in Shen et al. (2011) with measured continuum luminosity at 5100Å. There is a strong positive correlation between redshift and bolometric luminosity calculated by 10 times the continuum luminosity at 5100Å, with Spearman rank correlation coefficient 0.66 (\(P_{null}\,<\,10^{-15}\)) and with RMS scatter about 0.29. Here, 6 different values of 0.05, 0.1, 0.2, 0.3, 0.5, 1 are accepted as input redshifts, applied to determine \(L_{b0}\). Meanwhile, based on the three different BH masses \(M_{BH}\,=\,10^{6},\ 10^{7},\ 5\times 10^{7}\ {\rm M}_{\odot}\), three different energy transfer efficiencies \(\eta\,=\,0.06,\ 0.15,\ 0.3\) and the six redshifts, the \(L_{\rm bol,\ t-t_{\rm s},\ TDE}(M_{BH},\ \eta,\ z)\) can be randomly created. Then, the mock light curves \(L_{\rm bol,\ t}\) are similarly created by
\[L_{\rm bol,\ t}\,=\,L_{\rm bol,\ t-t_{\rm s},\ TDE}(M_{BH},\ \eta,\ z)\,+\,L_{\rm bol,\ t,\ CAR} \tag{13}\]
And different white noises defined by SNRs randomly from 30 to 80 are added to the mock light curves \(L_{\rm bol,\ t}\). For each series [\(\gamma\), \(M_{BH},\eta,\ z\), \(\tau\)], 1200 mock light curves are created with contributions of TDEs. Finally, there are \(2\times 3\times 3\times 6\times 2\times 1200\,=\,259200\) mock light curves created after considering TDEs contributions to intrinsic AGN variability. And 10% are accepted as the uncertainties of \(L_{\rm bol,\ t}\).
Besides the linear dependence of bolometric luminosity on redshift, a dependence of BH mass on redshift was also checked using the parameters of the quasars reported in Shen et al. (2011). However, the Spearman rank correlation coefficient of that dependence is only 0.29, considerably weaker than the dependence of bolometric luminosity on redshift. Therefore, the linear dependence of bolometric luminosity on redshift, rather than a dependence of BH mass on redshift, is adopted in the manuscript. Applying the linear dependence of bolometric luminosity on redshift removes one free model parameter when creating the third kind of mock light curves. Moreover, as shown in MacLeod et al. (2010); Kelly, Bechtold & Siemiginowska (2009), there is a dependence of the process parameter \(\tau\) on BH mass, but it is very loose, with a Spearman rank correlation coefficient of about 0.23, and it is therefore not adopted here. Treating the BH mass, \(\tau\) and redshift as independent parameters allows a much wider parameter space to be covered by the mock light curves, leading to more robust conclusions.
Before the end of the section, three points are noted. First, in order to clearly show the model parameters applied to create the TDE contributions and \(L(CAR)\), Table 1 lists the accepted values and/or ranges of the applied model parameters. Second, the main objective of the manuscript is to determine the effects of TDE contributions on observed long-term AGN variability, proceeding from simplicity to complexity. Therefore, when the first and second kinds of mock light curves are created, a simplified procedure is applied with a fixed BH mass (that of NGC 5548), a fixed energy transfer efficiency (determined by the BH mass of NGC 5548) and a fixed redshift (that of NGC 5548); the effects of randomly selected model parameters are then considered through the third kind of mock light curves. Last, for the three kinds of mock light curves \(L_{\rm bol,\ t}\), the maximum BH mass is \(6.7\times 10^{7}\ {\rm M}_{\odot}\) (the BH mass of NGC 5548), a large value (near the Hills mass limit) but a reasonable one, given the maximum BH mass of about \(66\times 10^{6}\ {\rm M}_{\odot}\) determined with MOSFiT for the TDE candidates in Mockler, Guillochon & Ramirez-Ruiz (2019). Meanwhile, when the third kind of mock light curves is created, Equation (6) is not applied to determine the energy transfer efficiency, because the \(\eta\) values listed in Mockler, Guillochon & Ramirez-Ruiz (2019) show that high \(\eta\) can be expected around central BHs with masses around \(10^{6}\ {\rm M}_{\odot}\). Based on the same results, the adopted \(\eta\) values from 0.06 to 0.3 are also reasonable for creating the time-dependent TDE bolometric luminosities of the third kind of mock light curves.
## 3 Main results
### Results based on the Long-Term Variabilities of \(L_{\rm bol,\ t,\ N5548}\)
As discussed in Kelly, Bechtold & Siemiginowska (2009) (see their Fig. 4), the long-term variability \(L_{\rm bol,\ t,\ N5548}\) of NGC 5548 has an intrinsic variability timescale of about 214 days. The same method as described by Equations (7)-(12) in Kelly, Bechtold & Siemiginowska (2009) (the kbs09 method) is first applied to the variability of \(L_{\rm bol,\ t,\ N5548}\), in order to verify that the kbs09 method as implemented in the manuscript is reliable. The kbs09 method is used rather than the public JAVELIN (Just Another Vehicle for Estimating Lags In Nuclei) code of Zu, Kochanek & Peterson (2011); Zu et al. (2013) mainly for the following reason: for each mock light curve with about 1500 data points (time duration longer than 10 years), the kbs09 method running on a Surface Studio 2 gives the final best-fitting results within ten minutes through the Levenberg-Marquardt least-squares minimization technique (the well-known MPFIT package, Markwardt 2009), whereas the JAVELIN code takes more than one hour.
\(L_{\rm bol,~{}t,~{}N5548}\) is shown in the top left panel of Fig. 2, together with the kbs09-method best description determined through the Maximum Likelihood method combined with the Markov Chain Monte Carlo (MCMC) technique (Foreman-Mackey et al., 2013), where the process parameters determined by the kbs09 method through the MPFIT package are accepted as the starting values of the MCMC chains. The resulting posterior distributions of \(\tau\) and \(\sigma\) are shown in the bottom left panel of Fig. 2, with \(\log(\tau/{\rm days})~{}\sim~{}2.3^{+0.10}_{-0.076}\) (\(\tau~{}\sim~{}219^{+60}_{-36}\) days), which is well consistent with the 214 days reported in Kelly, Bechtold & Siemiginowska (2009). Therefore, the applied kbs09 method is reliable.
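For reference, a compact version of the kbs09 likelihood and its maximization can be written as follows. This is a minimal sketch of the recursive CAR(1)/DRW likelihood of Kelly, Bechtold & Siemiginowska (2009) for irregularly sampled data with measurement errors, here maximized with `scipy.optimize.minimize` as a simple stand-in for the MPFIT/MCMC machinery used in the manuscript; the synthetic data generated at the end are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def drw_neg_log_like(params, t, y, yerr):
    """Negative log-likelihood of a CAR(1)/DRW process, following the
    recursion of Kelly, Bechtold & Siemiginowska (2009).
    params = (log tau, log sigma, mean)."""
    log_tau, log_sigma, mu = params
    tau, sigma = np.exp(log_tau), np.exp(log_sigma)
    var0 = 0.5 * sigma**2 * tau              # stationary variance sigma^2 tau / 2
    x = y - mu
    omega, xhat, lnlike = var0, 0.0, 0.0
    for i in range(len(t)):
        if i > 0:
            a = np.exp(-(t[i] - t[i - 1]) / tau)
            gain = omega / (omega + yerr[i - 1] ** 2)
            xhat = a * (xhat + gain * (x[i - 1] - xhat))
            omega = var0 * (1.0 - a**2) + a**2 * omega * (1.0 - gain)
        s2 = omega + yerr[i] ** 2
        lnlike += -0.5 * ((x[i] - xhat) ** 2 / s2 + np.log(2.0 * np.pi * s2))
    return -lnlike

# Illustrative synthetic DRW data with tau = 219 days and variance 0.012.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 4700.0, 500))
y, true_tau, true_var = np.zeros(t.size), 219.0, 0.012
for i in range(1, t.size):
    a = np.exp(-(t[i] - t[i - 1]) / true_tau)
    y[i] = a * y[i - 1] + rng.normal(0.0, np.sqrt(true_var * (1.0 - a**2)))
yerr = np.full(t.size, 0.02)
y += rng.normal(0.0, yerr)

res = minimize(drw_neg_log_like, x0=[np.log(200.0), np.log(0.01), y.mean()],
               args=(t, y, yerr), method="Nelder-Mead")
print("recovered tau [days]:", np.exp(res.x[0]))
```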
Through the kbs09 method applied with the Levenberg-Marquardt least-squares minimization technique, the variability properties, in particular the CAR process parameters \(\sigma\) and \(\tau\), can be well determined for the total 2400 mock light curves \(L_{\rm bol,~{}t}\) created as \(L_{\rm bol,~{}t,~{}N5548}\) plus \(L_{\rm bol,~{}t,~{}TDE}\). The top middle and top right panels of Fig. 2 show an example of \(L_{\rm bol,~{}t,~{}TDE}\) and an example of the corresponding \(L_{\rm bol,~{}t}\). Although the example shown in the top right panel of Fig. 2 has no clear TDE-like variability features, the determined variability timescale is about 520 days, as shown by the posterior distributions in the bottom right panel of Fig. 2 determined with the MCMC technique applied in the kbs09 method, significantly longer than the intrinsic 219 days of NGC 5548, indicating that TDE contributions can lead to larger variability timescales.
In order to show the effects of TDE contributions more clearly, two parameters \(R_{TN}\) and \(\tau_{TN}\) are defined: \(R_{TN}\) is the ratio of the peak intensity of \(L_{\rm bol,~{}t,~{}TDE}\) to the mean intensity of \(L_{\rm bol,~{}t,~{}N5548}\), and \(\tau_{TN}\) is the ratio of the variability timescale of \(L_{\rm bol,~{}t}\) to the intrinsic variability timescale of 219 days of \(L_{\rm bol,~{}t,~{}N5548}\). Fig. 3 shows the dependence of \(\tau_{TN}\) on \(R_{TN}\) for the 2400 mock light curves \(L_{\rm bol,~{}t}\): 1200 based on the \(L_{\rm bol,~{}t,~{}TDE}\) created with \(\gamma~{}=~{}4/3\) and 1200 based on the \(L_{\rm bol,~{}t,~{}TDE}\) created with \(\gamma~{}=~{}5/3\). For \(R_{TN}~{}>~{}0.5\) (stronger TDE contributions), there are positive correlations between \(\tau_{TN}\) and \(R_{TN}\), with Spearman rank correlation coefficients of about 0.71 (0.79) with \(P_{null}~{}<~{}10^{-15}\) for the cases with \(\gamma~{}=~{}4/3\) (\(\gamma~{}=~{}5/3\)). Here, the critical value \(R_{TN}~{}>~{}0.5\) is determined simply by requiring that the variance of \(\tau_{TN}\) of the data points with \(R_{TN}~{}>~{}0.5\) is at least 2000 times larger than the variance of \(\tau_{TN}\) of the data points with \(R_{TN}~{}<~{}0.5\); slightly different critical values have little effect on the results. After considering the uncertainties in both coordinates, the positive dependence for \(R_{TN}~{}>~{}0.5\) can be described by
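The two diagnostics and the rank correlation can be computed as in the sketch below, where `tau_fit`, `L_tde_peak` and `L_n5548_mean` are hypothetical arrays/values standing in for the fitted timescales of the 2400 mock curves, the TDE peak intensities, and the mean intensity of the NGC 5548 light curve.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
tau_fit = rng.uniform(150.0, 900.0, 2400)        # hypothetical fitted timescales (days)
L_tde_peak = rng.uniform(0.0, 3.0, 2400)         # hypothetical TDE peak intensities
L_n5548_mean = 1.0                               # hypothetical mean intensity of NGC 5548

R_TN = L_tde_peak / L_n5548_mean
tau_TN = tau_fit / 219.0                         # intrinsic timescale of NGC 5548

mask = R_TN > 0.5                                # stronger-TDE subsample
rho, p_null = spearmanr(R_TN[mask], tau_TN[mask])
print(f"Spearman rho = {rho:.2f}, P_null = {p_null:.2g}")
```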
\[\begin{split}\log(\tau_{TN})(\gamma~{}=~{}4/3)~{}=~{}0.20~{}+~{}0. 58\log(R_{TN})\\ \log(\tau_{TN})(\gamma~{}=~{}5/3)~{}=~{}0.21~{}+~{}0.89\log(R_{TN} )\end{split} \tag{14}\]
through the FITEXY code ([https://idlastro.gsfc.nasa.gov/ftp/pro/math/](https://idlastro.gsfc.nasa.gov/ftp/pro/math/), written by Frank Varosi), as discussed in Tremaine et al. (2002). It is clear that longer variability timescales are found for larger TDE contributions, and the SNR has little effect on the results, as shown in the top corners of the panels of Fig. 3.
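A fit of the form \(\log(\tau_{TN})\,=\,A\,+\,B\log(R_{TN})\) with uncertainties in both coordinates can be reproduced, for example, with orthogonal distance regression as a stand-in for the IDL FITEXY routine; the sketch below assumes the hypothetical arrays from the previous sketch and adopts an illustrative 0.1 dex uncertainty in each coordinate.

```python
import numpy as np
from scipy import odr

# Hypothetical subsample with R_TN > 0.5, in log space, with ~0.1 dex errors.
x, y = np.log10(R_TN[mask]), np.log10(tau_TN[mask])
sx = np.full(x.size, 0.1)
sy = np.full(y.size, 0.1)

def linear(beta, x):
    return beta[0] + beta[1] * x

fit = odr.ODR(odr.RealData(x, y, sx=sx, sy=sy),
              odr.Model(linear), beta0=[0.2, 0.6]).run()
A, B = fit.beta
print(f"log(tau_TN) = {A:.2f} + {B:.2f} log(R_TN)")
```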
Before the end of this subsection, the scatter of \(\tau_{TN}\) at a given \(R_{TN}\) can be briefly discussed. For small values of \(R_{TN}\), the TDE contributions are very weak and have little effect on the \(\tau\) determined from the mock light curves, leading to small scatter in \(\tau_{TN}\). For larger values of \(R_{TN}\), which correspond to apparent TDE contributions, different values of the stellar mass and \(\beta\) and large values of \(T_{vis}\) can produce quite different time durations of the TDE light curves at the same peak intensity, and these different durations lead to different values of \(\tau_{TN}\). Unless the TDE model parameters are fixed for a given \(R_{TN}\), such scatter in \(\tau_{TN}\) is therefore expected; it does not weaken the clues used in the manuscript to detect hidden TDEs in broad-line AGN with apparent intrinsic variability. Similar scatter in \(\tau_{TN}\) is also expected in the following subsections.
**Table 1.** Accepted values and/or ranges of the model parameters applied to create the TDE contributions (with \(\gamma=4/3,\ 5/3\)) and \(L(CAR)\) for the three kinds of mock light curves.

| kind | \(M_{BH}\) [M\({}_{\odot}\)] | \(\log(M_{\star}/{\rm M}_{\odot})\) | \(\eta\) | \(z\) | \(\beta\) | \(\log(T_{vis}/{\rm yr})\) | \(t_{s}\) [days] | \(\tau\) [days] | \(\sigma^{2}\tau/2\) |
|---|---|---|---|---|---|---|---|---|---|
| 1st | \(6.7\times10^{7}\) | \(\in[-2,\ 1]\) | 0.072 | 0.01717 | \(\in[\beta_{l},\ \beta_{m}]\) | \(\in[-3,\ 0]\) | \(\in[0,\ 3000]\) | – | – |
| 2nd | \(6.7\times10^{7}\) | \(\in[-2,\ 1]\) | 0.072 | 0.01717 | \(\in[\beta_{l},\ \beta_{m}]\) | \(\in[-3,\ 0]\) | \(\in[0,\ 3000]\) | \(\in[100,\ 1000]\) | \(\in[0.003,\ 0.048]\) |
| 3rd | \(\subset[10^{6},\ 10^{7},\ 5\times10^{7}]\) | \(\in[-2,\ 1]\) | \(\subset[0.06,\ 0.15,\ 0.3]\) | \(\subset[0.05,\ 0.1,\ 0.2,\ 0.3,\ 0.5,\ 1]\) | \(\in[\beta_{l},\ \beta_{m}]\) | \(\in[-3,\ 0]\) | \(\in[0,\ 3000]\) | \(\subset[200,\ 600]\) | \(\in[0.003,\ 0.048]\) |

* The first column indicates the kind of mock light curve: '1st' means \(L_{\rm bol,\ t}\,=\,L_{\rm bol,\ t,\ TDE}\,+\,L_{\rm bol,\ t,\ N5548}\); '2nd' means \(L_{\rm bol,\ t}\,=\,L_{\rm bol,\ t,\ TDE}\,+\,L_{\rm bol,\ t,\ N5548}\,+\,L(CAR)\), with \(L(CAR)\) the CAR-process variability; '3rd' means \(L_{\rm bol,\ t}\,=\,L_{\rm bol,\ t,\ TDE}(M_{BH},\ \eta,\ z)\,+\,L_{\rm bol,\ t,\ CAR}\).
* Columns 2-8 list the BH mass (in units of M\({}_{\odot}\)), the logarithmic stellar mass (in units of M\({}_{\odot}\)), the energy transfer efficiency \(\eta\), the redshift \(z\), the impact parameter \(\beta\), the logarithmic \(T_{vis}\) (in units of years) and the shifted time \(t_{s}\) (in units of days) applied in the theoretical TDE model.
* The last two columns list the CAR process parameters applied to create \(L(CAR)\): \(\tau\) (in units of days) and \(\sigma^{2}\tau/2\) (the expected variance of the CAR-created light curve).
* In each cell, a single value means the parameter is fixed to that value; the symbol \(\in\) means the parameter is randomly selected between the minimum and maximum values listed in the square brackets; the symbol \(\subset\) means the parameter is chosen from the values listed in the square brackets.
* Based on the variance 0.012 of the light curve of NGC 5548, \(\sigma^{2}\tau/2\) is accepted to be larger than \(0.25\times0.012\) and smaller than \(4\times0.012\) for the \(L(CAR)\) used in the second and third kinds of mock light curves.
### Results based on the Long-Term Variabilities of \(L_{\rm bol,~{}t,~{}AGN}\)
Similar to the analysis of \(L_{\rm bol,~{}t,~{}N5548}\), the top panels of Fig. 4 show an example of the mock light curve \(L_{\rm bol,~{}t,~{}TDE}\) (middle panel) and an example of the mock light curve \(L_{bol,~{}t}\) (right panel) created from the \(L_{\rm bol,~{}t,~{}AGN}\) shown in the left panel plus the \(L_{\rm bol,~{}t,~{}TDE}\) shown in the middle panel. The kbs09 method, applied through the Levenberg-Marquardt least-squares minimization technique, gives an intrinsic variability timescale of \(L_{\rm bol,~{}t,~{}AGN}\) of \(\tau~{}\sim~{}620\) days. The bottom panels of Fig. 4 show the dependence of \(\tau_{TN}\) on \(R_{TN}\) for the 2400 mock light curves \(L_{\rm bol,~{}t}\) based on \(L_{\rm bol,~{}t,~{}AGN}\). For \(R_{TN}~{}>~{}0.5\), the Spearman rank correlation coefficient is about 0.63 (0.68) with \(P_{null}~{}<~{}10^{-15}\) for the cases with \(\gamma~{}=~{}4/3\) (\(\gamma~{}=~{}5/3\)). The positive dependence for \(R_{TN}~{}>~{}0.5\) can be described by
\[\begin{array}{l}\log(\tau_{TN})(\gamma~{}=~{}4/3)~{}=~{}0.03~{}+~{}0.30\log (R_{TN})\\ \log(\tau_{TN})(\gamma~{}=~{}5/3)~{}=~{}0.15~{}+~{}0.40\log(R_{TN})\end{array} \tag{15}\]
through the same FITEXY code.
Similar results are found: longer variability timescales are obtained for larger TDE contributions, and the SNR has little effect on the results. However, when the intrinsic AGN variability has a longer variability timescale, \(\tau_{TN}\) increases more slowly with \(R_{TN}\), as indicated by the smaller slopes in the equations above.
### Results based on the Long-Term Variabilities of \(L_{\rm bol,~{}t,~{}CAR}\)
In this subsection, we check the effects of different model parameters on the dependence of \(\tau_{TN}\) on \(R_{TN}\), determined through the MPFIT package applied in the kbs09 method.

Fig. 5 shows two examples of the mock light curves \(L_{\rm bol,~{}t}\) with different input model parameters marked in each panel. The first \(L_{\rm bol,~{}t}\) is created with \(\tau_{0}~{}=~{}200\) days, \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), \(z~{}=~{}0.05\), \(\gamma~{}=~{}4/3\), \(\beta~{}\sim~{}1.55\), \(M_{*}~{}\sim~{}0.31\)M\({}_{\odot}\), \(T_{vis}~{}\sim~{}0.218\), \(t_{s}~{}\sim~{}2200\) days, SNR = 46, \(\eta~{}=~{}0.06\). The second \(L_{\rm bol,~{}t}\) is created with \(\tau_{0}~{}=~{}600\) days, \(M_{BH}~{}=~{}5\times 10^{7}\)M\({}_{\odot}\), \(z~{}=~{}1.0\), \(\gamma~{}=~{}5/3\), \(\beta~{}\sim~{}1.32\), \(M_{*}~{}\sim~{}8.32\)M\({}_{\odot}\), \(T_{vis}~{}\sim~{}0.085\), \(t_{s}~{}\sim~{}1500\) days, SNR = 30, \(\eta~{}=~{}0.3\). Before proceeding further, there is an intuitive expectation: the shorter TDE variability timescales in cases with smaller BH masses should have little effect on the measured variability timescale, as illustrated by the tiny timescale changes in the top panels of Fig. 5. More detailed results are as follows.

Fig. 6 shows the dependence of \(\tau_{TN}\) on \(R_{TN}\) for the cases (cases-6-2-4; the first number '6' denotes a BH mass of \(10^{6}\)M\({}_{\odot}\), the second number '2' denotes \(\tau_{0}/100\,{\rm days}~{}=~{}2\), and the third number '4' denotes \(\gamma~{}\times~{}3~{}=~{}4\)) with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), \(\tau_{0}~{}=~{}200\) days, and \(\gamma~{}=~{}4/3\). TDE contributions around BHs with masses of about \(10^{6}\)M\({}_{\odot}\) clearly have little effect on the dependence of \(\tau_{TN}\) on \(R_{TN}\): all the results shown in Fig. 6 have Spearman rank correlation coefficients smaller than 0.3 for the data points with \(R_{TN}~{}>~{}1\), even considering different redshifts and different \(\eta\). This is expected, because the variability timescales of TDEs around \(10^{6}\)M\({}_{\odot}\) BHs are small relative to the long time durations of \(L_{\rm bol,~{}t,~{}CAR}\). Besides the results for cases-6-2-4, there are
Figure 4: Top panels show results similar to those in the top panels of Fig. 2, but based on the light curve \(L_{\rm bol,~{}t,~{}AGN}\) shown in the top left panel. Bottom panels show results similar to those in Fig. 3, but based on the light curve \(L_{\rm bol,~{}t,~{}AGN}\), which has an intrinsic variability timescale of about 620 days. In the bottom right panel, the solid red circle marks the result for the mock light curve \(L_{bol,~{}t}\) shown in the top right panel. In the top corners of the bottom panels, due to the large number of dense data points, error bars with uncertainties of about 20% are not plotted.
similar results, i.e., no apparent positive dependence of \(\tau_{TN}\) on \(R_{TN}\), for the cases (cases-6-6-4) with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), \(\tau_{0}~{}=~{}600\) days and \(\gamma~{}=~{}4/3\), for the cases (cases-6-2-5) with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), \(\tau_{0}~{}=~{}200\) days and \(\gamma~{}=~{}5/3\), and for the cases (cases-6-6-5) with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), \(\tau_{0}~{}=~{}600\) days and \(\gamma~{}=~{}5/3\). Therefore, the results for cases-6-6-4, cases-6-2-5 and cases-6-6-5 are not shown in plots, and there is no further discussion of the results with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\); the determined Spearman rank correlation coefficients for all the cases with BH mass \(10^{6}\)M\({}_{\odot}\) are listed in Table 2. In short, contributions of TDEs around BHs with masses of \(10^{6}\)M\({}_{\odot}\) cannot provide clear clues to central TDEs through long-term variability.
Then, similar to the analysis for the cases with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), the dependence of \(\tau_{TN}\) on \(R_{TN}\) is also examined for BH masses of \(10^{7}\)M\({}_{\odot}\) and \(5\times 10^{7}\)M\({}_{\odot}\). Based on the two values of \(M_{BH}\), the two values of \(\tau_{0}\) and the two values of \(\gamma\), there are 8 cases, named cases-7-2-4 (the first number '7' denotes \(\log(M_{BH}/{\rm M}_{\odot})=7\), the second number '2' denotes \(\tau_{0}/100\,{\rm days}=2\), and the third number '4' denotes \(\gamma~{}\times~{}3~{}=~{}4\)), cases-7-6-4, cases-7-2-5, cases-7-6-5, cases-7.7-2-4 (the first number '7.7' denotes \(\log(M_{BH}/{\rm M}_{\odot})=\log(5\times 10^{7})\sim 7.7\)), cases-7.7-6-4, cases-7.7-2-5 and cases-7.7-6-5. Then, similar to the \(18\times 4\) dependences for the four cases with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), all the 144 (\(18\times 8\)) dependences of \(\tau_{TN}\) on \(R_{TN}\) for \(R_{TN}~{}>~{}R_{cri}\) are carefully checked for the cases with \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\) and \(5\times 10^{7}\)M\({}_{\odot}\). Here, the critical values \(R_{cri}~{}=~{}0.3\) and \(R_{cri}~{}=~{}0.15\) are adopted for the cases with \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\) and \(M_{BH}~{}=~{}5\times 10^{7}\)M\({}_{\odot}\), respectively, determined simply by requiring that the variance of \(\tau_{TN}\) of the data points with \(R_{TN}~{}>~{}R_{cri}\) is at least 2000 times larger than that of the data points with \(R_{TN}~{}<~{}R_{cri}\). The determined Spearman rank correlation coefficients are listed in Table 2. Meanwhile, for the correlations with coefficients larger than 0.3, the strong positive correlations between \(\tau_{TN}\) and \(R_{TN}\) for \(R_{TN}~{}>~{}R_{cri}\) can be well described, through the same FITEXY code, by
\[\log(\tau_{TN})~{}=~{}A~{}+~{}B~{}\times~{}\log(R_{TN}) \tag{16}\]
with determined \(B\) also listed in Table 2.
Not all the 144 (\(18\times 8\)) dependences of \(\tau_{TN}\) on \(R_{TN}\) are shown in plots; instead, for each case with \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\) or \(M_{BH}~{}=~{}5\times 10^{7}\)M\({}_{\odot}\), the dependence with the maximum Spearman rank correlation coefficient among the 18 dependences is shown in Fig. 7. Meanwhile, based on the determined coefficients and slopes \(B\) (where available) listed in Table 2 for the 216 dependences in the 12 cases with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), \(10^{7}\)M\({}_{\odot}\) and \(5\times 10^{7}\)M\({}_{\odot}\), the properties of the coefficients and slopes \(B\) are shown in Fig. 8.

Based on the determined coefficients listed in Table 2 and the results shown in Fig. 8, the following seven points can be made. First, compared with the cases with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\), the positive dependence of \(\tau_{TN}\) on \(R_{TN}\) (\(R_{TN}~{}>~{}0.3\)) is clearer and more sensitive: almost all the cases with input \(\tau_{0}~{}=~{}200\) days and \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\) have Spearman rank correlation coefficients larger than 0.3 for the correlations with \(R_{TN}~{}>~{}0.3\). Second, for the cases with \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\), an intrinsic variability timescale as long as 600 days leads to no clear positive dependence of \(\tau_{TN}\) on \(R_{TN}\), whereas a timescale of 200 days does lead to a clear positive dependence. Third, for the cases with \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\), the positive dependence of \(\tau_{TN}\) on \(R_{TN}\) is steeper (larger \(B\)) for \(\gamma~{}=~{}5/3\) than for \(\gamma~{}=~{}4/3\). Fourth, compared with the cases with \(M_{BH}~{}=~{}10^{6}\)M\({}_{\odot}\) and \(M_{BH}~{}=~{}10^{7}\)M\({}_{\odot}\), the positive dependence of \(\tau_{TN}\) on \(R_{TN}\) is clearer and more sensitive for the cases with \(M_{BH}~{}=~{}5\times 10^{7}\)M\({}_{\odot}\): all the cases with input \(\tau_{0}~{}=~{}200\) days and half of the cases with \(\tau_{0}~{}=~{}600\) days have
Figure 5: Two examples of the mock light curves: \(L_{\rm bol,~{}t,~{}CAR}\) shown as dots plus error bars in dark green in the left panels, \(L_{\rm bol,~{}t,~{}TDE}(M_{BH},~{}\eta,~{}z)\) shown in the middle panels, and \(L_{\rm bol,~{}t}\) shown as dots plus error bars in dark green in the right panels. In each left panel, the input model parameters of BH mass \(M_{6}\) (in units of \(10^{6}\)M\({}_{\odot}\)), redshift and \(\tau_{0}\) are listed in blue characters. In each middle panel, the input TDE model parameters \(\gamma\), \(\beta\), stellar mass \(M_{\bullet}\), \(T_{vis}\), \(t_{s}\), \(SNR\) and \(\eta\) are listed in blue characters. In each right panel, the solid red line shows the kbs09-method best description of \(L_{\rm bol,~{}t}\), and the corresponding determined timescale \(\tau\) is listed in blue characters.
Figure 6: Dependence of \(\tau_{TN}\) on \(R_{TN}\) for the mock light curves based on the long-term variability \(L_{\rm bol,\,t,\,CAR}\) created with \(M_{BH}=10^{6}\)M\({}_{\odot}\) and \(\tau_{0}=200\) days plus the \(L_{\rm bol,\,t,\,TDE}\) created with \(\gamma=4/3\). For the panels from top to bottom, the results are based on redshifts of 0.05, 0.1, 0.2, 0.3, 0.5 and 1.0, respectively. For the panels from left to right, the results are based on \(\eta\) of 0.06, 0.15 and 0.3, respectively. In each panel, the Spearman rank correlation coefficient for the correlation between \(\tau_{TN}\) and \(R_{TN}\) (\(R_{TN}~{}>~{}1\)) is listed in blue characters. In each panel, pluses in red and in dark green show the results with SNR larger than 55 and smaller than 55, respectively. In each panel, due to the large number of dense data points, error bars with uncertainties of about 20% - 25% are not plotted.
coefficients larger than 0.3 for the correlation with \(R_{TN}\,>\,0.15\). Fifth, for the cases with BH masses of about \(5\,\times\,10^{7}\)M\({}_{\odot}\), an intrinsic variability timescale as long as 600 days leads to a clear positive dependence of \(\tau_{TN}\) on \(R_{TN}\) only for \(\gamma\,=\,5/3\), whereas a timescale of 200 days almost always leads to a clear positive dependence. Sixth, the positive dependence of \(\tau_{TN}\) on \(R_{TN}\) is steeper (larger \(B\)) for \(\gamma\,=\,5/3\) than for \(\gamma\,=\,4/3\). Seventh, the SNR has little effect on the dependences of \(\tau_{TN}\) on \(R_{TN}\), as shown by the results in Fig. 7.
Based on the results above, we find that:
* BH mass has a clear effect on the dependence of \(\tau_{TN}\) on \(R_{TN}\): larger BH masses lead to a clearer and steeper dependence.
* The polytropic index \(\gamma\) has a clear effect on the dependence of \(\tau_{TN}\) on \(R_{TN}\): \(\gamma\,=\,5/3\) leads to a clearer and steeper dependence than \(\gamma\,=\,4/3\).
* Redshift has little effect on the dependence of \(\tau_{TN}\) on \(R_{TN}\). At least, changing the redshift from 0.05 to 1.0 does not lead to clear changes in the dependence; only the parameter \(B\) increases quite smoothly with redshift in cases-7-2-5 (with \(M_{BH}=10^{7}\)M\({}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=5\)), cases-7.7-2-4 (with \(M_{BH}=5\times 10^{7}\)M\({}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=4\)), cases-7.7-2-5 (with \(M_{BH}=5\times 10^{7}\)M\({}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=5\)) and cases-7.7-6-5 (with \(M_{BH}=5\times 10^{7}\)M\({}_{\odot}\), \(\tau_{0}=600\) days and \(3\times\gamma=5\)).
* The energy transfer efficiency has little effect on the dependence of \(\tau_{TN}\) on \(R_{TN}\). At least, changing \(\eta\) from 0.06 to 0.3 does not lead to clear changes in the dependence.
## 4 Discussions and further applications
It is necessary to check whether intrinsic AGN variability alone can produce quite different variability timescales in different epochs. Based on the 13-year-long light curve \(L_{\rm bol,\,\,t,\,N5548}\), 100 different 2000-day-long light curves (about 10 times the intrinsic variability timescale of \(\sim\)200 days) are randomly collected from \(L_{\rm bol,\,\,t,\,N5548}\), each covering the time range from a randomly chosen starting time \(0\,<\,t_{0}/{\rm days}\,<\,3600\) to \(t_{0}\,+\,2000\). The kbs09 method is then applied to determine the variability timescales \(\tau_{d}\) of the 100 segments. We find that the ratios of \(\tau_{d}\) to the 219-day variability timescale of \(L_{\rm bol,\,\,t,\,N5548}\) have a mean value of 1.02 with a standard deviation of 0.11. Clearly, light curves in different epochs cannot make the variability timescale vary as strongly as shown in Fig. 3 for large TDE contributions. Similar results are found for the mock light curves of \(L_{\rm bol,\,\,t,\,AGN}\) and \(L_{\rm bol,\,\,t,\,CAR}\).
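This epoch-to-epoch check can be scripted as in the sketch below, reusing `drw_neg_log_like` from the earlier sketch; the arrays `t`, `y` and `yerr` are hypothetical placeholders for the observed NGC 5548 light curve and its uncertainties.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
ratios = []
for _ in range(100):
    t0 = rng.uniform(0.0, 3600.0)                      # random starting time (days)
    sel = (t >= t0) & (t < t0 + 2000.0)                # 2000-day-long segment
    res = minimize(drw_neg_log_like,
                   x0=[np.log(200.0), np.log(0.01), y[sel].mean()],
                   args=(t[sel], y[sel], yerr[sel]), method="Nelder-Mead")
    ratios.append(np.exp(res.x[0]) / 219.0)            # tau_d relative to 219 days
print(np.mean(ratios), np.std(ratios))                 # the text reports 1.02 +/- 0.11
```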
Furthermore, seven more points should be noted. First, in order to find clearer effects of TDE contributions on long-term AGN variability, a time duration as long as 13 years is used for \(L_{\rm bol,\,\,t}\). If shorter time durations were applied, the dependence of \(\tau_{TN}\) on \(R_{TN}\) would show larger scatter, because probably only part of the TDE contribution would be covered by \(L_{\rm bol,\,\,t}\). Moreover, the simulated light curves are based on BH masses smaller than \(10^{8}\)M\({}_{\odot}\): for BH masses larger than \(10^{8}\)M\({}_{\odot}\), more massive but shorter-lived main-sequence stars would be necessary to produce suitable TDEs, otherwise the tidal disruption radius would be smaller than the event horizon of the central BH. Therefore, the large BH mass is chosen to be \(5\times 10^{7}\) M\({}_{\odot}\) in the manuscript.
Second, as discussed and shown in MacLeod et al. (2010); Kelly, Bechtold & Siemiginowska (2009), the parameters \(\sigma\) and \(\tau\) are probably connected, but the connection is quite loose. Therefore, no connection between \(\sigma\) and \(\tau\) is imposed when the third kind of mock light curves \(L_{\rm bol,\,\,t}\) is simulated. With similar considerations, owing to the loose dependence of the energy transfer efficiency on BH mass discussed in Davis & Laor (2011), the energy transfer efficiency \(\eta\) is randomly selected to be 0.06, 0.15 or 0.3; otherwise, the expected energy transfer efficiency around \(M_{BH}=10^{6}\)M\({}_{\odot}\) would be as small as 0.008, an extremely small value.
Third, besides the BH masses and intrinsic variability timescales, the other parameters of the TDE model are not considered further. Actually, parameters such as the stellar mass \(M_{\bullet}\) and the impact parameter \(\beta\) should affect \(\tau_{TN}\), because larger \(M_{\bullet}\) and \(\beta\) commonly lead to stronger TDE bolometric luminosities. As examples, Fig. 9 shows the dependence of \(\tau_{TN}\) on the stellar mass \(M_{\bullet}\) and on the impact parameter \(\beta\) for the light curves \(L_{\rm bol,\,\,t}\) simulated as \(L_{\rm bol,\,\,t,\,N5548}\) plus TDE contributions. For the dependence on \(M_{\bullet}\), there are positive correlations with Spearman rank correlation coefficients of about 0.35 and 0.61 (\(P_{null}\,<\,10^{-15}\)) for \(\gamma\,=\,4/3\) and \(\gamma\,=\,5/3\), respectively. For the dependence on \(\beta\), there are positive correlations with coefficients of about 0.76 and 0.54 (\(P_{null}<10^{-15}\)) for \(\gamma\,=\,4/3\) and \(\gamma\,=\,5/3\), respectively. Even for \(M_{\bullet}\) around one solar mass or \(\beta\) only slightly larger than 1, \(\tau_{TN}\) can be well above 2. Certainly, for smaller BH masses the positive correlations with \(M_{\bullet}\) and \(\beta\) should be weaker. However, unlike the central BH masses and redshifts of normal AGN, which can be well estimated through spectroscopic features, \(M_{\bullet}\) and \(\beta\) cannot be measured in advance, and the main objective here is to provide clues for detecting probable hidden TDEs in normal AGN. The probability of more massive main-sequence stars being tidally disrupted with larger \(\beta\) in normal AGN is not the objective of the manuscript; if a more massive main-sequence star were tidally disrupted with larger \(\beta\) in a normal broad-line AGN, the expected hidden TDE would simply be easier to detect. Therefore, the effects of the parameters of the theoretical TDE model are not discussed further in the manuscript.
Fourth, as discussed in Kozlowski (2017), a shorter time baseline leads to an underestimated process parameter \(\tau\) in the DRW/CAR process. Considering the longer \(\tau\) expected from larger TDE contributions, the intrinsic values of \(\tau\) should be larger than the values currently determined for the mock light curves. Therefore, combined with the input value of \(\tau\) for \(L_{\rm bol,\,\,t,\,CAR}\), larger values of \(\tau_{TN}\) could be expected, leading to a more apparent dependence of \(\tau_{TN}\) on \(R_{TN}\) and thus supporting our final conclusions. Meanwhile, adopting the criterion of Kozlowski (2017) that process parameters are well estimated for light curves with \(\tau/t_{exp}\,<\,0.1\) (the process parameter \(\tau\) divided by the time baseline), the determined parameters are not biased for the mock light curves created with \(\tau_{0}\,=\,200\) days (\(\tau/t_{exp}\,\sim\,200\,{\rm days}/13\,{\rm years}\,\sim\,0.04\,<\,0.1\)). Therefore, even considering only the results based on \(\tau_{0}\,=\,200\) days, similar conclusions on the effects of TDE contributions can be drawn.
Fifth, the standard theoretical TDE model discussed in Guillochon & Ramirez-Ruiz (2013); Guillochon, Manukian & Ramirez-Ruiz (2014); Mockler, Guillochon & Ramirez-Ruiz (2019) is applied in the manuscript, leading to the expected \(t^{-5/3}\) decline at late times. However, besides the variability pattern expected from the standard TDE model, there are slow TDEs, such as those discussed in Graham et al. (2017), which probably show a shallower decline closer to \(t^{-1}\) and could therefore have much longer time durations than standard TDEs. However, based on the results discussed
Figure 7: Dependences of \(\tau_{TN}\) on \(R_{TN}\) for the mock light curves based on the long-term variability \(L_{\rm bol,\,t,\,CAR}\). The eight panels show the dependence with the maximum Spearman rank correlation coefficient among the 18 dependences in cases-7-2-4 (with \(M_{BH}=10^{7}{\rm M}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=4\)), cases-7-6-4 (with \(M_{BH}=10^{7}{\rm M}_{\odot}\), \(\tau_{0}=600\) days and \(3\times\gamma=4\)), cases-7-2-5 (with \(M_{BH}=10^{7}{\rm M}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=5\)), cases-7-6-5 (with \(M_{BH}=10^{7}{\rm M}_{\odot}\), \(\tau_{0}=600\) days and \(3\times\gamma=5\)), cases-7.7-2-4 (with \(M_{BH}=5\times 10^{7}{\rm M}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=4\)), cases-7.7-6-4 (with \(M_{BH}=5\times 10^{7}{\rm M}_{\odot}\), \(\tau_{0}=600\) days and \(3\times\gamma=4\)), cases-7.7-2-5 (with \(M_{BH}=5\times 10^{7}{\rm M}_{\odot}\), \(\tau_{0}=200\) days and \(3\times\gamma=5\)) and cases-7.7-6-5 (with \(M_{BH}=5\times 10^{7}{\rm M}_{\odot}\), \(\tau_{0}=600\) days and \(3\times\gamma=5\)), as listed in the title of each panel. The values of \(z\) and \(\eta\) are also listed in the title of each panel. In each panel, as in Fig. 6, pluses in red and in dark green show the results with SNR larger than 55 and smaller than 55, respectively, and, due to the large number of dense data points, error bars with uncertainties of about 20% - 25% are not plotted. In each panel, the calculated correlation coefficient (Coe) for the correlation between \(\tau_{TN}\) and \(R_{TN}\) (\(R_{TN}>R_{cri}\)) is marked in blue characters in the top-left region. In each panel with a correlation coefficient larger than 0.3, the solid blue line shows the linear description \(\log(\tau_{TN})=A+B\log(R_{TN})\) of the correlation for \(R_{TN}>R_{cri}\), and the determined parameters \(A\) and \(B\) are listed in blue characters in the top-left region.
above, a more apparent difference between the characteristic timescales of TDE variability and those of intrinsic AGN variability should lead to a more apparent dependence of \(\tau_{TN}\) on \(R_{TN}\). Therefore, although slow TDEs are rare and are not considered here, including them would only provide more apparent clues in support of our final conclusions.
Sixth, as discussed more recently in Burke et al. (2022), host galaxy dilution could have strong effects on the determined process parameters. However, once the host galaxy contribution is accepted as a constant, non-variable component (which is almost inevitable), the dilution should have little effect on the process parameter \(\tau\), because the host galaxy contribution can be absorbed into the parameter \(L_{b0}\) in Equation (12) above. Since the manuscript mainly considers the ratio of \(\tau\) between light curves with and without TDE contributions, host galaxy dilution has little effect on our final conclusions.
Seventh, although all the quasars with measured continuum luminosities are collected from Shen et al. (2011) to determine the dependence of bolometric luminosity on redshift shown in Fig. 1, some weak quasars are not included in the collected sample because of their lower continuum emission. However, considering the very loose (at most weakly positive) dependence of the DRW process parameter \(\tau\) on luminosity discussed in Kelly, Bechtold & Siemiginowska (2009); MacLeod et al. (2010), lower bolometric luminosities should lead to unchanged (or lower) values of the DRW process parameter \(\tau\) of intrinsic AGN variability. There
Figure 8: The top left panel shows the properties of the Spearman rank correlation coefficients for all the dependences of \(\tau_{TN}\) on \(R_{TN}\) for the cases with different \(M_{BH}\), \(\tau_{0}\), \(z\), \(\gamma\) and \(\eta\). The top right panel shows the properties of \(B\) in the formula \(\log(\tau_{TN})=A+B\times\log(R_{TN})\) for all the dependences. The bottom panel shows the legends used in the top panels. The four numbers in 'cases-n0-n1-n2-n3' shown in the legends have the following meanings: 'n0' is the logarithmic BH mass, 'n1' is the value of \(\tau_{0}/100\), 'n2' is the value of \(3\times\gamma\) and 'n3' is the value of \(\eta\); for example, 'cases-7.7-2-4-0.30' means the 6 dependences (corresponding to the six values of redshift) of \(\tau_{TN}\) on \(R_{TN}\) for the case with \(M_{BH}=5\times 10^{7}\)M\({}_{\odot}\), \(\tau_{0}=200\) days, \(\gamma=4/3\) and \(\eta=0.30\). In the top right panel, because many dependences have coefficients smaller than 0.3, some dependences are over-plotted at \(B=0\). In the top left panel, the horizontal red line marks the position of a Spearman rank correlation coefficient of 0.3.
fore, even considering the contributions of the missing weak quasars, the conclusions would not change if no dependence of the DRW process parameter \(\tau\) on luminosity is assumed, and the clues supporting our final conclusions would only become more apparent if a weakly positive dependence is assumed.
Based on the expected effects of TDE contributions on long-term AGN variability, checking the variability properties of normal AGN in different epochs could provide clues to probable hidden central TDEs in normal AGN with apparent intrinsic variability. In other words, the results in the manuscript provide an interesting and practical method to detect probable hidden TDEs in normal AGN with apparent intrinsic variability, especially for AGN with smaller intrinsic variability timescales and BH masses larger than \(10^{7}\)M\({}_{\odot}\). Reporting detected hidden TDEs in normal AGN through quite different values of \(\tau\) in different epochs would provide robust evidence supporting the results in the manuscript. Considering the TDE time durations of several years expected for \(M_{BH}\sim 10^{7}\)M\({}_{\odot}\), baselines of about (or longer than) 10 years are necessary for light curves used to search for hidden TDEs in broad-line AGN. Therefore, combining light curves from different sky survey projects is the most efficient way to build light curves with baselines longer than 10 years. Unfortunately, light curves from different sky survey projects have quite different qualities, such as different baselines, time steps, SNRs, covered wavelength ranges and filter transmission curves. Before checking for possibly different intrinsic variability properties in different epochs using data from different sky survey projects, the effects of these different qualities must first be clearly determined. At the current stage, long-term light curves from CSS and from ZTF for a large sample of SDSS quasars have been collected, and the basic results are being written up. In the near future, the effects of different data qualities on the variability properties of light curves from CSS and ZTF, together with a small sample of quasars showing quite different \(\tau\) in their CSS and ZTF light curves, will be discussed and reported as soon as possible. Unfortunately, we cannot currently give a clear estimate of the detection rate of hidden TDEs through combinations of light curves from different sky survey projects, mainly because we do not know which AGN-related parameters dominate the probable TDE contributions. Nevertheless, the results in the manuscript show a practicable way to detect hidden TDEs in normal broad-line AGN with apparent variability, and detecting hidden TDEs in broad-line AGN through different variability properties in different epochs is our main objective in the near future.
## 5 Conclusions
Finally, we give our main conclusions. Based on AGN variability templates simulated by the CAR process and on variability from the theoretical TDE model, the effects of TDE contributions on the long-term variability properties of normal AGN with apparent intrinsic variability can be well estimated. Stronger TDE contributions lead to a longer variability timescale \(\tau\) of the observed long-term AGN variability, especially for AGN with smaller intrinsic variability timescales and BH masses larger than \(10^{7}\)M\({}_{\odot}\). Therefore, the re
Figure 9: Dependence of \(\tau_{TN}\) on the stellar mass \(M_{\bullet}\) (top panels) and on the impact parameter \(\beta\) (bottom panels) for the light curves \(L_{\mathrm{bol,\;t}}\) simulated as \(L_{\mathrm{bol,\;t,\;N5548}}\) plus TDE contributions with \(\gamma\;=\;4/3\) (left panels) and \(\gamma\;=\;5/3\) (right panels). In each panel, horizontal dashed red lines show \(\tau_{TN}\;=\;2\) and \(\tau_{TN}\;=\;5\), respectively.
sults provide an interesting, forward-looking and practicable method to detect hidden central TDEs in normal broad-line AGN, based on quite different variability properties in different epochs, especially in broad-line AGN with shorter intrinsic variability timescales and BH masses larger than \(10^{7}\)M\({}_{\odot}\).
## Acknowledgements
Zhang gratefully acknowledges the anonymous referee for constructive comments and suggestions that greatly improved the paper. Zhang gratefully acknowledges the kind support from the Chinese grants NSFC-12173020 and NSFC-12373014. The paper has made use of the TDEFIT code ([https://tde.space/tdefit/](https://tde.space/tdefit/)), a piece of open-source software written by James Guillochon for model-fitting photometric light curves of tidal disruption events, and also of the MOSFiT (Modular Open Source Fitter for Transients) code ([https://mosfit.readthedocs.io/](https://mosfit.readthedocs.io/)), a Python 2.7/3.x package for fitting, sharing, and estimating the parameters of transients via user-contributed transient models. The paper has made use of the data of NGC 5548 from the AGN Watch project ([https://www.asc.ohio-state.edu/astronomy/agnwatch/](https://www.asc.ohio-state.edu/astronomy/agnwatch/)), a consortium of astronomers who have studied the inner structure of AGN through continuum and emission-line variability. The paper has made use of the public FITEXY code from the IDL Astronomy User's Library ([https://idlastro.gsfc.nasa.gov/ftp/pro/math/](https://idlastro.gsfc.nasa.gov/ftp/pro/math/)), the MCMC code emcee ([https://emcee.readthedocs.io/en/stable/index.html](https://emcee.readthedocs.io/en/stable/index.html)), and the MPFIT package ([https://pages.physics.wisc.edu/~craigm/idl/cmpfit.html](https://pages.physics.wisc.edu/~craigm/idl/cmpfit.html)).
**Table 2.** Spearman rank correlation coefficients (and, for coefficients larger than 0.3, the slopes \(B\) of \(\log(\tau_{TN})\,=\,A\,+\,B\log(R_{TN})\)) for the dependences of \(\tau_{TN}\) on \(R_{TN}\), listed for each combination of \(\tau_{0}\), \(\eta\), \(z\) and BH mass (\(M_{6}\,=\,1,\ 10,\ 50\), in units of \(10^{6}\)M\({}_{\odot}\)), separately for \(\gamma\,=\,4/3\) and \(\gamma\,=\,5/3\). [The numerical entries of the table are not recoverable from the source text.]
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author ([email protected]).
|
2307.02522 | High-Energy Collision of Quarks and Mesons in the Schwinger Model: From Tensor Networks to Circuit QED | With the aim of studying nonperturbative out-of-equilibrium dynamics of high-energy particle collisions on quantum simulators, we investigate the scattering dynamics of lattice quantum electrodynamics in 1+1 dimensions. Working in the bosonized formulation of the model and in the thermodynamic limit, we use uniform-matrix-product-state tensor networks to construct multi-particle wave-packet states, evolve them in time, and detect outgoing particles post collision. This facilitates the numerical simulation of scattering experiments in both confined and deconfined regimes of the model at different energies, giving rise to rich phenomenology, including inelastic production of quark and meson states, meson disintegration, and dynamical string formation and breaking. We obtain elastic and inelastic scattering cross sections, together with time-resolved momentum and position distributions of the outgoing particles. Furthermore, we propose an analog circuit-QED implementation of the scattering process that is native to the platform, requires minimal ingredients and approximations, and enables practical schemes for particle wave-packet preparation and evolution. This study highlights the role of classical and quantum simulation in enhancing our understanding of scattering processes in quantum field theories in real time. | Ron Belyansky, Seth Whitsitt, Niklas Mueller, Ali Fahimniya, Elizabeth R. Bennewitz, Zohreh Davoudi, Alexey V. Gorshkov | 2023-07-05T18:00:00Z | http://arxiv.org/abs/2307.02522v2 | # High-Energy Collision of Quarks and Hadrons in the Schwinger Model:
###### Abstract
With the aim of studying nonperturbative out-of-equilibrium dynamics of high-energy particle collisions on quantum simulators, we investigate the scattering dynamics of lattice quantum electrodynamics in 1+1 dimensions. Working in the bosonized formulation of the model, we propose an analog circuit-QED implementation that is native to the platform, requires minimal ingredients and approximations, and enables practical schemes for particle wave-packet preparation and evolution. Furthermore, working in the thermodynamic limit, we use uniform-matrix-product-state tensor networks to construct multi-particle wave-packet states, evolve them in time, and detect outgoing particles post collision. This facilitates the numerical simulation of scattering experiments in both confined and deconfined regimes of the model at different energies, giving rise to rich phenomenology, including inelastic production of quark and meson states, meson disintegration, and dynamical string formation and breaking. We obtain elastic and inelastic scattering cross sections, together with time-resolved momentum and position distributions of the outgoing particles. This study highlights the role of classical and quantum simulation in enhancing our understanding of scattering processes in quantum field theories in real time.
+
Footnote †: preprint: UMD-PP-023-02, IQuS@UW-21-050
_Introduction.--_Scattering processes in nuclear and high-energy physics play an essential role in studies of hadronic and nuclear structure and of exotic phases of matter, and in searches for new particles and interactions. Current and future frontiers are the Large Hadron Collider, the Relativistic Heavy-Ion Collider [1; 2], the Electron-Ion Collider [3; 4], and neutrino-nucleus scattering at the Deep Underground Neutrino Experiment [5; 6; 7; 8]. Collisions in these experiments involve hadronic initial states and complex many-particle final states. In addition, scattering proceeds in a multi-stage process and may encompass a wide range of phenomena, including the formation of exotic matter [9; 2], such as quark-gluon plasma [10; 11], thermalization [12; 13], quark and hadron fragmentation [14; 15], and quark-gluon-plasma hadronization [16; 17]. Ideally, such rich phenomenology should be grounded in first-principles quantum-chromodynamics (QCD) descriptions. While perturbation theory and QCD factorization [18; 19; 20; 21], as well as the nonperturbative method of lattice QCD [22; 23; 24; 25; 26; 27; 28; 29; 30], have brought about impressive advances, a full understanding of scattering processes in QCD at all stages and energies is still lacking.
First-principles simulations of high-energy particle scattering are considered a prime application for quantum computers and simulators [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. A central challenge is that realistic scattering experiments involve a vast range of spatial and temporal scales, placing their simulation beyond the capabilities of current digital quantum computers. Analog quantum simulators may enable simulating larger Hilbert spaces and longer times, but concrete proposals are lacking for analog simulation of scattering processes in quantum field theories. At the same time, classical tensor-network methods have been shown to successfully capture ground-state [43], and to some degree dynamical [44], phenomena in gapped theories, including scattering processes [45; 46; 47; 48], particularly in \(1+1\) dimensions, but their reach remains limited in simulating general scattering problems in quantum field theories. This manuscript advances both analog quantum simulation and tensor-network-based classical simulation for a prototypical model of QCD, the lattice Schwinger model, i.e., lattice quantum electrodynamics (QED) in 1+1 dimensions. Previous tensor-network [45; 46; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60] and quantum-simulation [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85] studies of the model focused on formulations involving fermion (or qubit) degrees of freedom (with or without gauge fields). Motivated to address, more generally, theories with bosonic content, here we instead consider the bosonic dual of the theory, a particular type of a massive Sine-Gordon model.
Our first objective is to propose an analog circuit-QED implementation of the bosonized lattice Schwinger model. Recently, the bosonic dual was shown to be approximately realizable by circular Rydberg states [86]. In contrast, we will show that circuit QED's basic components, its native bosonic degrees of freedom, and the available ultrastrong coupling [87; 88] allow the model to
be implemented in a simple circuit with minimal ingredients and approximations, making it particularly suitable for near-term quantum simulation. Our second objective is a numerical exploration of high-energy real-time scattering phenomenology in the model. We work in the nonperturbative regime, near the confinement-deconfinement critical point and in the thermodynamic limit, using uniform matrix product states (uMPS) [89], which in turn allows for the construction [47; 48] and collision of numerically-exact quasiparticle wave packets in the interacting theory at various energies, resulting in nontrivial inelastic effects. In contrast, earlier works were limited to elastic scattering at either weak (nearly free fermions) [46] or strong (nearly free bosons) [45] coupling regimes. We focus on a detailed spatial, temporal, and momentum-resolved diagnostic of elastic and inelastic processes of quark and meson states, involving phenomena such as meson disintegration, dynamical string formation and breaking, and the creation of quark and (excited) meson states. We also investigate the role of entanglement in high-energy scattering [90; 91; 92; 93; 94; 95; 96; 97].
_Model and circuit-QED implementation.--_The massive Schwinger model has the Lagrangian density
\[\mathcal{L}=\bar{\psi}\big{(}i\gamma^{\mu}\partial_{\mu}-e\gamma^{u}A_{\mu}-m \big{)}\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \tag{1}\]
where \(\psi(x,t)\) is a 2-component Dirac spinor, \(\gamma^{0}=\sigma^{z},\gamma^{1}=i\sigma^{y}\) with \(\sigma^{z},\sigma^{y}\) being the Pauli matrices, \(m\) is the mass, \(e\) is the electric charge, and \(A_{\mu}(x,t)\) and \(F_{\mu\nu}(x,t)\) are the gauge field and the field-strength tensor, respectively. Equation (1) is dual to a bosonic scalar field theory with the Hamiltonian [98; 99]
\[H=\int dx\,\bigg{[}\frac{\Pi^{2}}{2}+\frac{(\partial_{x}\phi)^{2}}{2}+\frac{M ^{2}\phi^{2}}{2}-u\cos(\beta\phi-\theta)\bigg{]}, \tag{2}\]
where \(\phi(x)\) and \(\Pi(x)\) are the scalar field and conjugate momentum, respectively, \(M=e/\sqrt{\pi}\), \(\beta=\sqrt{4\pi}\), and \(u=\frac{e^{\gamma}}{2\pi}\Lambda m\), where \(\gamma\) is Euler's constant and \(\Lambda\) is a UV scale (we assume \(\hbar=c=1\) throughout, where \(c\) is the speed of light). Finally, \(\theta\in(-\pi,\pi]\), with its origin explained in Ref. [99] and the Supplemental Material (SM) [100]. We work with a lattice discretization of Eq. (2) given by
\[H=\chi\sum_{x}\bigg{[}\frac{\pi_{x}^{2}}{2}+\frac{(\phi_{x}-\phi_{x-1})^{2}}{ 2}+\frac{\mu^{2}\phi_{x}^{2}}{2}-\lambda\cos(\beta\phi_{x}-\theta)\bigg{]}, \tag{3}\]
where \(x\) labels lattice sites, \([\phi_{x},\pi_{y}]=i\delta_{xy}\), \(\chi=1/a\), \(\mu^{2}=M^{2}a^{2}\), \(\lambda=ua^{2}\), and \(a\) is the lattice spacing. We set \(a=1\), with the continuum limit corresponding to \(\mu,\lambda\to 0\). Quantities are assumed in lattice units throughout.
Remarkably, Eq. (3) can be exactly realized in a simple superconducting circuit, shown in Fig. 1. The circuit can be regarded as a chain of inductively coupled fluxoniums [101]. It consists of nodes \(i\), each corresponding to a lattice site with a local bosonic degree of freedom described by flux \(\phi_{i}\) and charge \(\pi_{i}\), composed of a parallel arrangement of a capacitor, an inductor, and a Josephson junction with respective energies \(E_{C},E_{L}\), and \(E_{J}\) [102]. Further, nodes are coupled by inductors with energy \(E_{L^{\prime}}\). The circuit parameters are related to those of Eq. (3) via \(\chi=\frac{8E_{C}}{\beta^{2}}\), \(\frac{E_{L^{\prime}}\beta^{4}}{8E_{C}}=1\), \(\mu^{2}=\frac{E_{L}\beta^{4}}{8E_{C}}\), \(\lambda=\frac{E_{J}\beta^{2}}{8E_{C}}\), and \(\theta=\Phi_{\text{ext}}-\pi\), where \(\Phi_{\text{ext}}\) is a tunable external flux threading each loop, and \(\beta\neq 0\) can be chosen arbitrarily (see the SM [100] for the full derivation). In fact, when \(\beta\neq\sqrt{4\pi}\), the circuit describes a more general model known as the massive Thirring-Schwinger model [103]. In the SM [100], we present a method for preparing initial wave packets of bosonic particles using two ancillary qubits, hence providing a complete protocol for preparation and evolution of mesonic wave packets for a scattering experiment. Measurements of the local field \(\phi_{x}\) [104] or the output field at the edges [105; 106] can be performed using standard techniques.
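As a quick consistency check of this mapping, the sketch below converts a set of illustrative (not experimentally prescribed) circuit energies into the lattice parameters of Eq. (3), using the relations quoted above.

```python
import numpy as np

beta = np.sqrt(4 * np.pi)          # Schwinger-model point; beta != 0 is otherwise arbitrary
E_C, E_L, E_J = 1.0, 0.02, 0.05    # illustrative circuit energies (arbitrary units)
Phi_ext = np.pi                    # external flux threading each loop

E_Lp = 8 * E_C / beta**4           # coupling inductor fixed by E_L' beta^4 / (8 E_C) = 1
chi = 8 * E_C / beta**2            # overall energy scale chi = 1/a
mu2 = E_L * beta**4 / (8 * E_C)    # dimensionless mass term mu^2 = M^2 a^2
lam = E_J * beta**2 / (8 * E_C)    # dimensionless cosine coefficient lambda = u a^2
theta = Phi_ext - np.pi            # background-field angle

print(f"chi = {chi:.3f}, mu^2 = {mu2:.3f}, lambda = {lam:.3f}, theta = {theta:.3f}")
```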
To gain insight into the anticipated phenomenology, we proceed with a numerical study of the collision dynamics in the lattice Schwinger model. While quantitative predictions for the continuum theory require an extrapolation procedure [56; 107], here only fixed, but sufficiently small, values of \(\mu\) and \(\lambda\) are considered. The model has two dimensionless parameters, the ratio \(e/m\), corresponding to \(\mu/\lambda\) in Eq. (3), and the angle \(\theta\) representing a constant background electric field \(E_{\theta}=\frac{e}{2\pi}\theta\). Gauss's
Figure 1: Lumped-element circuit diagram that realizes Eq. (3).
Figure 2: (a) Sketch of the phase diagram of the massive Schwinger model as a function of \(e/m\) (corresponding to \(\mu/\lambda\)) and \(\theta\). The red dot is the Ising critical point, where the deconfined phase (red line) terminates. Points (b) and (c) correspond to the two regimes considered in the main text. Panels (b,i) and (c,i) show the corresponding scalar potential \(V(\phi)=\frac{1}{2}\mu^{2}\phi^{2}-\lambda\cos(\sqrt{4\pi}\phi-\theta)\) [Eq. (3)]. Panels (b,ii) and (c,ii) show both the effective potential between the quarks [Eq. (4)] (green) and the electric/scalar-field distributions (blue) due to the quarks and mesons.
law, \(\partial_{x}E=e\psi^{\dagger}\psi\), ties the total electric field \(E_{T}=E_{\theta}+E\) to the dynamical charges, and equals \(E_{T}=\frac{e}{\sqrt{\pi}}\phi\) in the bosonic dual [108].
Two regimes will be studied near the \(\mathbb{Z}_{2}\) critical point, shown in Fig. 2 as (b) and (c). Point (b) is in the deconfined phase [red line at \(\theta=\pi\) in Fig. 2(a) terminating at the Ising critical point], where the ground state is two-fold degenerate [Fig. 2(b,i)]. Here, fundamental excitations are "half-asymptotic" [99] fermions ("quarks"), appearing as topological kinks in the bosonic dual [see Fig. 2(b,ii)]. Point (c) in Fig. 2(a) is in the confined phase, with a unique ground state [Fig. 2(c,i)] and quark-antiquark bound-state ("meson") excitations.
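The two regimes can be visualized directly from the scalar potential quoted in the caption of Fig. 2. The short sketch below (with the \(\mu^{2}=0.1\), \(\lambda=0.5\) values used later in the text, and an illustrative \(\varepsilon\)) locates the minima of \(V(\phi)\), showing the two degenerate vacua at \(\theta=\pi\) and the unique ground state plus false vacuum at \(\theta=\pi-\varepsilon\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu2, lam, beta = 0.1, 0.5, np.sqrt(4 * np.pi)   # values from the text; epsilon below is illustrative

def V(phi, theta):
    return 0.5 * mu2 * phi**2 - lam * np.cos(beta * phi - theta)

for theta, label in [(np.pi, "deconfined (theta = pi)"),
                     (np.pi - 0.07, "confined  (theta = pi - 0.07)")]:
    # locate local minima on a grid, then refine each with a bounded local search
    grid = np.linspace(-2.0, 2.0, 2001)
    vals = V(grid, theta)
    rough = [grid[i] for i in range(1, len(grid) - 1)
             if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]
    minima = [minimize_scalar(V, args=(theta,), bounds=(m - 0.3, m + 0.3),
                              method="bounded").x for m in rough]
    print(label, " minima at phi =", np.round(minima, 3),
          " V =", np.round([V(m, theta) for m in minima], 4))
```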
_Quark-antiquark scattering.--_We first consider quark-antiquark scattering in the deconfined phase [Fig. 2(b)]. Constructing a uMPS representation of the two ground states [109], we use the uMPS quasiparticle ansatz [110; 111] to obtain single-particle energy-momentum eigenstates with dispersion \(\mathcal{E}(p)\) and momenta \(p\in[-\pi,\pi)\) (see the SM [100]). From this, we construct two Gaussian wave packets, localized in momentum and position space, centered at opposite momenta \(\pm p_{0}\). The initial state consists of a finite nonuniform region of 150-300 sites containing the two wave packets, and is surrounded (on the left and the right) by the uniform vacuum [we choose the vacuum with positive \(E_{T}\), i.e., the right minimum of Fig. 2(b,i)]. We then time-evolve this state under the Hamiltonian in Eq. (3), while dynamically expanding the nonuniform region [112; 113; 114] up to 600-1300 sites (see the SM [100] for a more detailed description). By working near the critical point, where the quark mass \(m_{q}\equiv\mathcal{E}(p=0)\) (i.e., the gap) is small, one can consider momenta up to \(|p_{0}|\lesssim 0.8\). These are sufficiently small to keep the physics in the long-wavelength regime of the lattice model, where the dispersion is approximately relativistic \(\mathcal{E}(p)\approx(p^{2}+m_{q}^{2})^{\frac{1}{2}}\), but highly relativistic center-of-mass (CM) energies \(\mathcal{E}_{\text{CM}}\equiv 2\mathcal{E}(p_{0})\lesssim 30m_{q}\) are achieved.
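As a rough illustration of the wave-packet construction (not of the uMPS implementation itself), the snippet below assembles Gaussian momentum-space amplitudes over the Brillouin zone and evaluates the corresponding center-of-mass energy and group velocity from the relativistic dispersion quoted above; the quark mass and packet width are placeholder values.

```python
import numpy as np

m_q, p0, sigma_p = 0.1, 0.6, 0.05              # placeholder quark mass, packet center and width
p = np.linspace(-np.pi, np.pi, 1200, endpoint=False)   # lattice momenta
E = np.sqrt(p**2 + m_q**2)                     # long-wavelength (relativistic) dispersion
c = np.exp(-(p - p0)**2 / (4 * sigma_p**2))    # Gaussian wave-packet amplitudes
c /= np.sqrt(np.sum(np.abs(c)**2))             # normalization
v_group = p0 / np.sqrt(p0**2 + m_q**2)         # dE/dp at p0
print("E_CM / m_q =", round(2 * np.sqrt(p0**2 + m_q**2) / m_q, 2),
      ", group velocity =", round(v_group, 3))
```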
Figure 3(a) shows the space-time distribution of the electric field for collisions at three representative energies, \(\mathcal{E}_{\text{CM}}/m_{q}=11.4\), \(23.0\), and \(28.8\). Initially, the quark and antiquark are separated, resembling Fig. 2(b,ii), with electric field between the charges equal in magnitude but opposite in sign to the field outside [the two regions correspond to the two degenerate ground states in Fig. 2(b,i)]. Under time evolution, the two charges propagate ballistically, shrinking the negative-field region until they collide. During the collision, the particles bounce off each other and reverse their propagation direction elastically, the sole process at lower energies. Specifically, as can be seen in Fig. 3(a), at the lowest energy, \(\mathcal{E}_{\text{CM}}/m_{q}=11.4\), the post-collision value of \(E_{T}\) between the charges is practically equal to the pre-collision value. For the higher-energy collisions, \(\mathcal{E}_{\text{CM}}/m_{q}=23.0\) and \(28.8\), an increase of the post-collision electric field is observed, signalling additional charge production.
While our numerical approach does not rely on strong- or weak-coupling expansion, the relevant scattering channels can be understood from weak-coupling arguments as follows. In the SM [100], we derive, in the nonrelativistic limit, an effective potential between opposite charges at the lowest order in \(e/m\) starting from Eq. (1), which reads (in the center-of-mass frame)
\[V_{\text{eff}}(x)=\frac{e^{2}}{2}\bigg{(}|x|-\frac{\theta}{\pi}x\bigg{)}+\frac {e^{2}}{4m^{2}}\delta(x)\,. \tag{4}\]
Here, \(x\) is the distance between charges. For \(\theta\neq\pi\), one recovers linear confinement [Fig. 2(c,ii)] [108; 115; 50; 99], while at \(\theta=\pi\), charges experience short-range _repulsion_ due to the delta function in Eq. (4) [Fig. 2(b,ii)]. This implies the absence of stable bound states (mesons) in the deconfined phase, which is confirmed numerically in the SM [100]. All possible scattering channels are, therefore, (even-numbered) multi-quark states. The lowest-order channel after the elastic one (\(q\bar{q}\to q\bar{q}\)) is the four-quark production (\(q\bar{q}\to q\bar{q}q\bar{q}\)), exhibiting quark fragmentation. In the latter case, the two inner particles screen the electric field produced by the outer two, consistent with the two rightmost panels in Fig. 3(a).
Elastic and inelastic processes are also distinguished by the production of von Neumann entanglement entropy [\(S_{\text{vN}}(x,t)=-\operatorname{tr}(\rho_{>x}(t)\ln\rho_{>x}(t))\) with \(\rho_{>x}(t)\) being the reduced density matrix for sites \(y>x\)] across the collision point (\(x=0\)), shown in Fig. 3(b) as a function of time. Figure 3(c) also shows the asymptotic (\(t\to\infty\)) entanglement generated as a function of the collision energy. The
Figure 3: Quark-antiquark scattering in the deconfined phase. (a) Time evolution of the electric field for different center-of-mass energies. (b) Time evolution of the von Neumann entanglement entropy for a cut at \(x=0\), for the same three collisions as in (a). (c) Elastic scattering probability (right, blue) and asymptotic von Neumann entanglement entropy for the \(x=0\) cut (left, green) as a function of the center-of-mass energy. The parameters are \(\mu^{2}=0.1\) and \(\lambda=0.5\) [see Eq. (3)].
entanglement entropy is maximal during the collision but quickly approaches a constant afterwards. At lower energies, it nearly returns to its pre-collision (vacuum) value. A small increase is observed because different momentum components of the wave packets acquire slightly different elastic scattering phase shifts, making the two scattered wave packets slightly entangled [48]. At higher energies, however, significant net entanglement is generated, indicating inelastic particle production [32].
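For reference, the bipartite entropy used here can be computed from a pure state by a singular-value decomposition across the cut. The helper below is a generic sketch (exact state vectors rather than uMPS), verified on a Bell state.

```python
import numpy as np

def entanglement_entropy(psi, dims, cut):
    """S = -tr(rho ln rho) for the reduced state of sites 0..cut, given a pure state `psi`
    on a chain with local dimensions `dims`."""
    d_left = int(np.prod(dims[:cut + 1]))
    d_right = int(np.prod(dims[cut + 1:]))
    s = np.linalg.svd(psi.reshape(d_left, d_right), compute_uv=False)
    p = s**2
    p = p[p > 1e-14]
    return float(-(p * np.log(p)).sum())

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)     # sanity check: expect ln 2
print(entanglement_entropy(bell, [2, 2], cut=0), np.log(2))
```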
Finally, we compute elements of the momentum-resolved scattering S-matrix by projecting the post-collision state onto a basis of asymptotic two-particle states (see the SM [100] for details). This basis is constructed from the single-particle wavefunctions, requiring the particles to be widely separated to ensure orthogonality and avoid interaction effects. For \(2\to 2\) scattering, this is guaranteed sufficiently far from the collision point, but not for higher-order scattering. From this, we obtain the elastic scattering probability \(P(q\bar{q})\), displayed in Fig. 3(c), as a function of the collision energy.
The elastic scattering probability is near unity at lower energies, decreasing monotonically, falling below \(0.5\) around \(\mathcal{E}_{\text{CM}}/m_{q}\gtrsim 28\). Interestingly, the energy required for significant inelastic scattering is many times the threshold energy (\(\mathcal{E}_{\text{CM}}=4m_{q}\)). While we did not obtain the precise contribution of the four-quark (or higher-quark-number) states [116], the decrease of \(P(q\bar{q})\) confirms the presence of significant inelastic scattering, consistent with the increase in entanglement entropy in Fig. 3(b) and the screening of \(E_{T}\) in Fig. 3(a).
_Meson-meson scattering.--_We next consider scattering in the confined phase [Fig. 2(c)] at \(\theta=\pi-\varepsilon\). We choose \(\varepsilon\ll 1\), which gives rise to weak confinement of quarks, but keeps us close to the critical point (all other parameters are unchanged). In contrast to the deconfined regime, the interplay of high-energy and weak confinement yields rich behavior following the collision. There are multiple stable meson excitations, which are labeled by \(\pi_{j}\) (\(j=1,2,...\)), with increasing masses \(m_{\pi_{j}}\). Here, we consider \(\pi_{1}\pi_{1}\) collisions, with meson wave packets prepared similarly as before, centered at \(p_{0}=\pm 0.6\) with \(\mathcal{E}_{\text{CM}}/m_{\pi_{1}}=6.84\) (\(5.95\)) for \(\varepsilon=0.04\) (\(0.07\)).
The electric-field evolution for the two collisions is displayed in Fig. 4(a,i). Before the collision, the background electric field is only locally disturbed by the charge-neutral mesons [Fig. 2(c,ii)], unlike in the deconfined case where the presence of free quarks can lead to electric-field screening at arbitrary long distances. After the collision, the mesons partially fragment into a quark-antiquark pair. The quarks are joined by an electric-field string which screens the background electric field (light-blue regions) inside the collision cone. As the quarks travel outward, their kinetic energy gets converted into the potential energy of the string. Eventually, they turn and propagate back in the opposite direction [see also Fig. 4(c)] causing a second collision. Weaker confinement \(\varepsilon=0.04\) allows the quarks to propagate farther.
Next, we project the time-evolved state onto two-particle components, focusing on the lightest two mesons \(\pi_{1},\pi_{2}\), and the quark-antiquark pair \(q\bar{q}\). While the latter are not true (i.e., asymptotic) quasiparticles, at weak confinement \(\varepsilon\ll 1\), (anti)quarks can be approx
Figure 4: Meson-meson scattering in the confined phase. (a) Time evolution of the electric field for different \(\theta=\pi-\varepsilon\) at all positions \(x\) [panels (i)] and at \(x=0\) [panels (ii)] with \(\mu^{2}=0.1\) and \(\lambda=0.5\) as in Fig. 3. The wave packets are centered at \(p_{0}=\pm 0.6\), corresponding to \(\mathcal{E}_{\text{CM}}/m_{\pi_{1}}=6.84,5.95\) for \(\varepsilon=0.04,0.07\). (b) Time evolution of the von Neumann entanglement entropy at all positions \(x\) [panels (i)] and at \(x=0\) [panels (ii)]. (c) Momenta and positions (mean \(\pm\) std. extracted from a Gaussian fit of the projected distributions) of the quarks for \(\varepsilon=0.04\) (top) and the mean positions of the right-moving mesons for \(\varepsilon=0.07\) (bottom). (d) Probabilities of two-particle states \(\mu\nu\) (\(\mu,\nu\in[\pi_{1},\pi_{2},q,\bar{q}]\)) where \(\mu/\nu\) is the particle on the left/right. The curves for \(\pi_{1}\pi_{2}\) and \(\pi_{2}\pi_{1}\) overlap due to the reflection symmetry of the initial state. Near the initial collision (shaded region), as well as near the secondary collision at \(t\sim 550\) for \(\varepsilon=0.07\), the state cannot be captured by a basis of asymptotic particles.
imately described by the modified quasiparticle ansatz of Ref. [48]. This requires a uMPS representation of the electric-flux string, which we approximate by its lowest energy state, a so-called "false-vacuum" state [117; 118], corresponding to the second (local) minimum in Fig. 2(c,i).
Figure 4(d) shows the probabilities of the \(\pi_{1}\pi_{1}\) (blue), \(\pi_{2}\pi_{2}\) (orange), \(\pi_{1}\pi_{2}\) (green), and \(\pi_{2}\pi_{1}\) (pink) combinations (where in state \(\mu\nu\), the particle \(\mu/\nu\) is on the left/right), and of the quark-antiquark state (red). One can observe significant flavor-conserving elastic scattering, \(\pi_{1}\pi_{1}\rightarrow\pi_{1}\pi_{1}\), a smaller probability of exciting one of the outgoing mesons, \(\pi_{2}\pi_{1}\) and \(\pi_{1}\pi_{2}\) (this smaller probability increases with stronger confinement \(\varepsilon=0.07\)), and a substantial \(q\bar{q}\) component. Interestingly, for \(\varepsilon=0.07\), the \(q\bar{q}\) component is decreasing in time, indicating string breaking [119; 50], which is also visible in the gradual increase of the bipartite entanglement entropy in Fig. 4(b,i) [see also Fig. 4(b,ii)], and in the gradual reduction of the electric-field screening [Fig. 4(a,ii)]. At a late time \(t=700\), asymptotic two-particle states account for about 90% (76%) of the state at \(\varepsilon=0.04\) (0.07) [120].
The projection onto the asymptotic two-particle basis also provides the full momentum, and consequently position, distributions of the particles. Figure 4(c) shows the mean and standard deviation of the positions and momenta of the quarks, and the mean positions of the mesons, computed from fits of these distributions to a Gaussian form. The mean momenta of the quarks are approximately \(\langle p(t)\rangle\propto\pm t\), in agreement with the expectation from the linear potential of Eq. (4). Their extracted positions in Fig. 4(c) are consistent with the boundaries of the screened-field region in Fig. 4(a,i) and with the localized increase in the entanglement entropy in Fig. 4(b,i). From the mean position of the mesons, Fig. 4(c), one can see that the heavier meson \(\pi_{2}\) has a slightly lower average velocity compared to \(\pi_{1}\), as expected.
_Discussion and outlook.--_First-principles numerical explorations and quantum simulations of dynamics in strongly interacting quantum field theories are starting to shed light on the rich phenomenology of particle collisions in real time. As a step toward this goal, using _ab initio_ numerical uMPS computations and working with a bosonized formulation of the Schwinger model, we analyzed the real-time dynamics of high-energy particle scattering in the nonperturbative regime of QED in 1+1 dimensions. We also proposed an analog circuit-QED implementation of the bosonized Schwinger model. This implementation requires minimal ingredients and no approximations (besides a lattice discretization), in contrast to previous circuit-QED proposals based on a quantum-link model [83]. We studied both the confined and deconfined regimes of the model, exhibiting a multitude of phenomena, including inelastic particle production, meson disintegration, and dynamical string formation and breaking.
In addition to the local electric-field and entanglement observables, the single-particle excitations allowed us to obtain complete time-resolved momentum and position distributions of the outgoing \(2\to 2\) scattered particles. To account for higher-order scattering beyond this two-particle characterization, it appears necessary to include states where two particles can be close, which could potentially be accomplished using the two-particle uMPS ansatz from Ref. [121]. This might also shed light on the nontrivial transient dynamics in Fig. 4(d). It would also be interesting to explore the energy dependence of string-breaking dynamics [122] as well as the possibility of formation of excited string states and their characterization beyond the false-vacuum approximation.
Ultimately, tensor-network methods are limited by entanglement growth, motivating quantum simulations using the proposed circuit-QED implementation for high-energy collisions. The proposed implementation can also be used to study quench dynamics. For example, the Schwinger mechanism or dynamical topological phase transitions can be studied in quenches of the \(\theta\) parameter [64; 123], which can be accomplished using time-dependent flux control [102].
Finally, our circuit-QED implementation applies to other bosonic theories [124; 125; 126; 127], including the \(\phi^{4}\) theory (achieved in the \(\beta\to 0\) limit) in 1+1 or 2+1 dimensions and generalizations of the bosonized Schwinger model, including to multi-flavor fermions [99; 128] and to Thirring interactions [103]. In the latter case, sufficiently strong Thirring interactions give rise to attractive short-range interactions between quarks in the deconfined phase, as shown in the SM [100], leading to stable meson particles and hence qualitatively different scattering dynamics.
_Acknowledgments.--_We acknowledge valuable discussion with A. Milsted and Z. Minev. The uMPS simulations were performed with the help of the MPSKit.jl Julia package ([https://github.com/maartenvd/MPSKit.jl](https://github.com/maartenvd/MPSKit.jl)). We thank M. Van Damme for help with the package. The authors acknowledge the University of Maryland's supercomputing resources ([http://hpcc.umd.edu](http://hpcc.umd.edu)) made available for conducting the research reported in this paper. R.B., S.W., A.F., and A.V.G. were supported in part by the National Science Foundation (NSF) Quantum Leap Challenge Institute (award no. OMA-2120757), Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research (ASCR), Accelerated Research in Quantum Computing program (award no. DE-SC0020312), ARO MURI, the DOE ASCR Quantum Testbed Pathfinder program (award no. DE-SC0019040), NSF Physics Frontier Center Quantum Computing program, AFOSR, AFOSR MURI, and DARPA SAVaNT ADVENT. Support is also acknowledged from the DOE, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. N.M. acknowl
edges funding by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, InQubator for Quantum Simulation (IQuS) ([https://iqus.uw.edu](https://iqus.uw.edu)) under Award Number DOE (NP) Award DE-SC0020970 via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science. Z.D. and N.M. acknowledge funding by the DOE, Office of Science, Office of Nuclear Physics via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science (award no. DE-SC0021143). Z.D. further acknowledges support by the DOE, Office of Science, Early Career Award (award no. DESC0020271). E.R.B acknowledges support from the DOE, Office of Science, Office of ASCR, Computational Science Graduate Fellowship (award no. DE-SC0023112).
|
2302.07284 | On the Likely Dynamical Origin of GW191109 and of Binary Black Hole
Mergers with Negative Effective Spin | With the growing number of binary black hole (BBH) mergers detected by
LIGO/Virgo/KAGRA, several systems have become difficult to explain via isolated
binary evolution, having components in the pair-instability mass gap, high
orbital eccentricities, and/or spin-orbit misalignment. Here, we focus on
GW191109\_010717, a BBH merger with component masses of $65^{+11}_{-11}$ and
$47^{+15}_{-13}$ $\rm M_{\odot}$, and effective spin $-0.29^{+0.42}_{-0.31}$,
which could imply a spin-orbit misalignment of more than $\pi/2$ radians for at
least one of its components. Besides its component masses being in the
pair-instability mass gap, we show that isolated binary evolution is unlikely
to reproduce the proposed spin-orbit misalignment of GW191109 with high
confidence. On the other hand, we demonstrate that BBHs dynamically assembled
in dense star clusters would naturally reproduce the spin-orbit misalignment
and the masses of GW191109, and the rates of GW191109-like events, if at least
one of the components were to be a second-generation BH. Finally, we generalize
our results to all the events with a measured negative effective spin, arguing
that GW200225 also has a likely dynamical origin. | Rachel C. Zhang, Giacomo Fragione, Chase Kimball, Vicky Kalogera | 2023-02-14T19:00:53Z | http://arxiv.org/abs/2302.07284v2 | On the Likely Dynamical Origin of GW191109 and of Binary Black Hole Mergers with Negative Effective Spin
###### Abstract
With the growing number of binary black hole (BBH) mergers detected by LIGO/Virgo/KAGRA, several systems have become difficult to explain via isolated binary evolution, having components in the pair-instability mass gap, high orbital eccentricities, and/or spin-orbit misalignment. Here, we focus on GW191109_010717, a BBH merger with component masses of \(65^{+11}_{-11}\) and \(47^{+15}_{-13}\) M\({}_{\odot}\), and effective spin \(-0.29^{+0.42}_{-0.31}\), which implies a spin-orbit misalignment of more than \(\pi/2\) radians for at least one of its components. Besides its component masses being in the pair-instability mass gap, we show that isolated binary evolution is unlikely to reproduce the spin-orbit misalignment of GW191109 with high confidence. On the other hand, we demonstrate that BBHs dynamically assembled in dense star clusters would naturally reproduce the spin-orbit misalignment and the masses of GW191109, and the rates of GW191109-like events, if at least one of the components were to be a second-generation BH. Finally, we generalize our results to all the events with a measured negative effective spin, arguing that GW200225 also has a likely dynamical origin.
Black holes (162) -- Gravitational wave sources (677)
## 1 Introduction
The LIGO/Virgo/KAGRA (LVK) Collaboration has recently released the third Gravitational Wave Transient Catalog (The LIGO Scientific Collaboration et al., 2021), which has brought the number of candidate binary black hole (BBH) mergers to more than 90 events, transforming our understanding of BHs and gravitational-wave (GW) physics (Abbott et al., 2020, 2020). With the upcoming fourth observational run and the next-generation observatories, such as LISA (eLISA Consortium et al., 2013), the Einstein Telescope (Maggiore et al., 2020), and Cosmic Explorer (Reitze et al., 2019), the number of GW detections will continue to quickly grow.
Despite the growing population of detected BH mergers, their origin is still highly uncertain. Two main formation channels have been discussed to explain the origin of merging compact objects: isolated binary evolution (e.g., Paczynski, 1976; van den Heuvel, 1976; Tutukov & Yungelson, 1993; Belczynski et al., 2002; Kalogera et al., 2007; Dominik et al., 2012, 2013; Postnov & Yungelson, 2014; Belczynski et al., 2016, 2016; Stevenson et al., 2017; van den Heuvel et al., 2017; Giacobbo & Mapelli, 2018; Neijssel et al., 2019; Spera et al., 2019; Bavera et al., 2021) and dynamical assembly in dense stellar environments (e.g., Portegies Zwart & McMillan, 2000; Rodriguez et al., 2015; Banerjee, 2017, 2018, 2018, 2019; Di Carlo et al., 2019; Askar et al., 2017; Fragione & Kocsis, 2018; Samsing, 2018; Samsing & D'Orazio, 2018; Kremer et al., 2020; Fragione & Banerjee, 2021). Sub-channels of these two broad categories include chemically homogeneous evolution of close binaries (e.g., de Mink et al., 2009; de Mink & Mandel, 2016; Mandel & de Mink, 2016; Marchant et al., 2016), hierarchical triple and quadruple systems (e.g., Antonini & Perets, 2012; Hoang et al., 2018; Fragione et al., 2019; Martinez et al., 2020; Hamers et al., 2021; Martinez et al., 2022) and formation in disks of active galactic nuclei (e.g., Bartos et al., 2017; Tagawa et al., 2018, 2020).
The isolated binary evolution and dynamical channels can be distinguished through several characteristic features. In the former case, merging BBHs may have component masses up to about \(45\,M_{\odot}\), as dictated by pair-instability physics (e.g., Heger & Woosley, 2002; Woosley et al., 2007; Farmer et al., 2019). Moreover, merging BHs have spins preferentially aligned with the orbital angular momentum (implying a positive effective spin), no residual eccentricity at \(10\,\mathrm{Hz}\), and mass ratios close to unity (Kalogera, 2000; Dominik et al., 2013; Samsing, 2018). On the other hand, BBH mergers catalyzed by dynamical encounters in dense star clusters have an isotropic orientation of spins relative to the orbital angular momentum (implying a symmetric distribution of the effective spin around zero) and a broader spectrum of eccentricities in the LVK frequency band, but still mass ratios preferentially close to unity (Rodriguez et al., 2016, 2018, 2019; Samsing, 2018; Martinez et al., 2022). Importantly, the
component masses of merging BHs can exceed the limit imposed by pair-instability physics, if they are the remnant of a hierarchical merger (e.g., Antonini et al., 2019; Fragione and Silk, 2020; Mapelli et al., 2021; Fragione et al., 2022).
In this paper, we focus on GW191109, the detected BBH merger with 90.6% of its effective inspiral spin distribution in the negative regime, as shown in Figure 1 (Hoy and Raymond, 2021). Effective inspiral spin is defined as \(\chi_{\rm eff}\equiv(m_{1}\chi_{1}+m_{2}\chi_{2})/(m_{1}+m_{2})\), with \(m_{i}\) and \(\chi_{i}\) (\(i=1,2\)) being the components masses and spins. This BBH merger event has primary and secondary masses of \(65^{+11}_{-11}\) and \(47^{+15}_{-13}\) M\({}_{\odot}\), respectively, and effective spin \(-0.29^{+0.42}_{-0.31}\), which implies a spin-orbit misalignment of more than \(90^{\circ}\) for at least one of its components. The posterior distribution of the effective spin of GW191109 is shown in Figure 1, along with that of GW200225_060421, the BBH merger event with 86.7% of its effective inspiral spin distribution lying in the negative regime (The LIGO Scientific Collaboration et al., 2021). We argue that the spin orientation is a strong indicator of isolated binary or dynamical formation for GW191109, and in general for any BBH merger with a negative value of its effective spin. We note that the observed effective spin distribution can be affected by glitches, and a glitch was found in the Livingston data for GW191109 (Davis et al., 2022). Thus, we acknowledge the uncertainties for this measurement and further discuss this in Section 4.
Our paper is organized as follows. In Section 2, we summarize the GW events with characteristic features that indicate an unlikely isolated binaries origin. In Section 3, we tailor our discussion to GW191109, and show that isolated binary evolution is very unlikely to produce GW191109-like events, while dynamics can easily explain its component masses, effective spin, and merger rates. We conclude and generalize our results in Section 4.
## 2 Binary Black Holes with an Unlikely Origin in Field Binaries
Here, we review BBH mergers which have been discussed to have an unlikely origin as isolated binaries.
1. _Masses in the pair instability mass gap range._ BHs with masses approximately in the range \(\sim 50\,M_{\odot}-120\,M_{\odot}\) (depending on the progenitor metallicity) are not expected to be formed from stellar collapse, due to runaway pair-instability processes (e.g., Fowler and Hoyle, 1964; Bond et al., 1984; Heger and Woosley, 2002; Woosley et al., 2007; Woosley, 2017; Farmer et al., 2019). The merger event GW190521 has component masses of about \(90\,M_{\odot}\) and \(60\,M_{\odot}\), nominally in the pair-instability mass gap. These masses can be naturally produced in dense stellar environments if these BHs are the remnants of a previous merger event (e.g., Miller and Hamilton, 2002; Antonini and Rasio, 2016; Gerosa and Berti, 2017; Rodriguez et al., 2019; Fragione et al., 2020; Kimball et al., 2021), or in AGN disks through subsequent mergers and gas accretion (e.g., Tagawa et al., 2020, 2020). Other candidate BBH events with at least one component BH in this upper mass gap range include GW190519_153544, GW190602_175927, GW190706_222641, GW200220_061928 (Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2021).
2. _Unequal mass ratios._ Both isolated binaries and cluster dynamics produce BBH mergers that have preferentially mass ratios close to unity (e.g., Dominik et al., 2013; Belczynski et al., 2016; Rodriguez et al., 2016, 2019). However, hierarchical mergers in dense star clusters can naturally produce smaller mass ratios (e.g., Fragione et al., 2022). GW190412 is a detected BBH merger with a nearly \(4:1\) mass ratio, which could easily be explained as a third-generation merger in a massive star cluster (Rodriguez et al., 2020). Other possibilities include a merger in a hierarchical triple, where the inner binary is driven to a merger by the Lidov-Kozai cycles imposed by the tidal field of the tertiary (Su et al., 2021; Martinez et al., 2022), or subsequent mergers in AGN disks (e.g., Tagawa et al., 2020, 2020).
3. _Non-zero orbital eccentricities._ BBHs tend to have zero eccentricity through isolated binary evolution. This is because orbits tend to circularize to minimize energy (Peters, 1964), and merger timescales in isolated binary evolution are long
Figure 1: Posterior distributions of the effective spin of GW191109 and GW200225_060421 derived from the pesummary package by Hoy and Raymond (2021). The probability of a negative effective spin is about 90.6% and 86.7% for the GW191109 and GW200225_060421, respectively.
enough for circularization to happen. However, in a dense stellar cluster, highly eccentric BBHs are formed during few-body interactions of BH systems. Specifically, \(\sim 5\%\) of all BBH mergers from globular clusters are likely to have an eccentricity \(\gtrsim 0.1\) in the LVK frequency band (Samsing, 2018). Another possibility to create merger events with large eccentricities is through Lidov-Kozai cycles in hierarchical systems (e.g., Fragione et al., 2019). Gayathri et al. (2022) uses numerical relativity simulations to interpret GW190521 as a potentially highly eccentric BBH, and Romero-Shaw et al. (2022) argues that GW191109 and GW200208 both have a non-negligible eccentricity in the LVK detection band.
4. _Negative effective inspiral spin parameter._ Isolated binary evolution leads to a preferential alignment between the BH spins and the binary orbital angular momentum, implying typically a positive value of the effective spin (Kalogera, 2000). A positive value of the effective spin may also be preferred by the AGN disk channel if radial migration of BHs is inefficient (e.g., Tagawa et al., 2020). Dynamical assembly in dense star clusters (Rodriguez et al., 2016) and mergers in hierarchical systems (Martinez et al., 2022) lead to an isotropic distribution of the effective spin around zero. Therefore, a system with a negative value of the effective spin is unlikely to originate in field binaries or AGN disks (for details see Section 3). Currently, the GW BBH events that show a likely negative observed \(\chi_{\rm eff}\) are GW191109 (see Section 3) and GW200225 (see Section 4).
## 3 The case of GW191109
In this Section, we show that the likely origin of GW191109 is dynamical assembly in dense star clusters. We use its masses, effective spin, and merger rate to show that field binaries are unlikely to produce GW191109. Then, we also discuss why other dynamical channels (hierarchical systems and AGN disks) are unlikely to explain this event.
### Isolated binary evolution
The isolated binary evolution channel for BBH formation involves two massive stars (\(\gtrsim 20\,M_{\odot}\)) in a relatively close orbit. Typically, the more massive one leaves its main sequence first, expanding in radius and possibly donating mass to the companion when filling its Roche lobe. Eventually, the primary undergoes a supernova (SN) explosion or directly collapses to form a BH. After the secondary also leaves its main sequence and expands, the system can have a phase of either stable mass transfer or unstable mass transfer. In the latter case, a common envelope may form (Paczynski, 1976; van den Heuvel, 1976; Tutukov and Yungelson, 1993; Belczynski et al., 2002; Dominik et al., 2012; Stevenson et al., 2017; Giacobbo and Mapelli, 2018). After the secondary also undergoes a SN explosion or direct collapse to a BH, a bound BBH is formed, which may merge in Hubble time.
In our analysis, we model the formation of GW191109 through isolated binary evolution using the following simple assumptions:
* The more massive star forms the more massive BH in the binary first, with no contribution to the final tilt of the orbital plane, as we assume that alignment of the BH spins and the binary orbital angular momentum occurred before the formation of the second BH.
* The kick from the second SN is the primary contribution to the tilt of the orbital plane with respect to the BBH spin axes.
* The natal kick imparted to the secondary BH at birth is isotropic.
With our methodology, we are able to conduct a controlled analysis of how different initial parameters affect the final distribution of tilts, thus avoiding all the uncertainties associated with modeling the full formation process. Moreover, the component masses of GW191109 (in particular the primary) lie in the mass gap, which can be formed in isolated binaries only when considering the uncertain combined effect of the hydrogen-rich envelopes, dredge-ups, and \({}^{12}\)C\((\alpha,\gamma)^{16}\)O nuclear rates in massive stars (e.g., Farmer et al., 2020; Renzo et al., 2020; Costa et al., 2021). Concerning spins, we expect that phases of mass transfer cause the alignment of the spin axes of the stars and BHs to the orbital angular momentum. As a result, the main contribution to the spin misalignment with respect to the orbital angular momentum (to eventually get a negative effective spin) is the natal kick imparted by an asymmetric SN explosion when forming the secondary BH.
To quantitatively assess the distributions of spin-orbit misalignment produced as a result of natal kicks, we compute the tilt of the binary orbit by comparing pre-SN and post-SN energy and angular momentum (Hills, 1983; Brandt and Podsiadlowski, 1995; Kalogera, 1996, 2000; Pijloo et al., 2012; Fragione et al., 2021). We consider a binary of masses \(m_{\rm BH,1}\) and \(m_{2}\), semi-major axis \(a\), and eccentricity \(e\). The SN explosion of the secondary changes the orbital semimajor axis and eccentricity to new values \(a_{\rm n}\) and \(e_{\rm n}\) and misaligns the orbital plane, both through the mass loss (\(\Delta m_{2}=m_{2}-m_{\rm BH,2}\)) and through the isotropic natal kick \(\mathbf{v_{k}}\) imparted to the newly born BH. Assuming that the SN takes place instantaneously, at a relative separation \(\mathbf{r}\) and velocity \(\mathbf{v}\), the misalignment between the post-SN and pre-SN orbit is
\[\Delta\theta=\arccos\left(\frac{\mathbf{h}\cdot\mathbf{h}_{\rm n}}{h\ h_{\rm n }}\right)\,, \tag{1}\]
where
\[|\mathbf{h}|^{2}=|\mathbf{r}\times\mathbf{v}|^{2}=G(m_{\mathrm{BH,1}}+m_{2})a(1-e^{2})\,, \tag{2}\]
is the pre-SN angular momentum, and
\[|\mathbf{h}_{\mathbf{n}}|^{2}=|\mathbf{r}\times\mathbf{v}_{\mathbf{n}}|^{2}=G(m_{\mathrm{BH,1}}+m_{\mathrm{BH,2}})a_{\mathrm{n}}(1-e_{\mathrm{n}}^{2})\,, \tag{3}\]
is the post-SN angular momentum, with \(\mathbf{v}_{\mathbf{n}}=\mathbf{v}+\mathbf{v}_{\mathbf{k}}\) being the new relative velocity.
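The geometry of Eqs. (1)-(3) can be made explicit with a short Monte Carlo sketch: draw an orbital phase, add an isotropic kick to the relative velocity, and compare the pre- and post-SN angular momentum vectors. The parameter values in the example call are illustrative, and the routine is a simplified stand-in for the full procedure (it does not update \(a_{\rm n}\) and \(e_{\rm n}\), only the tilt and whether the binary remains bound).

```python
import numpy as np

G = 4 * np.pi**2          # AU^3 Msun^-1 yr^-2
KMS_TO_AUYR = 0.2108      # 1 km/s in AU/yr
rng = np.random.default_rng(1)

def post_kick_tilt(m1, m2_pre, m2_post, a, e, vkick_kms):
    """Tilt [Eq. (1)] between pre- and post-SN orbital angular momenta for one random
    orbital phase and one isotropic kick; also reports whether the binary stays bound."""
    # sample the orbital phase uniformly in mean anomaly and solve Kepler's equation
    M = rng.uniform(0.0, 2.0 * np.pi)
    E = M
    for _ in range(50):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    # relative position and velocity in the pre-SN orbital plane (z = 0)
    pos = np.array([a * (np.cos(E) - e), a * np.sqrt(1 - e**2) * np.sin(E), 0.0])
    n = np.sqrt(G * (m1 + m2_pre) / a**3)                  # mean motion
    vel = a * n / (1.0 - e * np.cos(E)) * np.array([-np.sin(E),
                                                    np.sqrt(1 - e**2) * np.cos(E), 0.0])
    # isotropic natal kick added to the relative velocity
    u = rng.normal(size=3)
    vel_new = vel + vkick_kms * KMS_TO_AUYR * u / np.linalg.norm(u)
    h_pre, h_post = np.cross(pos, vel), np.cross(pos, vel_new)
    tilt = np.arccos(np.clip(np.dot(h_pre, h_post)
                             / (np.linalg.norm(h_pre) * np.linalg.norm(h_post)), -1, 1))
    bound = 0.5 * np.dot(vel_new, vel_new) - G * (m1 + m2_post) / np.linalg.norm(pos) < 0
    return tilt, bound

# illustrative draw: 65 + 57 Msun pre-SN binary, 10 Msun lost, a = 0.05 AU, e = 0.1, 50 km/s kick
tilts = [post_kick_tilt(65.0, 57.0, 47.0, 0.05, 0.1, 50.0) for _ in range(10_000)]
bound_tilts = [t for t, b in tilts if b]
frac = np.mean([t > np.pi / 2 for t in bound_tilts]) if bound_tilts else 0.0
print("fraction of bound post-SN systems tilted beyond pi/2:", frac)
```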
Since we are modeling binary systems that result in a GW191109-like merger, we set the primary mass to be \(65M_{\odot}\) and the post-SN secondary mass to be \(47M_{\odot}\), taking the medians of their respective parameter estimation distributions. For the remaining parameters, we test a range of initial conditions for a total of 96 different models:
* Initial semimajor axis: We take into account both uniform and log-uniform distributions over the ranges \(10^{-2}\) AU - \(10^{-1}\) AU and \(10^{-2}\) AU - 1 AU, for a total of 4 distinct semimajor axis distributions.
* Initial eccentricity: We take into account both uniform and thermal distributions, for a total of 2 distinct eccentricity distributions.
* Pre-SN mass of the secondary: We take into account three mass-loss distributions. The first is a uniform mass-loss distribution, corresponding to pre-SN masses uniformly distributed between \(47\mathrm{M}_{\odot}\) and \(57\mathrm{M}_{\odot}\). The other two pre-SN mass distributions are simulated distributions generated using the Single Star Evolution (SSE) code with metallicities 0.001 and 0.01 (Hurley et al., 2000). For the SSE simulations, we run a population of single stars to Hubble time and extract only stars with post-SN masses within the mass error range of the GW191109 secondary mass: \((34\mathrm{M}_{\odot},62\mathrm{M}_{\odot})\). We show the SSE-simulated mass loss distributions in Appendix A.1 and A.2. We add these mass-loss values to the GW191109 secondary mass median value, \(47\mathrm{M}_{\odot}\), to get the two SSE pre-SN mass distributions.
* Kick velocity: We use a Maxwellian distribution \[p(v_{\mathrm{kick}})\propto v_{\mathrm{kick}}^{2}\exp\left(-\frac{v_{\mathrm{ kick}}^{2}}{2\sigma^{2}}\right),\] (4) with velocity dispersion \(\sigma\) to model the velocity kick imparted to the secondary BH at birth. Since
Figure 2: The cumulative distribution of the final tilts for binaries with the following initial conditions: the mass loss distribution is simulated by SSE with initial metallicities of 0.01 (see Appendix A.2), the initial semimajor axis distribution is log-uniform with range \(10^{-2}\) AU - \(10^{-1}\) AU, and the initial eccentricity distribution is uniform. Different colors show different kick velocity distributions: \(\sigma=50\) km s\({}^{-1}\) (blue), \(\sigma=100\) km s\({}^{-1}\) (black), \(\sigma=265\) km s\({}^{-1}\) (purple), and \(\sigma=265\) km s\({}^{-1}\) scaled by momentum conservation (red). The dotted yellow line marks \(\pi/2\), the tilt angle at which the spin of the post-SN mass would be anti-aligned with the orbital angular momentum, possibly producing a negative \(\chi_{\mathrm{eff}}\).
\(\sigma\) is highly uncertain, we consider 4 different models. We use \(\sigma=50\) km s\({}^{-1}\), 100 km s\({}^{-1}\), and 265 km s\({}^{-1}\), for three of our models. Note that \(\sigma=265\) km s\({}^{-1}\) is the typical kick expected for NSs as found by Hobbs et al. (2005), while \(\sigma=100\) km s\({}^{-1}\) follows the analysis of Arzoumanian et al. (2002). For the fourth model, we adopt natal kicks from Eq. 4 as for NSs, but with \(\sigma\) scaled by linear momentum conservation as follows (Repetto et al., 2012; Janka, 2013):
\[v_{\rm BH}=\frac{\langle m_{\rm NS}\rangle}{m_{\rm BH}}v_{\rm NS}, \tag{5}\]
where \(v_{\rm BH}\) is the natal kick on a BH with mass \(m_{\rm BH}\), \(\langle m_{\rm NS}\rangle\approx 1.3M_{\odot}\) is the average mass of NSs, and \(v_{\rm NS}\) is randomly drawn from the Maxwellian distribution with \(\sigma=265\) km s\({}^{-1}\). Here, we fix \(m_{\rm BH}\) to \(47\,M_{\odot}\), the median mass of the secondary BH in GW191109. We note that more recently, non-Maxwellian distributions have been used in the literature (Kapil et al., 2023), but we do not test these distributions in our modeling. A code sketch of this kick sampling is given below.
For each combination of parameter distributions, we simulate \(10^{5}\) binaries for a total of 9.6 million binaries.
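For reference, the kick magnitudes entering the four models can be drawn as follows: a Maxwellian of dispersion \(\sigma\) is obtained from three independent Gaussian velocity components, and the momentum-conservation model simply rescales each draw by \(\langle m_{\rm NS}\rangle/m_{\rm BH}\) as in Eq. (5). This is a minimal sketch of the sampling only, not of the full binary evolution.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_kicks(sigma, n, m_bh=None, m_ns_mean=1.3):
    """Kick magnitudes (km/s) from a Maxwellian with dispersion sigma (Eq. 4);
    if m_bh is given, each draw is rescaled by momentum conservation (Eq. 5)."""
    v = np.linalg.norm(rng.normal(scale=sigma, size=(n, 3)), axis=1)
    if m_bh is not None:
        v *= m_ns_mean / m_bh
    return v

for label, kwargs in [("sigma = 50 km/s", dict(sigma=50.0)),
                      ("sigma = 265 km/s", dict(sigma=265.0)),
                      ("momentum-scaled (m_BH = 47 Msun)", dict(sigma=265.0, m_bh=47.0))]:
    v = sample_kicks(n=100_000, **kwargs)
    print(f"{label:34s} mean = {v.mean():7.2f} km/s, median = {np.median(v):7.2f} km/s")
```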
Our results primarily focus on the final tilt after the second SN. In order to have a negative \(\chi_{\rm eff}\), the natal kick imparted to the secondary has to be strong enough to tilt the orbital angular momentum more than \(\pi/2\) with respect to its initial orientation. In Figure 2, we show the cumulative distribution of the final tilts for binaries, with an initial semimajor axis distribution following a log-uniform distribution in range \(10^{-2}\) AU - \(10^{-1}\) AU, an initial uniform eccentricity distribution, and an SSE-simulated pre-SN mass distribution with metallicity 0.01.
\begin{table}
\begin{tabular}{c|c c c c} \hline & Momentum conservation & \(\sigma=50\) km s\({}^{-1}\) & \(\sigma=100\) km s\({}^{-1}\) & \(\sigma=265\) km s\({}^{-1}\) \\ \hline \multicolumn{4}{c|}{**Uniform mass loss**} \\ \hline a: log-uniform (0.1 AU), e: uniform & \(0^{*}\) & \(5.9\times 10^{-4}\) & \(2.32\times 10^{-3}\) & \(1.487\times 10^{-2}\) \\ \hline a: log-uniform (0.1 AU), e: thermal & \(2\times 10^{-5*}\) & \(1.21\times 10^{-3}\) & \(4.54\times 10^{-3}\) & \(2.709\times 10^{-2}\) \\ \hline a: uniform (0.1 AU), e: uniform & \(0^{*}\) & \(8.4\times 10^{-4}\) & \(3.32\times 10^{-3}\) & \(2.132\times 10^{-2}\) \\ \hline a: uniform (0.1 AU), e: thermal & \(2\times 10^{-5*}\) & \(1.88\times 10^{-3}\) & \(6.58\times 10^{-3}\) & \(3.827\times 10^{-2}\) \\ \hline a: log-uniform (1 AU), e: uniform & \(7\times 10^{-5}\) & \(3.75\times 10^{-3}\) & \(1.225\times 10^{-2}\) & \(4.523\times 10^{-2}\) \\ \hline a: log-uniform (1 AU), e: thermal & \(1.1\times 10^{-4}\) & \(6.52\times 10^{-3}\) & \(2.179\times 10^{-2}\) & \(6.765\times 10^{-2}\) \\ \hline a: uniform (1 AU), e: uniform & \(3.2\times 10^{-4}\) & \(1.456\times 10^{-2}\) & \(3.985\times 10^{-2}\) & 0.12026 \\ \hline a: uniform (1 AU), e: thermal & \(4.4\times 10^{-4}\) & \(2.194\times 10^{-2}\) & \(6.322\times 10^{-2}\) & 0.15382 \\ \hline \multicolumn{4}{c|}{**SSE mass loss (0.001)**} \\ \hline a: log-uniform (0.1 AU), e: uniform & \(2\times 10^{-5*}\) & \(6.1\times 10^{-4}\) & \(2.53\times 10^{-3}\) & \(1.641\times 10^{-2}\) \\ \hline a: log-uniform (0.1 AU), e: thermal & \(0^{*}\) & \(1.21\times 10^{-3}\) & \(4.98\times 10^{-3}\) & \(2.998\times 10^{-2}\) \\ \hline a: uniform (0.1 AU), e: uniform & \(2\times 10^{-5*}\) & \(8.8\times 10^{-4}\) & \(3.60\times 10^{-3}\) & \(2.330\times 10^{-2}\) \\ \hline a: uniform (0.1 AU), e: thermal & \(4\times 10^{-5*}\) & \(1.79\times 10^{-3}\) & \(6.41\times 10^{-3}\) & \(4.177\times 10^{-2}\) \\ \hline a: log-uniform (1 AU), e: uniform & \(4\times 10^{-5}\) & \(6.4\times 10^{-4}\) & \(2.47\times 10^{-3}\) & \(1.627\times 10^{-2}\) \\ \hline a: log-uniform (1 AU), e: thermal & \(2\times 10^{-5}\) & \(1.12\times 10^{-3}\) & \(4.52\times 10^{-3}\) & \(2.954\times 10^{-2}\) \\ \hline a: uniform (1 AU), e: uniform & \(2.9\times 10^{-4}\) & \(1.380\times 10^{-2}\) & \(4.099\times 10^{-2}\) & 0.12744 \\ \hline a: uniform (1 AU), e: thermal & \(7.0\times 10^{-4}\) & \(2.190\times 10^{-2}\) & \(6.382\times 10^{-2}\) & 0.15781 \\ \hline \multicolumn{4}{c|}{**SSE mass loss (0.01)**} \\ \hline a: log-uniform (0.1 AU), e: uniform & \(0^{*}\) & \(6.7\times 10^{-4}\) & \(2.16\times 10^{-3}\) & \(1.535\times 10^{-2}\) \\ \hline a: log-uniform (0.1 AU), e: thermal & \(10^{-5*}\) & \(1.21\times 10^{-3}\) & \(4.52\times 10^{-3}\) & \(2.817\times 10^{-2}\) \\ \hline a: uniform (0.1 AU), e: uniform & \(2\times 10^{-5*}\) & \(8.5\times 10^{-4}\) & \(3.05\times 10^{-3}\) & \(2.102\times 10^{-2}\) \\ \hline a: uniform (0.1 AU), e: thermal & \(2\times 10^{-5*}\) & \(1.74\times 10^{-3}\) & \(6.04\times 10^{-3}\) & \(3.805\times 10^{-2}\) \\ \hline a: log-uniform (1 AU), e: uniform & \(10^{-5}\) & \(5.1\times 10^{-4}\) & \(2.12\times 10^{-3}\) & \(1.482\times 10^{-2}\) \\ \hline a: log-uniform (1 AU), e: thermal & \(10^{-5}\) & \(1.16\times 10^{-3}\) & \(4.21\times 10^{-3}\) & \(2.790\times 10^{-2}\) \\ \hline a: uniform (1 AU), e: uniform & \(2.5\times 10^{-4}\) & \(1.339\times 10^{-2}\) & \(4.028\times 10^{-2}\) & 0.12236 \\ \hline a: uniform (1 AU), e: thermal & \(4.4\times 10^{-4}\) & \(2.152\times 10^{-2}\) & \(6.358\times 
10^{-2}\) & 0.15993 \\ \hline \end{tabular}
\end{table}
Table 1: Probability of producing systems with tilt greater than \(\pi/2\) after the secondary SN kicks. Note that the (0.1 AU) or (1 AU) in each model is the upper bound for the semimajor axis range. Values with \(*\) mark the models that are more physically motivated.
The models in Figure 2 clearly demonstrate that it is unlikely to tilt binaries above \(\pi/2\), and a direct correlation exists between higher velocity kicks and the fraction of binaries experiencing greater tilts. Specifically, no binaries experienced tilts greater than \(\pi/2\) when the velocity kick distribution followed momentum conservation, while fractions of \(6.7\times 10^{-4}\), \(2.16\times 10^{-3}\), and \(1.535\times 10^{-2}\) of the \(10^{5}\) binaries experienced tilts greater than \(\pi/2\) for Maxwellian velocity kick distributions with \(\sigma=50\) km s\({}^{-1}\), \(\sigma=100\) km s\({}^{-1}\), and \(\sigma=265\) km s\({}^{-1}\), respectively.
The results for all 96 models are summarized in Table 1. We place a star next to the values corresponding to models that are more likely to be representative of a population of merging BBHs. One of the factors that we consider is whether or not the BBHs can merge in Hubble time to be observed today, given the merger timescale (Peters, 1964)
\[T_{\rm GW} \approx 13{\rm Gyr}\left(\frac{2\times 10^{3}\,{\rm M}_{\odot}^{3}}{m_{ \rm BH,1}m_{\rm BH,2}(m_{\rm BH,1}+m_{\rm BH,2})}\right) \tag{6}\] \[\times \left(\frac{a_{n}}{0.095{\rm AU}}\right)^{4}(1-e_{n}^{2})^{7/2}\,.\]
This equation places a limit on how large the semimajor axis can be in order for a BBH to merge in Hubble time. Specifically, given \(m_{\rm BH,1}\) and \(m_{\rm BH,2}\) as 65 and 47 \(M_{\odot}\), respectively, a semimajor axis of 1 AU would require an eccentricity of at least 0.84 for the merger timescale to be less than a Hubble time. On the other hand, a semimajor axis of 0.1 AU would allow for the full range of eccentricities. Thus, a uniform or log-uniform semimajor axis distribution ranging from 0.01 to 0.1 AU is more likely to be representative of a population of merging BBHs than that of the range 0.01 to 1 AU. For completeness, we show all of these models in Table 1, but we star the values with an upper bound of 0.1 AU for the semimajor axis distribution. Another factor to consider when assessing what more likely represents a population of merging BBHs is the velocity kick distribution. A Maxwellian distribution with a \(\sigma\) value of 265 km s\({}^{-1}\) is the expected velocity kick distribution for neutron stars, but is very unlikely to apply to the much more massive BHs considered here, due to momentum conservation (Belczynski et al., 2008). Thus, we star the numbers in Table 1 that assume a momentum-conservation velocity kick distribution. Looking at the starred values in Table 1, we find that at most only 0.004% of isolated binaries can be tilted enough such that a negative effective inspiral spin is produced.
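A direct evaluation of Eq. (6) reproduces these statements; the snippet below computes the merger timescale for the median GW191109 masses and solves for the minimum eccentricity needed at \(a_{\rm n}=1\) AU, assuming a Hubble time of about 13.8 Gyr.

```python
import numpy as np
from scipy.optimize import brentq

def t_gw_gyr(m1, m2, a_AU, e):
    """Merger timescale in Gyr from the Peters (1964) approximation in Eq. (6)."""
    return 13.0 * (2.0e3 / (m1 * m2 * (m1 + m2))) * (a_AU / 0.095)**4 * (1 - e**2)**3.5

m1, m2 = 65.0, 47.0
print("T_GW(a = 0.1 AU, e = 0) =", round(t_gw_gyr(m1, m2, 0.1, 0.0), 3), "Gyr")
print("T_GW(a = 1 AU,   e = 0) =", round(t_gw_gyr(m1, m2, 1.0, 0.0), 1), "Gyr")
# minimum eccentricity for a 1 AU binary to merge within ~13.8 Gyr
e_min = brentq(lambda e: t_gw_gyr(m1, m2, 1.0, e) - 13.8, 0.0, 0.999)
print("minimum e for a 1 AU binary to merge within a Hubble time:", round(e_min, 2))
```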
Due to the improbable chance of creating a binary with both a negative \(\chi_{\rm eff}\) and BH component masses in the mass gap (e.g., Belczynski et al., 2020; Farmer et al., 2020; Renzo et al., 2020; Costa et al., 2021), the rates for this formation channel would therefore be too low to explain GW191109. Thus, we conclude that isolated binary evolution is very unlikely to produce GW191109-like events.
### Dynamics in dense star clusters
Merging BBHs can be assembled dynamically in dense star clusters. Here, BHs quickly segregate to the center through dynamical friction (Chandrasekhar, 1943; Spitzer, 1987) and interact with each other to form binaries. If the binary is hard, subsequent three- and four-body encounters with other BHs further harden the binary by extracting orbital energy (Heggie, 1975), which can eventually merge (e.g., McMillan et al., 1991; Hut et al., 1992; Portegies Zwart and McMillan, 2000; Rodriguez et al., 2016; Fragione et al., 2018; Samsing et al., 2018; Banerjee et al., 2020).
An interesting possibility for BBH mergers formed through dynamical interactions is that the merger remnant can be retained in the parent cluster, whenever the relativistic recoil kick imparted as a result of asymmetric emission of GWs is smaller than the cluster escape speed (e.g., Lousto et al., 2010, 2012). The retained BH, which is now a second-generation BH (2G), can be again dynamically processed to form a new BBH system that eventually merges. Higher-generation mergers account for \(\sim 10\%\) of mergers from globular clusters, with this fraction increasing for denser environments such as nuclear star clusters (e.g., Rodriguez et al., 2019; Antonini et al., 2019; Mapelli et al., 2021; Fragione et al., 2022, 2022). From the second LIGO-Virgo GW Transient Catalog, Kimball et al. (2021) finds that the catalog contains at least one second-generation merger with 99% credibility and lists five BBH mergers with high odds of involving at least one second-generation BH.
In Figure 3, we show the component masses of the merging BBHs extracted from the public globular cluster models of Kremer et al. (2020). Out of 148 globular cluster models, we find that there are 169 1G-2G and 24 2G-2G BBHs that have primary and secondary masses consistent with the 90% confidence interval mass ranges of GW191109 (The LIGO Scientific Collaboration et al., 2021). Note that some of the 1G BHs have masses well within the mass gap since they originate from the collapse of very massive stars born as a result of repeated stellar mergers (Gonzalez et al., 2021). Therefore, 1G-2G and 2G-2G BBHs dynamically formed in dense star clusters can easily produce the masses of GW191109-like events, despite lying in the pair instability mass gap, unlike isolated binaries.
For dynamically assembled BBHs, the population of BH spins relative to the orbital angular momentum axis is isotropic, as a result of the few-body encounters that catalyze the merger of BBHs. We assume that 1G BHs are born with negligible spin (Fuller and Ma, 2019) and 2G BHs have spins of about 0.7 (e.g., Buonanno et al., 2008; Tichy and Marronetti, 2008). Combining the isotropically distributed spin directions with the component masses of GW191109, we get the \(\chi_{\rm eff}\) distributions for both 1G-2G BBHs and 2G-2G BBHs, as shown in Figure 4. Since the distributions are symmetric around zero, with
the 1G-2G distribution uniformly distributed between -0.41 and 0.41 and the 2G-2G distribution normally distributed between -0.70 and 0.70, negative values of \(\chi_{\rm eff}\) are as likely as positive values. Under the assumption that 1G BHs are born with negligible spin, the median \(\chi_{\rm eff}\) value of GW191109 of \(-0.29\) can be easily accounted for with dynamically assembled BBHs as long as one of the components is a 2G BH.
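These distributions follow from drawing isotropic spin tilts (uniform \(\cos\theta_{i}\)) for the GW191109 median masses, with spin magnitudes of 0 for 1G and \(\approx 0.7\) for 2G BHs as assumed above. A minimal sampling sketch is given below; for the 1G-2G case, the 2G BH is taken to be the primary, which reproduces the quoted \(\pm 0.41\) range.

```python
import numpy as np

rng = np.random.default_rng(3)
m1, m2 = 65.0, 47.0                      # GW191109 median component masses
n = 200_000
cos1, cos2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)   # isotropic spin orientations

def chi_eff(chi1, chi2):
    return (m1 * chi1 * cos1 + m2 * chi2 * cos2) / (m1 + m2)

samples = {"1G-2G (2G primary)": chi_eff(0.7, 0.0),   # non-spinning 1G, chi ~ 0.7 for 2G
           "2G-2G":              chi_eff(0.7, 0.7)}
for label, c in samples.items():
    print(f"{label:20s} range ~ [{c.min():+.2f}, {c.max():+.2f}],"
          f"  P(chi_eff <= -0.29) = {(c <= -0.29).mean():.3f}")
```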
Finally, we use the number of merging BBH consistent with GW191109 to compute the rates of GW191109-like events
\[R=\frac{N\rho_{\rm GC}}{\tau_{\rm H}}, \tag{7}\]
where \(N\) is the number of BBHs in the GW191109 mass range per globular cluster, \(\rho_{\rm GC}\) is the density of globular clusters in the Universe, and \(\tau_{\rm H}\) is Hubble time. We extract N from Figure 3 by taking the number of simulated 1G-2G and 2G-2G events with masses within the GW191109 90% confidence interval range and dividing it by the total number of simulated globular clusters, which is 169/148 and 24/148, respectively. We assume a value of 0.77 Mpc\({}^{-3}\) for \(\rho_{\rm GC}\) with an optimistic value of 2.31 Mpc\({}^{-3}\) and pessimistic value of 0.32 Mpc\({}^{-3}\), using the calculations from Rodriguez et al. (2015). In Figure 5, we indicate the rate calculation assuming \(\rho_{\rm GC}\) = 0.77 Mpc \({}^{-3}\) with the vertical green dotted line, and the shaded region accounts for the range of calculated rates bounded by the optimistic and pessimistic values for \(\rho_{\rm GC}\). Assuming \(\rho_{\rm GC}\) = 0.77 Mpc\({}^{-3}\), we get an approximate rate for 1G-2G events of 0.064 Gpc\({}^{-3}\) yr\({}^{-1}\) and for 2G-2G events of 0.0090 Gpc\({}^{-3}\) yr\({}^{-1}\), leading to a total rate of 0.073 Gpc\({}^{-3}\) yr\({}^{-1}\) for 1G-2G or 2G-2G events having masses in the GW191109 90% confidence interval range. For an optimistic value of \(\rho_{\rm GC}\) = 2.31 Mpc\({}^{-3}\), that total rate rises to 0.218 Gpc\({}^{-3}\) yr\({}^{-1}\), and for a pessimistic value of \(\rho_{\rm GC}\) = 0.32 Mpc\({}^{-3}\), that total rate decreases to 0.030 Gpc\({}^{-3}\) yr\({}^{-1}\).
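The numbers quoted above follow directly from Eq. (7); the short sketch below reproduces them, assuming a Hubble time of about 13.8 Gyr and converting \(\rho_{\rm GC}\) from Mpc\({}^{-3}\) to Gpc\({}^{-3}\).

```python
counts = {"1G-2G": 169, "2G-2G": 24}     # GW191109-like mergers found in the cluster models
n_clusters, tau_hubble_yr = 148, 13.8e9  # 148 simulated clusters; ~13.8 Gyr Hubble time
for rho_gc in (0.32, 0.77, 2.31):        # Mpc^-3: pessimistic, fiducial, optimistic
    rates = {k: (n / n_clusters) * rho_gc * 1e9 / tau_hubble_yr for k, n in counts.items()}
    total = sum(rates.values())
    print(f"rho_GC = {rho_gc:4.2f} Mpc^-3:  "
          + ", ".join(f"{k} = {r:.4f}" for k, r in rates.items())
          + f", total = {total:.3f} Gpc^-3 yr^-1")
```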
We then compare our estimate to the expected number of GW191109-like mergers given one detection by the LIGO-Virgo detector network. Following the method described in Kim et al. (2003), we calculate this rate assuming one Poisson-distributed count from a BBH population with masses and spins drawn from the publicly available parameter estimation samples for GW191109 (The LIGO Scientific Collaboration et al., 2021). We calculate the network sensitivity to this population across
Figure 3: Primary mass and secondary mass of 1G-2G and 2G-2G BBHs from 148 simulated globular clusters in the models of Kremer et al. (2020). For 1G-2G mergers, we distinguish between 1G BHs born as a result of the collapse of massive stars (blue points) and 1G BHs formed via star collisions (yellow points).
O1, O2, and O3 using a combined SNR of 10 as a detection threshold. Using an \(R^{-1/2}\) Jeffrey's prior, we find a GW191109-like merger rate of \(R_{191109}=0.09^{+0.2}_{-0.07}\) Gpc\({}^{-3}\) yr\({}^{-1}\) (90% confidence interval), and plot the rate posterior in Figure 5. Here, we see that the simulated combined hierarchical rate lies well within the rate distribution of detecting a GW191109-like merger.
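For completeness, the statistical step is straightforward to sketch: with one Poisson count and an \(R^{-1/2}\) Jeffreys prior, the rate posterior is a Gamma distribution with shape 3/2 and scale \(1/\langle VT\rangle\), where \(\langle VT\rangle\) is the surveyed time-volume. The value of \(\langle VT\rangle\) below is a placeholder chosen only to illustrate the interval construction, not the network sensitivity actually computed for O1-O3.

```python
from scipy.stats import gamma

VT = 12.0                                   # Gpc^3 yr -- placeholder surveyed time-volume
posterior = gamma(a=1.5, scale=1.0 / VT)    # p(R | one count) ~ R^(1/2) * exp(-R * VT)
lo, med, hi = posterior.ppf([0.05, 0.5, 0.95])
print(f"R = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) Gpc^-3 yr^-1  (90% interval)")
```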
In summary, BBHs assembled in dense star clusters can explain the masses, the effective spin, and the rate of GW191109, unlike isolated binary evolution. Note also that our interpretation of GW191109 as having a dynamical origin is consistent with the findings in Romero-Shaw et al. (2022). In this study, they used a reweighting method (Payne et al., 2019; Romero-Shaw et al., 2019) to calculate the eccentricity posterior probability distribution and found that 72.19% of the posterior supports an eccentricity above 0.05, and 62.63% above 0.1. This likely non-zero orbital eccentricity also points towards a dynamical origin of GW191109, consistent with our results.
### Other channels
Another astrophysical scenario to consider for the possible formation of GW191109-like events is the AGN channel (e.g., Bartos et al., 2017; Tagawa et al., 2020). The spin evolution of stellar-mass BHs in AGN disks has been studied by Tagawa et al. (2020). Through their semi-analytical simulations, they find that while gas accretion enhances the effective spin towards more positive values, hard binary-single interactions in the disks may reduce it. Therefore, the more efficient the radial migration of BHs to the inner, densely populated regions of the AGN disk, the more symmetric around zero the distribution of effective spins of merging BBHs becomes. However, if this migration is inefficient, then the \(\chi_{\rm eff}\) values would be skewed towards higher values. Tagawa et al. (2020) finds that BBH mergers with component masses in the mass-gap can be reproduced in the AGN channel, but their rates are highly uncertain. Considering the effective inspiral spin, masses, and rates, we cannot confidently say that GW191109 is likely to be formed via the AGN disk channel.
Hierarchical triple systems have been extensively studied in the literature (e.g., Antonini and Perets, 2012; Hoang et al., 2018; Hamers et al., 2021). Using population synthesis of triple stars that form a BH triplet, Martinez et al. (2022) found that the distribution of effective spins for merging BBH is likely symmetric around zero, as a result of the Lidov-Kozai oscillations. Therefore, hierarchical systems can produce mergers with a negative value of \(\chi_{\rm eff}\). However, this channel is unlikely to produce mass-gap BBHs for the same reasons isolated binaries cannot, that is the masses of the component BHs are limited by the pair-instability physics. Finally, Martinez et al. (2022) finds that although highly uncertain, the hierarchical triple rates are estimated to possibly account for only a fraction of the observed BBH rates, which renders this scenario unlikely for GW191109-like events.
Because these other channels are unlikely to be the origin of GW191109-like events, we argue that dynamical assembly in dense star clusters is the most likely formation channel for GW191109.
Figure 4: Distribution of effective spin for 1G-2G and 2G-2G BBHs, for component masses consistent with GW191109 and assuming non-spinning 1G BHs. Both distributions are symmetric around zero, with a non-negligible likelihood to reproduce negative \(\chi_{\rm eff}\) values.
Figure 5: In solid blue, we plot the posterior over the rate of GW191109-like mergers assuming one Poisson-distributed count from a population drawn from the public parameter estimation samples. The dashed blue lines indicate the 90% confidence interval. The dotted green line marks the combined 1G-2G and 2G-2G merger rate, assuming \(\rho_{\rm GC}\) = 0.77 Mpc \({}^{-3}\), and the shaded green region marks the range of rates bounded by assuming \(\rho_{\rm GC}\) = 0.32 Mpc \({}^{-3}\) and \(\rho_{\rm GC}\) = 2.31 Mpc \({}^{-3}\).
## 4 Conclusions and Discussion
In this paper, we have demonstrated that GW191109 is unlikely to originate from isolated binary evolution based on its negative effective inspiral spin, masses, and inferred rate. Other channels such as hierarchical systems and AGN disks are also unlikely to explain GW191109. Combined with the possibility of non-zero eccentricity (Romero-Shaw et al., 2022), all the evidence points towards a dynamical origin for this source.
Furthermore, this result can be extended to binary systems of any mass with some uncertainties. Kalogera (2000) applied a similar prescription for BBH binaries, testing much lower masses in the 5 M\({}_{\odot}\)-20 M\({}_{\odot}\) range and found that even by assuming a Maxwellian distribution with \(\sigma=200\) km s\({}^{-1}\) for the velocity kicks, it is extremely unlikely for isolated binary evolution to tilt a BH over \(\pi/2\) to contribute to a negative effective inspiral spin. We have reproduced similar results using our prescriptions for the different models with the BBH masses attained from GW200225, 19.3 M\({}_{\odot}\) and 14 M\({}_{\odot}\). This GW event, as shown in Figure 1, also has a very likely negative effective inspiral spin. For the models involving a semimajor axis distribution with an upper bound of 0.1 AU and velocity kicks following momentum conservation, no more than 0.09% of all binaries can be tilted over \(\pi/2\) to produce a negative effective inspiral spin, demonstrating that even regardless of mass, it is extremely unlikely for isolated binary evolution to produce a negative effective inspiral spin.
To study isolated binary evolution in our simulations, we have modeled how a binary system is affected after the second SN occurs in the system. It is possible that the first SN could also help tilt the binary system, increasing the total tilt of the BBHs' spins with respect to the angular momentum vector. While this is true, this contribution to the tilt will likely be smaller, as the binary was more massive during the first SN, making it more difficult to tilt the orbital plane. Given that the second SN is already unlikely to tilt the binary systems over \(\pi/2\), the kick the BH experiences by the first SN would be even more unlikely to contribute to such a tilt.
Finally, it is worth noting that in LIGO/Virgo's O3 run, approximately 20% of the signals experienced glitches. The official LVK data release took this effect into account by modeling the glitch in the inference. In the case of GW191109, the glitch affected the Livingston data between 20 and 40 Hz. If these data are entirely removed from the inference, Davis et al. (2022) find that the \(\chi_{\rm eff}\) distribution no longer peaks strongly at negative values. While this information does not imply that a conclusion about the true \(\chi_{\rm eff}\) distribution can be drawn, it does emphasize the uncertainties of the measurement. Additionally, we note that the LVK data analysis pipeline assumes a prior of isotropic spins for all of the GW events (The LIGO Scientific Collaboration et al., 2021), which may affect the posterior \(\chi_{\rm eff}\) distribution. Even if the information on \(\chi_{\rm eff}\) of GW191109 is excluded from the analysis of its origin, its masses and rates still strongly point towards a dynamical origin.
With the LIGO O4 run coming up in the next year, the population of BBHs will increase to hundreds of events, with the potential of providing better constraints on possible formation channels. The observable properties of masses in the pair-instability mass gap, unequal mass ratios, and negative \(\chi_{\rm eff}\), together with inferred properties such as non-zero orbital eccentricity and merger rates, will be strong indicators of formation channels beyond standard isolated binary evolution, such as dynamical origins. Thus, it will be important to place an emphasis on the analysis of these properties in future observations.
We thank Fred Rasio, Salvatore Vitale, and Katerina Chatziioannou for useful comments on an earlier version of the manuscript. G.F. acknowledges support from NASA Grant 80NSSC21K1722. C.K. is grateful for support from the Riedel Family Fellowship. V.K. was partially supported through a CIFAR Senior Fellowship and from Northwestern University, including the Daniel I. Linzer Distinguished University Professorship fund.
## Appendix A Relative Mass Loss Distributions of SSE Simulations
In this appendix, we show the relative mass loss distributions generated from SSE as explained in Section 3. We simulate the evolution of \(10^{5}\) single stars using SSE and plot the mass losses experienced from SN of those stars that have a post-SN mass that is within the error bar range of the GW191109 secondary mass. Figure A1 and Figure A2 show the relative distributions of mass loss given initial star metallicities of 0.001 and 0.01, respectively.
|
2305.15814 | Bhasha-Abhijnaanam: Native-script and romanized Language Identification
for 22 Indic languages | We create publicly available language identification (LID) datasets and
models in all 22 Indian languages listed in the Indian constitution in both
native-script and romanized text. First, we create Bhasha-Abhijnaanam, a
language identification test set for native-script as well as romanized text
which spans all 22 Indic languages. We also train IndicLID, a language
identifier for all the above-mentioned languages in both native and romanized
script. For native-script text, it has better language coverage than existing
LIDs and is competitive or better than other LIDs. IndicLID is the first LID
for romanized text in Indian languages. Two major challenges for romanized text
LID are the lack of training data and low-LID performance when languages are
similar. We provide simple and effective solutions to these problems. In
general, there has been limited work on romanized text in any language, and our
findings are relevant to other languages that need romanized language
identification. Our models are publicly available at
https://ai4bharat.iitm.ac.in/indiclid under open-source licenses. Our training
and test sets are also publicly available at
https://ai4bharat.iitm.ac.in/bhasha-abhijnaanam under open-source licenses. | Yash Madhani, Mitesh M. Khapra, Anoop Kunchukuttan | 2023-05-25T07:53:23Z | http://arxiv.org/abs/2305.15814v3 | # Bhasha-Abhijnaanam: Native-script and Romanized Language Identification for 22 Indic Languages
###### Abstract
We create publicly available language identification (LID) datasets and models in all 22 Indian languages listed in the Indian constitution in both native-script and romanized text. First, we create _Bhasha-Abhijnaanam_, a language identification test set for native-script as well as romanized text which spans all 22 Indic languages. We also train _IndicLID_, a language identifier for all the above-mentioned languages in both native and romanized script. For native-script text, it has better language coverage than existing LIDs and is competitive or better than other LIDs. IndicLID is the first LID for romanized text in Indian languages. Two major challenges for romanized text LID are the lack of training data and low-LID performance when languages are similar. We provide simple and effective solutions to these problems. In general, there has been limited work on romanized text in any language, and our findings are relevant to other languages that need romanized language identification. Our models are publicly available at [https://ai4bharat.iitm.ac.in/indiclid](https://ai4bharat.iitm.ac.in/indiclid) under open-source licenses. Our training and test sets are also publicly available at [https://ai4bharat.iitm.ac.in/bhasha-abhijnaanam](https://ai4bharat.iitm.ac.in/bhasha-abhijnaanam) under open-source licenses.
## 1 Introduction
In this work, we focus on building a language identifier for the 22 languages listed in the Indian constitution. With increasing digitization, there is a push to make NLP technologies like translation, ASR, conversational systems, etc. (Bose, 2022) available as a public good at population scale (Chandorkar, 2022). A good language identifier is required to help build corpora in low-resource languages. For such languages, language identification is far from a solved problem due to noisy web crawls, small existing datasets, and similarity to high-resource languages Caswell et al. (2020).
Existing publicly available LID tools like CLD31, LangID2 Lui and Baldwin (2011), FastText3 Joulin et al. (2016) and NLLB4 NLLB Team et al. (2022) have some shortcomings with respect to Indian languages. They do not cover all the above-mentioned 22 languages. In social media and chats, it is also common to use the roman script for most Indian languages leading to substantial user-generated content in roman script. However, none of the LIDs have any support for the detection of romanized Indian language text (except cld3 support for Latin Hindi). The widespread use of romanization implies that accurate romanized Language Identification models are a critical component in the NLP stack for Indian languages, given that this affects over 735 million internet users KPMG and Google (2017). Therefore, our work on developing accurate and effective romanized Language Identification models has the potential to make a significant impact in the NLP space for Indian languages, particularly in the social media and chat application domains. Hence, we undertake the task of creating a LID for these 22 Indian languages. The main contributions of our work are as follows:
Footnote 1: [https://github.com/google/cld3](https://github.com/google/cld3)
Footnote 2: [https://github.com/saffsd/langid.py](https://github.com/saffsd/langid.py)
Footnote 3: [https://fasttext.cc/docs/en/language-identification.html](https://fasttext.cc/docs/en/language-identification.html)
Footnote 4: [https://github.com/facebookresearch/fairseq/tree/nllb#lid-model](https://github.com/facebookresearch/fairseq/tree/nllb#lid-model)
\(\bullet\) We create _Bhasha-Abhijnaanam_5, a language identification test set for native-script as well as romanized text which spans 22 Indic languages. Previous benchmarks for native script do not cover all these languages NLLB Team et al. (2022); Roark et al. (2020). The Dakshina test set for romanized text covers only 11 languages and there are ambiguous instances in the test set like named entities that cannot be assigned to a particular language Roark et al. (2020).
Footnote 5: The word means language-identification in Sanskrit.
\(\bullet\) We also train _IndicLID_, an LID for all the above-
mentioned languages in both native and romanized script. For native-script training data, we sample sentences from diverse sources and oversample low-resource languages. The IndicLID native-script model has better language coverage than existing LIDs and is competitive with or better than other LIDs, with 98% accuracy and at least 6 times higher throughput.
\(\bullet\) To the best of our knowledge, ours is one of the first large-scale efforts for romanized LID in any language, a task that has not received much attention. A major challenge for romanized text LID is the lack of romanized training data. We show that synthetic romanized training data created via transliteration can help train a reasonably good LID for romanized text. A simple linear classifier does not perform well for romanized text. Hence, we combine a simple but fast text classifier with a slower but more accurate classifier based on a pre-trained language model to achieve a good trade-off between accuracy and speed.
Our findings are relevant to other languages that need LID for romanized text. We require native script data and a transliteration model to create the synthetic romanized data for the target language. This romanized data serves as training data for the romanized LID.
## 2 Bhasha-Abhijnaanam benchmark
We describe the creation of the Bhasha-Abhijnaanam LID benchmark for 22 Indian languages in native and roman script. Table 1 describes the statistics of the _Bhasha-Abhijnaanam_ benchmark. We build upon existing benchmarks to fill in the coverage and quality gaps and cost-efficiently cover all languages.
### Native script test set.
We compile a native script test set comprising 19 Indian languages and 11 scripts from the FLORES-200 devtest (NLLB Team et al., 2022) and Dakshina sentence test set (Roark et al., 2020). We create native text test sets for the remaining three languages (_Bodo, Konkani, Dogri_) and one script (_Manipuri_ in _Meetei Mayek_ script) not covered in these datasets. For these new languages, we first sample English sentences from Wikipedia and ask in-house, professional translators to translate the sentences to the respective languages. This method ensured the quality and accuracy of our test samples, as well as minimizing any potential noise in the data.
### Roman script test set.
We propose a new benchmark test set to evaluate roman-script language identification for 21 Indian languages. Out of these, 11 languages are represented in the Dakshina romanized sentence test set (Roark et al., 2020), which comprises native script sentences from Wikipedia along with their romanization. However, this test set includes short sentences which are just named entities and English loan words which are not useful for romanized text LID evaluation. To address this issue, we manually validate the Dakshina test sets for the languages we are interested in and filter out about 7% of the sentences. Section 2.3 describes the details of the filtering process. To create a benchmark test set for the remaining 10 Indian languages, we sampled sentences from IndicCorp (Doddapaneni et al.,
\begin{table}
\begin{tabular}{l l r r} \hline \hline
**Language** & **Script** & **Native** & **Roman** \\ \hline Assamese & Bengali & 1012 & **512** \\ Bangla & Bengali & 5606 & 4595 \\ Bodo & Devanagari & **1500** & **433** \\ Dogri & Devanagari & **1498** & **512** \\ Gujarati & Gujarati & 5797 & 4785 \\ Hindi & Devanagari & 5617 & 4606 \\ Kannada & Kannada & 5859 & 4848 \\ Kashmiri & Perso-Arabic & 2511 & **450** \\ & Devanagari & 1012 & **450** \\ Konkani & Devanagari & **1500** & **444** \\ Maithili & Devanagari & 2512 & **439** \\ Malayalam & Malayalam & 5628 & 4617 \\ Manipuri & Bengali & 1012 & **442** \\ & Meetei Mayek & **1500** & **442** \\ Marathi & Devanagari & 5611 & 4603 \\ Nepali & Devanagari & 2512 & **423** \\ Oriya & Oriya & 1012 & **512** \\ Punjabi & Gurmukhi & 5776 & 4765 \\ Sanskrit & Devanagari & 2510 & **448** \\ Santali & Ol Chiki & 2512 & 0 \\ Sindhi & Perso-Arabic & 5893 & 4881 \\ Tamil & Tamil & 5779 & 4767 \\ Telugu & Telugu & 5751 & 4741 \\ Urdu & Perso-Arabic & 6883 & 4371 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the Bhasha-Abhijnaanam benchmark. Number of romanized and native-script sentences are reported. The cells in **bold** indicate the datasets newly contributed by this work. Romanized Santali test-set has not been created since Santhali annotators we contacted did not use roman script and spoke Bengali as a second language. NLLB Team et al. (2022) also cite a similar experience.
2022) and asked annotators to write the same in roman script. We did not specify any transliteration guidelines and annotators were free to transliterate in the most natural way they deemed fit. We additionally asked annotators to skip the sentence if they find it invalid (wrong language, offensive, truncated, etc.).
### Romanized Dakshina testset filtering
The Dakshina romanized sentence test set includes short sentences which are just named entities and English loan words which are not useful for romanized text LID evaluation. To address this issue, we manually validate the Dakshina test sets for the languages we are interested in. We first identified potentially problematic sentences from the romanized Dakshina test set by applying two constraints: (i) sentences shorter than 5 words, and (ii) native LID model is less confident about the native language sentence (prediction score less than 0.8). These sentences were then validated by native language annotators. The annotators were asked to read the roman sentences and determine whether they were named entities or sentences where they could not determine the language. Such entries were filtered out. About 7% of the sentences were filtered. Table 2 describes the filtering statistics.
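A minimal sketch of this flagging step is given below. It assumes the native-script LID is a fastText model loaded through the standard `fasttext` Python bindings, and that a sentence is flagged when either constraint fires; the checkpoint name and the exact way the two constraints are combined are illustrative assumptions rather than the released filtering script.

```python
import fasttext

# Hypothetical checkpoint name for the native-script fastText LID used as a filter.
native_lid = fasttext.load_model("indiclid-ftn.bin")

def flag_for_review(native_sentence: str, min_words: int = 5, min_score: float = 0.8) -> bool:
    """Flag a sentence if it is very short or the native-script LID is unsure about it."""
    too_short = len(native_sentence.split()) < min_words
    labels, scores = native_lid.predict(native_sentence.replace("\n", " "))
    low_confidence = float(scores[0]) < min_score
    # Assumed combination of the two constraints; flagged items go to human annotators.
    return too_short or low_confidence
```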
## 3 IndicLID Model
IndicLID is a classifier specifically for Indic languages that can predict 47 classes (24 native-script classes and 21 roman-script classes plus English and Others). We create three classifier variants: a fast linear classifier, a slower classifier finetuned from a pre-trained LM, and an ensemble of the two models which trades off speed v/s accuracy.
### Training dataset creation
**Native-script training data.** We compiled the training data sentences from various sources viz. IndicCorp (Doddapaneni et al., 2022), NLLB (NLLB Team et al., 2022), Wikipedia, Vikaspedia 6 and internal sources. To ensure a diverse and representative training dataset, we sampled 100k sentences per language-script combination in a balanced way across all these sources. We used oversampling for languages with less than 100k sentences. We tokenized and normalized the sentences using IndicNLP library 7Kunchukuttan (2020) with default settings.
Footnote 6: [https://vikaspedia.in](https://vikaspedia.in)
Footnote 7: [https://github.com/anoopkunchukuttan/indic_nlp_library](https://github.com/anoopkunchukuttan/indic_nlp_library)
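The balanced sampling and preprocessing described above can be sketched as follows. This is an illustrative reconstruction rather than the released pipeline: the per-source sentence lists, oversampling with replacement up to 100k sentences, and the use of IndicNLP's default normalizer and trivial tokenizer are assumptions based on the description in the text.

```python
import random
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
from indicnlp.tokenize import indic_tokenize

TARGET = 100_000  # sentences per language-script combination

def sample_balanced(sentences_by_source, target=TARGET, seed=0):
    """Draw a roughly equal share from every source, oversampling if the total falls short."""
    random.seed(seed)
    per_source = target // len(sentences_by_source)
    pool = []
    for sents in sentences_by_source.values():
        pool.extend(random.sample(sents, min(per_source, len(sents))))
    while 0 < len(pool) < target:      # oversample low-resource languages with replacement
        pool.append(random.choice(pool))
    return pool[:target]

def preprocess(sentence, lang="hi"):
    """Normalize and tokenize one sentence with the IndicNLP defaults."""
    normalizer = IndicNormalizerFactory().get_normalizer(lang)
    tokens = indic_tokenize.trivial_tokenize(normalizer.normalize(sentence), lang)
    return " ".join(tokens)
```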
**Romanized training data.** There are hardly any romanized corpora for Indian languages in the public domain8. Hence, we explored the use of transliteration for creating synthetic romanized data. We create romanized training data by transliterating the native script training data into roman script using the multilingual IndicXlit9 transliteration model (Indic-to-En version) (Madhani et al., 2022). The authors have provided results on the transliteration quality of the IndicXlit model. We rely on this analysis to ensure the quality of the generated training data.
Footnote 8: CC-100 has romanized versions for 4 Indian languages, but a manual analysis suggested that it contains a lot of profane content.
Footnote 9: [https://github.com/AI4Bharat/IndicXlit](https://github.com/AI4Bharat/IndicXlit)
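The synthetic-data generation step can be sketched as below. The `transliterate_to_roman` helper is a hypothetical wrapper around the IndicXlit Indic-to-En model (its exact Python API is not reproduced here); the point is simply that each native-script training sentence is paired with a machine-generated romanization carrying the same language label, written out in fastText training format.

```python
def build_romanized_training_file(native_samples, transliterate_to_roman, path="roman_train.txt"):
    """Write fastText-style training lines for the romanized LID.

    native_samples: iterable of (language_code, native_script_sentence) pairs.
    transliterate_to_roman: hypothetical callable wrapping the IndicXlit Indic-to-En
    model, mapping a native-script sentence to its romanization.
    """
    with open(path, "w", encoding="utf-8") as f:
        for lang, sentence in native_samples:
            roman = transliterate_to_roman(sentence, lang)
            # One example per line: "__label__<lang>_rom <romanized sentence>"
            f.write(f"__label__{lang}_rom {roman}\n")
```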
### Linear classifier
Linear classifiers using character n-gram features are widely used for LIDs (Jauhiainen et al., 2021). We use FastText (Joulin et al., 2016) to train our fast, linear classifier. It is a lightweight and efficient linear classifier that is well-suited for handling large-scale text data. It utilizes character n-gram features, which enables it to exploit subword information. This makes it particularly useful for dealing with rare words and allows it to discriminate between similar languages having sim
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
**Language** & **Total samples** & **Valid samples** & **\%filtered** \\ \hline Bengali & 5001 & 4600 & 8.0183 \\ Gujarati & 5001 & 4789 & 4.2391 \\ Hindi & 5001 & 4616 & 7.6984 \\ Kannada & 5001 & 4849 & 3.0393 \\ Malayalam & 5001 & 4627 & 7.4785 \\ Marathi & 5001 & 4617 & 7.6784 \\ Punjabi & 5001 & 4782 & 4.3791 \\ Sindhi & 5001 & 4889 & 2.2395 \\ Tamil & 5001 & 4802 & 3.9792 \\ Telugu & 5001 & 4754 & 4.9390 \\ Urdu & 4881 & 4395 & 9.9569 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of Dakshina roman filtered test set
Figure 1: IndicLID Classifier Workflow
ilar spellings. We trained separate classifiers for native script (**IndicLID-FTN**) and roman script (**IndicLID-FTR**). We chose 8-dimension word-vector models after experimentation as they maintain small model sizes without losing model accuracy (refer Appendix A for results).
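For concreteness, a classifier of this kind can be trained in a few lines with the official fastText Python bindings, as sketched below. The 8-dimensional vectors follow the choice described above; the character n-gram range, learning rate, and file names are illustrative assumptions rather than the exact settings used for IndicLID (see Appendix A for those).

```python
import fasttext

# train.txt holds one example per line in fastText format: "__label__<lang> <sentence>"
model = fasttext.train_supervised(
    input="train.txt",
    dim=8,       # 8-dimensional vectors, as chosen for IndicLID-FTN / IndicLID-FTR
    minn=2,      # character n-gram range (illustrative values, not the paper's settings)
    maxn=5,
    epoch=25,
    lr=0.5,
)

labels, scores = model.predict("aap kaise hain", k=1)
print(labels[0], float(scores[0]))
model.save_model("indiclid-ftr.bin")
```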
### Pretrained LM-based classifier
For romanized text, we observed that linear classifiers do not perform very well. Hence, we also experimented with models having larger capacity. Particularly, we finetuned a pretrained LM on the romanized training dataset. We evaluated the following LMs: XLM-R (Conneau et al., 2020), IndicBERT-v2 (Doddapaneni et al., 2022) and MuRIL (Khanuja et al., 2021). The last two LMs are specifically trained for Indian languages and MuRIL also incorporates synthetic romanized data in pre-training. Hyperparameters for finetuning are described in Appendix B. We used IndicBERT-based classifier as the LM-based classifier (henceforth referred to as **IndicLID-BERT**) since it was amongst the best-performing romanized text classifiers and had maximum language coverage.
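Finetuning a pretrained LM for this task is standard sequence classification; a hedged sketch using the Hugging Face `transformers` API is given below. The checkpoint name, the 47-way label space, the toy dataset, and the training arguments are illustrative assumptions and do not reproduce the hyperparameters of Appendix B.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "ai4bharat/IndicBERTv2-MLM-only"   # assumed IndicBERT-v2 checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=47 mirrors the full IndicLID label set; the romanized classifier may use fewer.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=47)

# Toy placeholder data; in practice this is the synthetic romanized training set.
train_data = Dataset.from_dict({"text": ["aap kaise hain", "neenu hegiddiya"], "label": [0, 1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="indiclid-bert", num_train_epochs=3,
                           per_device_train_batch_size=64, learning_rate=2e-5),
    train_dataset=train_data.map(tokenize, batched=True),
    tokenizer=tokenizer,
)
trainer.train()
```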
### Final Ensemble classifier
Our final IndicLID classifier is a pipeline of multiple classifiers. Figure 1 shows the overall workflow of the IndicLID classifier. The pipeline works as described here: (1) Depending on the amount of roman script in the input text, we invoke either the native-text or romanized linear classifier. IndicLID-FTR is invoked for text containing \(>\)50% roman characters. (2) For roman text, if IndicLID-FTR is not confident about its prediction, we redirect the request to IndicLID-BERT. We resort to this two-stage approach for romanized input to achieve a good trade-off between classifier accuracy and inference speed. The fast IndicLID-FTR's prediction is used if the model is confident about its prediction (probability of predicted class \(>0.6\)), else the slower but more accurate IndicLID-BERT is invoked. This threshold provides a good trade-off (see Appendix C for more details).
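The routing logic of this pipeline is simple enough to state directly in code. The sketch below follows the two rules described above (roman-character fraction above 50%, fastText confidence threshold of 0.6); the model wrappers are assumed to expose a common `predict` method returning a (label, probability) pair, which is an illustrative interface rather than the released implementation.

```python
import re

LATIN = re.compile(r"[A-Za-z]")

def indiclid_predict(text, ftn, ftr, bert, threshold=0.6):
    """Two-stage IndicLID routing: native vs. roman linear model, with a BERT fallback."""
    chars = [c for c in text if not c.isspace()]
    roman_fraction = sum(1 for c in chars if LATIN.match(c)) / max(len(chars), 1)

    if roman_fraction <= 0.5:          # mostly native script: fast native-script classifier
        return ftn.predict(text)

    label, score = ftr.predict(text)   # fast roman-script linear classifier
    if score > threshold:              # confident enough: keep the fast prediction
        return label, score
    return bert.predict(text)          # otherwise defer to the slower IndicLID-BERT
```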
## 4 Results and Discussion
We discuss the performance of various models on the benchmark and analyze the results. To prevent any overlap between the test/valid and train sets, we excluded the FLORES-200 test set (NLLB Team et al., 2022) and the Dakshina test set (Roark et al., 2020) while sampling native train samples from various sources. Additionally, we removed the training samples from the benchmark samples when collecting sentences for the benchmark test set. We also made sure that there was no overlap between the test and valid sets. To create the romanized training set, we simply transliterated the native training set. As the Dakshina test set (Roark et al., 2020) provides parallel sentences for the native and roman test sets, there was no overlap between the roman train and test sets.
### Native script LID
We compare IndicLID-FTN with the NLLB model (NLLB Team et al., 2022) and the CLD3 model. As we can see in Table 3, the LID performance of IndicLID-FTN is comparable to or better than that of the other models. Our model is 10 times faster and 4 times smaller than the NLLB model. The model's footprint can be further reduced by model quantization (Joulin et al., 2016), which we leave for future work.
### Roman script LID
Table 4 presents the results of different model variants on the romanized test set (see Appendix D for language-wise results). IndicLID-BERT is significantly better than IndicLID-FTR, but at a substantial cost in throughput. The ensemble model (IndicLID) maintains the same LID performance as IndicLID-BERT with a 3x higher throughput. Further speedups in the model throughput can be achieved by creating distilled versions, which we leave for future work.
**LID confusion analysis** The confusion matrix for IndicLID is shown in Figure 2. We see that major confusions are between similar languages. Some
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Model** & **P** & **R** & **F1** & **Acc** & **Throughput** & **Size** \\ \hline IndicLID-FTN-8-dim (24) & 98.11 & 98.56 & 98.31 & 98.55 & 30,303 & 31SM \\ \hline _Comparing our IndicLID-FTN model with CLD3 model (12)_ & & & & & \\ IndicLID-FTN-4-dim & 99.43 & 98.40 & 98.89 & 98.33 & 47,619 & 20SM \\ IndicLID-FTN-8-dim & 99.73 & 98.67 & 99.18 & 98.62 & 33,333 & 31SM \\ CLD3 & 98.52 & 98.14 & 98.31 & 98.03 & 4,861 & - \\ \hline _Comparing our IndicLID-FTN model with NLLB model (12)_ & & & & & \\ IndicLID-FTN-4-dim & 97.78 & 98.10 & 97.92 & 98.19 & 41,666 & 20SM \\ IndicLID-FTN-8-dim & 98.13 & 98.99 & 98.34 & 98.56 & 29,411 & 31SM \\ NLLB & 99.28 & 98.65 & 98.95 & 98.78 & 4,970 & 1.1G \\ \hline \hline \end{tabular}
\end{table}
Table 3: Benchmarking on the Bhasha-Abhijnaanam native-script testset. For fair comparison with NLLB and CLD3, we restrict the comparison to languages that are common with IndicLID-FTN (count of common languages is indicated in brackets). Throughput is number of sentence/second.
examples of such language clusters that can be observed are (1) Hindi and very close languages like Maithili, Urdu and Punjabi, (2) Konkani and Marathi, (3) Sindhi and Kashmiri. Improving romanized LID between very similar languages is thus an important direction for improvement.
**Impact of synthetic training data** To understand the impact of synthetic training data, we generate a machine-transliterated version of the romanized test set using IndicXlit. We compare the LID accuracy on the original and synthetically generated test sets. Table 5 shows that the results on the synthetic test set are significantly better than the original test set (approaching accuracy levels in the 90s). The data characteristics of the synthetic test set are much closer to the training data than the original test set. Closing the training-test distribution gap (by representing original romanized data in the training data and/or improved generation of synthetic romanized data to reflect true data distribution) is critical to improving model performance.
The confusion matrix gives further insights into the impact of synthetic training data. Hindi is confused with languages like Nepali, Sanskrit, Marathi and Konkani, which use the same native script as Hindi (Devanagari). Since a multilingual transliteration model with significant Hindi data was used to create the synthetic romanized training data, the synthetic romanized forms of these languages may be more similar to Hindi than would be the case with original romanized data.
**Impact of input length** Figure 3 plots the LID accuracy for various input length buckets. The LID is most confused for short inputs (<10 words) after which the performance is relatively stable.
## 5 Conclusion
We introduce an LID benchmark and models for native-script and romanized text in 22 Indian languages. These tools will serve as a basis for building NLP resources for Indian languages, particularly extremely low-resource ones that are "left-behind" in the NLP world today Joshi et al. (2020). Our work takes first steps towards LID of romanized text, and our analysis reveals directions for future work.
## Acknowledgements
We would like to thank the Ministry of Electronics and Information Technology of the Government of India for their generous grant through the Digital India Bhashini project. We also thank the Centre for Development of Advanced Computing for providing compute time on the Param Siddhi Supercomputer. We also thank Nilekani Philanthropies for their generous grant towards building datasets, models, tools and resources for Indic languages. We also thank Microsoft for their grant to support
\begin{table}
\begin{tabular}{l|r r r r r r} \hline \hline
**Model** & **P** & **R** & **F1** & **Acc** & **Throughput** & **Size** \\ \hline IndicLID-FTR (dim-8) & 63.12 & 78.01 & 63.28 & 71.49 & 37.07 & 357.8 M \\ IndicLID-BERT (after-long-) & 72.20 & 84.01 & 74.52 & 80.04 & 3 & 1.1 GB \\ IndicLID (threshold-0.6) & 72.74 & 84.50 & 74.72 & 80.40 & 10 & 1.4 GB \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of IndicLID on the Bhasha-Abhijnaanam roman script test set. Throughput is number of sentences/second.
Figure 3: Effect of input length on romanized testset
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline
**Testset** & **P** & **R** & **F1** & **Acc** \\ \hline Original & 72.74 & 84.50 & 74.72 & 80.40 \\ Synthetic & 90.79 & 97.24 & 93.43 & 95.96 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of results on synthetic vs. original romanized test sets for the IndicLID model
Figure 2: Confusion matrix (IndicLID, roman testset)
research on Indic languages. We would like to thank Jay Gala and Ishvinder Sethi for their help in coordinating the annotation work. Most importantly we would like to thank all the annotators who helped create the Bhasha-Abhijnaanam benchmark.
## Limitations
The benchmark for language identification for the most part contains clean sentences (grammatically correct, single script, etc.). Data from the real world might be noisy (ungrammatical, mixed scripts, code-mixed, invalid characters, etc.). A better representative benchmark might be useful for such use cases. However, the use cases captured by this benchmark should suffice for the collection of clean monolingual corpora. This also represents a first step for many languages where no LID benchmark exists.
The use of synthetic training data seems to create a gap in performance due to divergence in train/test data distributions. Acquisition of original native romanized text and methods to generate better romanized text are needed.
Note that the romanized LID model does not support Dogri since the IndicXlit transliteration model does not support Dogri. However, since Dogri is written in the Devanagari script, using the transliterator for Hindi (which uses the same script) might be a good approximation for generating synthetic training data. We will explore this in the future.
This work is limited to the 22 languages listed in the 8\({}^{th}\) schedule of the Indian constitution. Further work is needed to extend the benchmark to many more widely used languages in India (which has about 30 languages with more than a million speakers).
## Ethics Statement
For the human annotations on the dataset, the language experts are native speakers of the languages and from the Indian subcontinent. They were paid a competitive monthly salary to help with the task. The salary was determined based on the skill set and experience of the expert and adhered to the norms of the government of our country. The dataset has no harmful content. The annotators were made aware of the fact that the annotations would be released publicly and the annotations contain no private information. The proposed benchmark builds upon existing datasets. These datasets and related works have been cited.
The annotations are collected on a publicly available dataset and will be released publicly for future use. The IndicCorp dataset which we annotated has already been checked for offensive content.
All the datasets created as part of this work will be released under a CC-0 license10 and all the code and models will be released under an MIT license.11
Footnote 10: [https://creativecommons.org/publicdomain/zero/1.0](https://creativecommons.org/publicdomain/zero/1.0)
Footnote 11: [https://opensource.org/licenses/MIT](https://opensource.org/licenses/MIT)
|
2302.00500 | Serious Games and AI: Challenges and Opportunities for Computational
Social Science | The video game industry plays an essential role in the entertainment sphere
of our society. However, from Monopoly to Flight Simulators, serious games have
also been appealing tools for learning a new language, conveying values, or
training skills. Furthermore, the resurgence of Artificial Intelligence (AI)
and data science in the last decade has created a unique opportunity since the
amount of data collected through a game is immense, as is the amount of data
needed to feed such AI algorithms. This paper aims to identify relevant
research lines using Serious Games as a novel research tool, especially in
Computational Social Sciences. To contextualize, we also conduct a
(non-systematic) literature review of this field. We conclude that the synergy
between games and data can foster the use of AI for good and open up new
strategies to empower humanity and support social research with novel
computational tools. We also discuss the challenges and new opportunities that
arise from aspiring to such lofty goals. | Jaime Pérez, Mario Castro, Gregorio López | 2023-02-01T15:15:04Z | http://arxiv.org/abs/2302.00500v1 | # Serious Games and AI: Challenges and Opportunities for Computational Social Science
###### Abstract
The video game industry plays an essential role in the entertainment sphere of our society. However, from Monopoly to Flight Simulators, serious games have also been appealing tools for learning a new language, conveying values, or training skills. Furthermore, the resurgence of Artificial Intelligence (AI) and data science in the last decade has created a unique opportunity since the amount of data collected through a game is immense, as is the amount of data needed to feed such AI algorithms. This paper aims to identify relevant research lines using Serious Games as a novel research tool, especially in Computational Social Sciences. To contextualize, we also conduct a (non-systematic) literature review of this field. We conclude that the synergy between games and data can foster the use of AI for good and open up new strategies to empower humanity and support social research with novel computational tools. We also discuss the challenges and new opportunities that arise from aspiring to such lofty goals.
Serious Games, Artificial Intelligence, Computational Social Science, Novel Research Tools, Human behaviour
## 1 Introduction
Games have existed in all human societies and many other animal species. While some of the oldest board games, such as Go, Backgammon, or Checkers, are still played today, video games have become one of the most relevant forms of entertainment in our society, with budgets and profits far exceeding those of huge related industries such as cinema [1]. However, since their origin, games have had purposes and benefits beyond entertainment, such as teaching social norms, strengthening social bonds, or developing imagination and planning skills.
The rise of video games has had a remarkable social impact, transforming mentalities and helping to establish new patterns of social interaction [2]. A prominent example of this trend is the gamification that our lives have experienced [3], from the workplace (e.g., _Habilica_, _LifeUp_) to romantic relationships (e.g., _Tinder_, _Grindr_) or education (e.g., _Kahoot!_, _Duolingo_). One of the main reasons games have such a tremendous impact on players is due to interactivity, an almost unique feature over other cultural or artistic elements. This attribute encourages higher motivation, engagement, and empathy levels than in other media. It is noteworthy that at the same time as the video game industry is rising, the board game industry continues to grow as well [4]. We can draw a clear conclusion from all these facts: our society is highly gamified; we love to play games, and they have enormous potential to transform how we see the world in a much more profound way than we are often aware of.
Some games --referred to as _serious_[5]-- are explicitly designed for a primary purpose beyond pure entertainment (e.g., learning new skills, conveying values, awareness-raising). However, being entertaining is part of their attractiveness. The first serious games were released in a wide range of formats, from sports to board games (e.g., _Monopoly_, _Suffragetto_), so this concept precedes the digital era.
The current re-emergence of serious games has coincided with the eruption of Artificial Intelligence (AI), one of the most impressive game-changers in the history of humanity. Nowadays, and increasingly so, almost every entertainment element and digital product are at the service of data analysis
and AI algorithms, and games are no exception, especially given that the amount of data available via video games far exceeds that of any other artistic element. AI is already transforming society through large digital platforms, social networks, and recommender systems. The most widespread use of these tools is in marketing, meticulously analyzing our patterns and tastes to sell us products and capture our attention as much as possible.
AI has demonstrated its potential to analyze and improve our understanding of the dynamics of our societies and social interactions, as well as individual and collective behaviors. For this reason, we firmly believe that the synergy between serious games and AI offers an exceptional window of opportunity for large-scale, non-invasive, and inexpensive social studies, leveraging their disinhibition and entertainment effects, along with interactivity, to collect large amounts of meaningful data. Moreover, games' "casual" and playful nature can help break down conventional communication boundaries, encouraging participants to interact openly and discuss topics that might otherwise be complicated or too sensitive. Figure 1 provides a visual overview of this paper's content to guide and facilitate reading: it covers applications of serious games (Section II), the role of AI in them (Section III), and the new lines of work and challenges that open up when employing them as research tools (Section IV), especially in the computational social sciences [6]. Finally, Section V presents the main conclusions drawn from this research.
## II Applications of serious games
The upsurge that serious games have been experiencing in recent years may lead us to think this is a new phenomenon. However, the origin of serious games dates back to the 1970s. Clark C. Abt is credited for coining the term _serious games_, defining them as "_games with an explicit and carefully thought-out educational purpose that are not intended to be played primarily for amusement_". Clark C. Abt studied the potential of games as a vehicle for political, educational, or marketing ideas. Another of the leading figures in the history of serious games is Ian Bogost, the author of seminal books on the theory behind them, such as "_Persuasive Games: The expressive power of video games_" [7]. His research has leveraged video games for political, educational, and business use in the 21st century.
Even though both concepts mirror the same social phenomenon, it is relevant to highlight the distinction between gamification and serious games. Gamification consists of using and integrating game elements into non-game concepts, while serious games refer to the design of entire games for non-playful primary purposes. Although both are concepts from the last century, they have resurfaced in the academic and commercial arenas in recent years.
Among the first serious video games, we find examples where they are employed to convey particular values (e.g., _Captain Bible in the Dome of Darkness_, _The Oregon Trail_, _Mobility_), disease awareness (e.g., _Captain Novolin_), or military training (e.g., _Bradley Trainer_). Nevertheless, the line between "normal" and "serious" games is quite blurred regarding serious games used to convey specific beliefs or ideologies. Like any artistic or intellectual creation, video games always carry an implicit political and philosophical perspective. For example, popular video games such as _Papers Please_ or _This War of Mine_ convey strong political messages that raise fundamental questions. Yet, they were not developed under the idea of being explicit "serious games". On the other hand, video games such as _The Sims_ or _SimCity_ are very politically charged. Still, they are not usually perceived as
Figure 1: Graphical overview of the paper. The blue boxes are the applications found for serious games and AI applied to them. The red box indicates the challenges faced by this union for its use as a research tool. And the green box indicates promising lines of work in this direction.
such since they represent a situation closer to our day-to-day life.
Focusing on serious games that consider themselves as such and have been designed for that purpose, we find many fields where they have demonstrated their usefulness on numerous occasions:
#### Education
In this section, we focus on serious games designed for the player to learn a series of concepts of a specific subject. To do so, the players must demonstrate their knowledge during the game and score their performances. Education has been one of the main focuses of action for serious games, based on the principle that learning while having fun is possible and efficient. This field has been explored so extensively that success and failure factors have even been analyzed in depth [8][9]. Prominent examples of building STEM skills might include _Garfield's Count Me In_[10], _Minecraft: Education Edition_[11], the _Kahoot DragonBox_ maths apps [12] and the _LightBot_ coding apps [13]. Serious games for educational purposes have also become popular in higher medical education [14], although some authors question their usefulness at such high educational levels, possibly serving as complements to more traditional learning methods [15][16].
#### Training
Closely related to education, this category refers to games designed for players to learn and practice specific skills that will enable them to perform those actions in the real world with improved safety, confidence, and knowledge. This approach is widely used in companies where human failure is critical or costly. One of the best-known examples is flight simulators, such as _Microsoft Flight Simulator_[17], where aspiring pilots must spend hours practicing before flying an actual commercial aircraft. There are also notable examples of training healthcare professionals [18], cybersecurity trainees [19][20], and law enforcement agencies or military forces [21][22]. Another widespread use is training to manage complex business situations or the administration of teams and resources, used both in actual private companies [23][24] and universities [25][26].
#### Awareness
Thanks to the almost unique characteristic of interactivity, games evoke deep levels of empathy, making them an ideal vehicle to convey an awareness of relevant social issues. A classic example is _Darfur is Dying_[27], which sought to tell the story of the humanitarian crisis in the Darfur region of South Sudan. However, we can find examples on a wide range of topics, such as drug consumption and trafficking [28][29], cyberbullying [30], gender equality [31], misinformation [32][33], climate change [34], and environmental sustainability [35][36][37][38].
#### Health Treatments
This category is framed in healthcare but focuses more on patients than professionals. Well-known examples might be the _Wii Fit_ and _Brain Training_ games, which aim to have fun and stay fit (physically and mentally) simultaneously. Other notable examples can be found in the field of mental health therapy [39][40], increasing self-efficacy and physical activity in people with chronic diseases [41][42], helping the learning process and support of children with autism [43][44], palliative care and memory training for the elderly and/or people with dementia [45][46], and guidance and motivation in rehabilitation processes [47][48][49][50]. Notably, in 2020 the US Food and Drug Administration approved the first video game-based treatment, _EndeavorRx_, targeting children between the ages of eight and twelve with certain types of Attention Deficit Hyperactivity Disorder (ADHD) [51].
#### Recruitment
If we combine games' interactivity with players' ability to make decisions in a well-designed environment, we can infer some behaviors or aspects of the players' abilities with reasonable confidence. For this reason, serious games have also been used to optimize the recruitment process in private companies [52][53] and even in military forces [54]. In these games, players are presented with complex situations where they must make decisions and act under certain constraints or pressures. A recent notable example is the _CodinGame_1 platform, where users practice their programming skills while playing, and many tech companies recruit profiles they find interesting. Another great example is the _GT Academy_2, a competition in which the best players of a car racing video game have the opportunity to become professional drivers.
Footnote 1: CodinGame [https://www.codingame.com](https://www.codingame.com)
Footnote 2: GT Academy [https://www.gran-turismo.com/es/academy/](https://www.gran-turismo.com/es/academy/)
#### Marketing & Propaganda
When the game is developed primarily for marketing purposes, it is often known as an "advergame". This category of games aims to convey ideas and create desires in an unintrusive and easily customizable way. It should not be confused with games that introduce advertising during gameplay for economic profit. The principal medium for these advergames is smartphones due to their proliferation, ease of development, and everyday use among young people.
Major brands such as _Volkswagen_, _Magnum_, _Chupa Chups_ or _M&Ms_ have developed advergames. Concerning the Recruitment category, in some cases, companies seek to present and profile themselves through these games to attract new employees and trainees or discover talent.
Likewise, there have also been attempts to use video games as a tool to disseminate electoral campaigns, such as the video game _Corbyn Run_[55], or to encourage citizen participation in public decisions [56][57].
### Science & Human-based Computation
This category encompasses games to advance scientific knowledge in some way. One of the most common approaches is employing human players to perform seemingly trivial tasks, either too costly, too complex, or unfeasible with finite computational resources. These tasks may include labelling data, transcribing text, using common sense, or activities based on the human experience.
One of the first examples of this category was "_The ESP Game_" [58], in which players, grouped in pairs, had to guess the photo labels their partner had come up with. Google's reCAPTCHA3 is a recent example that has followed this approach of using human players to label images while identifying legitimate users for accessing online resources. Another successful example was "_EteRNA_" [59], where players had to design RNA sequences that fold into a particular form. The solutions were evaluated to improve computer-based RNA folding prediction models. Other prominent examples might be "_Foldit_" [60] to predict protein structures, "_Eyewire_" [61] to map retinal neurons, "_MalariaSpot_" [62] to help diagnose malaria cases, "_Phylo_" [63] to optimize alignments of nucleotide sequences, or "_Quantum Moves_" [64] to improve how atoms move in a quantum computer.
Footnote 3: reCAPTCHA [https://www.google.com/recaptcha/about/](https://www.google.com/recaptcha/about/)
## III Role of AI and Data Science in Serious Games
Games have long been a test bed for AI, as they provide a controlled environment with simple rules in which algorithms can learn sophisticated strategies. However, in recent years data science and AI have found a different use for games as sources of vast amounts of player data, from which relevant information about the human players can be extracted that may be useful both inside and outside the game itself.
Nevertheless, serious games are a particular branch of the gaming industry, so the AI and analysis techniques and their purposes differ noticeably. The significant heterogeneity in the goals of serious games also implies significant technical differences among them. Despite this heterogeneity, we can discern main branches encompassing all major applications of AI and analytics in serious games:
### Assessment
Game-based assessment is a fruitful field in serious games [65], primarily used in education, training, and recruitment. Players are scored based on their knowledge or skills in a particular subject. Pellegrino _et al._[66] stipulate three primary purposes of assessment: (i) to assist learning (formative assessment), (ii) to evaluate the player's capabilities, and (iii) to evaluate programs. In general, collecting, analyzing, and extracting information through educational serious games is known as Game Learning Analytics [67].
The main difference with traditional evaluation methods or test gamification is that game-based assessment also uses in-game and interaction data (e.g., response times) to evaluate the player. Numerous authors have demonstrated the utility of using additional in-game data to evaluate students [68][69][70] or to predict learning results [71][72]. It has also been successfully tested in recruitment processes [73]. Although nowadays, they are more of a complement to the traditional exam-based assessment.
The techniques used are very diverse, from simple descriptive statistics and correlations to supervised machine learning algorithms (e.g., linear regression, decision trees, Naive Bayes, Neural Networks) [74][75]. More rarely, some papers use knowledge inference with Bayesian networks [76][77], which explicitly allows the application of psychological or mental state models, but a flawed model will significantly degrade the results.
This branch of AI applications in serious games is one of the most researched and developed, thanks to the technology push changing how education is delivered. However, much work still needs to be done, especially in demonstrating that they can be better than traditional approaches [78].
### Game Design & Validation
Game design is planning the content, rules, and mechanics of a game to create valuable interactive experiences. The large number of artistic and technical factors involved in this process make any analytical information about the players extremely valuable. Game validation employs data and evidence to verify and calibrate the game tasks and their difficulty. In the case of serious games, in addition to maintaining engagement, we also want to ensure that the game meets its primary objective (e.g., to train players in a particular skill, increase awareness of an issue, etc.).
Data-driven serious game design has flourished in academia in recent years, where we can find successful examples of the use of analytical techniques to design, improve, personalize, and validate these games [79][80][70][81][82]. This category is also closely related to the previous one (Assessment), as it is almost essential to use data-driven validation during the game development stage to calibrate the players' evaluation [83][84]. Such analytics can go a step further to adapt in real-time the difficulty of the game [85][86] and even detect when the player is frustrated [87].
In this category, due to the particular aspects of design and validation of each game, the most commonly used techniques are descriptive statistics and visualizations [88][89][90], Randomized Controlled Trials (to test the usefulness of the intervention) [91][80] and unsupervised machine learning algorithms (to find similar types of players and common patterns in the game) [80].
Using these analytical techniques enables creators and researchers to ensure that their games are entertaining, engaging, and well-designed to fulfill their objectives.
### Player Modeling & Profiling
Player modeling is the creation of computational models to detect, predict and characterize the human player attributes that manifest while playing a game [92]. These models can
be any mathematical representation, rule set, or probability set that maps parameters to observable variables and are built on dynamic information obtained during game-player interaction. On the other hand, player profiling usually refers to categorizing players based on static information that does not alter during gameplay (e.g., personality, cultural background, gender, age). Despite their dissimilarities, these two concepts can complement each other, contributing to more reliable player models.
The main objective of studying players is to understand their cognitive, affective, and behavioral patterns. Recent advances in AI have demonstrated an impressive ability to address the same goals that player modeling sets out to achieve, although complex models currently suffer from a significant lack of interpretability. AI is therefore a good fit for player modeling only when explainability is not a hard constraint.
Hooshyar _et al._[93] conducted a systematic literature review that profoundly analyzes the computational and data-driven techniques used for player modeling between 2008 and 2016. As this is such a broad and promising field, the variety of algorithms used is immense: descriptive statistics and correlations, path/network analysis, supervised learning (e.g., Neural Networks, Linear Regression, Hidden Markov Models, Decision Trees), unsupervised learning (e.g., k-means, Linear Discriminant Analysis, Self-Organizing Map), probabilistic algorithms (e.g., Bayesian / Markov Networks), evolutionary methods (e.g., Genetic algorithms), reinforcement learning methods (e.g., Multi-armed bandits), etc. Most of the computational methods used are model-free, meaning they do not impose strict assumptions on the model. However, there are also some model-based approaches (e.g., Bayesian hierarchical models) [94][95] that yield more interpretable and explicit models (e.g., psychological or cognitive) than model-free methods. For instance, these models can infer the player's hidden parameters or mental states.
Player modeling can be helpful both inside and outside the game itself. The most straightforward goal is to improve the game design, tailoring the content to increase engagement and enhance the gaming or learning experience [96]. Although outside of serious games, we find some prominent examples, such as _Left 4 Dead_[97], where an AI tracks player behavior and adapts future waves of enemies to maintain rhythm and tension. Perhaps the most famous example is the video game _Silent Hill Shattered Memories_[98], which uses a psychological approach where an AI system tries to manipulate players' emotions using the _Five Factor Model_ of personality [99]. Outside the game itself, the most common use of player modeling in the gaming industry is for personalized marketing campaigns, since the commercial sector is very interested in understanding customer behaviors and preferences. In these cases, the games are often presented as free to play in exchange for an intrusion into personal privacy [100]. Besides the "advergames" discussed in the section Marketing & Propaganda, a famous example outside serious games is _Farmville_[101], which monitored the players' behavior to adapt _Amazon_ marketing campaigns to them. This business model is particularly hazardous for younger users, its main target.
In academia, especially in psychology, experiments have been conducted using games (serious and non-serious) for research, but primarily focusing on analyzing how the player's personality is projected in the gameplay patterns [102][103][104][105][106]. However, studying psychological characteristics or phenomenology using serious games seems an up-and-coming field, especially if we introduce AI techniques into the equation.
## IV Challenges and new horizons
In the previous sections, we have discussed the main applications of serious games and the current trends in their synergies with data science and AI. In this section, we take up the argument outlined in the introduction about the great potential of serious games together with AI to serve as research tools, particularly in computational social sciences [6], examining the most critical challenges and promising new lines of work to meet this objective.
As argued in the Introduction section, games allow research to be entertaining, provide high levels of empathy, and have a disinhibition effect that is highly sought after in social investigations. Games can evoke dynamic and complex emotions in players, the manifestations of which are difficult to capture with the traditional approaches of empirical psychology, affective computing, or cognitive modeling research. This is primarily due to their ability to introduce the player to a continuous mode of interaction, which could generate complex cognitive, emotional, and behavioral reactions [92]. Therefore, the use of serious games as research tools may contribute to the advancement of human-computer interaction and the progress of our knowledge of human experiences.
We can already find some splendid examples of the use of games as large-scale social research tools, such as _The Moral Machine Experiment_[107], which uses a gamified environment to explore the moral dilemmas surrounding autonomous cars. To do so, they use the framework of the classic trolley problem and study participants' responses to variations in different parameters (e.g., number of people who would die, age, gender, etc.) and the cross-cultural differences in this decision-making [108]. We can also find some noteworthy examples that use serious games to explore collaborative and trusting behaviors [109][110], understand preferences for charity donations4, or even fight cybercrime [111].
Footnote 4: MyGoodness! [https://www.my-goodness.net/](https://www.my-goodness.net/)
On the other hand, the latest advances in AI allow us to analyze vast amounts of data and find patterns or behaviors that would be very difficult to observe with traditional analytical methods. So far, the main application given to large AI models that study our interactions through social networks and personal data is for marketing purposes and generating
monetary value [112]. This practice has been done almost since the beginning of social networks, without considering the negative social consequences it could have, particularly for children and adolescents [113][114]. With this paper, we also aim to contribute humbly to the "AI for Good"5,6 movements. We are at a critical social, cultural, and economic moment. We must start to consider the uses of AI that can benefit society and each individual in particular. We firmly believe that AI has the potential to help us live better and also to know ourselves better. Furthermore, to achieve great goals that improve our society, it is essential to unite forces between different branches of science (e.g., sociology, psychology, engineering, computer science, AI, etc.), and we believe that games are an excellent vehicle for this purpose.
Footnote 5: AI for Good Global Summit [https://airforgood.itu.int/](https://airforgood.itu.int/)
Footnote 6: AI for Good [https://ai4good.org/](https://ai4good.org/)
However, in order for serious games to be able to meet these major goals, they must face some critical **challenges**:
* _Game design_: Whether a game can serve as a valuable research tool depends strongly on whether it has good design and playability. Designing a game is a complex process involving many artistic and technical aspects that can not be wholly rationalized from a scientific standpoint.
* _Validation and generalizability_: One of the most complicated aspects of using serious games as a means of research is demonstrating that their results are as valid as traditional methods. Although we already have numerous examples in some branches, such as game-based assessment or the reflection of player morality into in-game moral dilemmas [115][116], there is still a long way to go in this aspect. This is also because each game (and its purpose) is different from the others and therefore requires individual validation in most cases.
* _Data scarcity_: In recent years, it has become clear that to take full advantage of AI, we need large amounts of data to feed it. Apart from a few exceptional cases [107], academic experiments with serious games suffer from small, biased, and heterogeneous datasets. If we aspire to use them as social research tools, we must find ways to recruit more participants, make the best use of the available data, or establish appropriate methods for sharing sensitive data.
* _Explainability_: Many of today's AI tools can be highly complex, if not completely opaque (so-called black-box models). The general trend in computer science of focusing more on prediction than on explanation also reinforces this. However, aspiring to use these tools to study human and social behavior implies a deep understanding of the outcomes that AI provides. While considerable progress has been made in explainable AI techniques, many hurdles still exist [117].
* _Ethical considerations_: When dealing with personal data (whether anonymized or not) and AI, we must seek unequivocal ethical standards. The potential benefits must outweigh the risks, as the participants' safety and well-being must be the top priority, especially when dealing with data from children or people at risk of exclusion. Achieving these standards is genuinely complex because computer scientists and social scientists tend to have different approaches to research ethics [118].
Despite the challenges mentioned above, we can also find promising **new horizons** and future lines of work regarding the interplay between serious games and AI:
* _Synthetic data_: The AI field has extensive experience in developing agents that aim to win a game [119]. However, in recent years, we are also experiencing the emergence of novel synthetic data generation techniques capable of modeling or mimicking human behavior in some aspects [120], and impressive new data augmentation techniques such as Generative Adversarial Networks [121]. Concerning the challenge of data scarcity, this is a promising line of work in which we could make the most of the limited data available and build models that help us better understand players' motivations in decision-making.
* _Data sharing_: The field of computational social science has faced many difficulties in finding and sharing open data, especially from private companies [122]. However, the field of serious games is in a much more advantageous position in this respect, as it does not involve such an amount of sensitive data. Moreover, using anonymization and privacy-preserving algorithms has proven to be very useful in recent years. With this promising line of work, we can address the poor sample size that traditional social science has had and share meaningful data from serious games at scale to enhance collaboration and motivate research.
* _Causality_: The social sciences have traditionally prioritized interpretable explanations of human behavior, mainly invoking causality through randomized controlled trials. However, as powerful as these techniques are, they are also very costly in terms of resources and money. On the other hand, computer scientists have traditionally been more concerned with developing accurate predictive models, whether or not they correspond to causal/interpretable mechanisms. Nevertheless, in recent years we have been experiencing a resurgence of computational causality techniques [123][124], even from observational data (i.e., quasi-experiments) [125], allowing us to explain with greater robustness the workings of the systems under study. Moreover, it makes explicit the assumptions of the computational model and of the scientist applying it, helping us to make research more open to discussion and to rethink plausible alternatives to existing explanations. If our ultimate goal is to better understand individual and collective human behavior, it is critical to integrate predictive and explanatory approaches to scientific research [126].
## V Conclusions
Gaming, both for entertainment and utility purposes, has been indispensable throughout the development of humankind. The flourishing of AI in recent times, coupled with the vast amounts of meaningful data that can be collected and transmitted through games, creates a unique window of opportunity to use serious games as tools for social research.
In this paper, we have reviewed serious games' main applications and their synergies with AI. We can already find numerous successful examples of serious games in education, science, business, and social interests. The great potential of games to transform society should not be underestimated and deserves more and deeper inquiry. In addition, we have identified some challenges and promising new lines of work for using serious games as research tools. By doing so, we aim to motivate researchers to pursue these lines of work and help them identify potential applications of serious games for beneficial social objectives. We also want to encourage interdisciplinary research, which is essential in this field of science and which, we firmly believe, is how the future of science should ultimately be conducted.
We are at a critical juncture as a society, where we are beginning to realize that we need to change the motivations and goals by which we make progress. AI is a game changer that can bring immense benefits or harm to society. It is time to start breaking new ground in using these technologies for the common good. What better way to do it than by playing?
|
2303.17355 | Acoustic Soft Tactile Skin (AST Skin) | This paper presents a novel soft tactile skin (STS) technology operating with
sound waves. In this innovative approach, the sound waves generated by a
speaker travel in channels embedded in a soft membrane and get modulated due to
a deformation of the channel when pressed by an external force and received by
a microphone at the end of the channel. The sensor leverages regression and
classification methods for estimating the normal force and its contact
location. Our sensor can be affixed to any robot part, e.g., end effectors or
arm. We tested several regression and classifier methods to learn the relation
between sound wave modulation, the applied force, and its location,
respectively and picked the best-performing models for force and location
predictions. Our novel tactile sensor yields 93% of the force estimation within
1.5 N tolerances for a range of 0-30+1 N and estimates contact locations with
over 96% accuracy. We also demonstrated the performance of STS technology for a
real-time gripping force control application. | Vishnu Rajendran S, Willow Mandil, Simon Parsons, Amir Ghalamzan E | 2023-03-30T13:11:31Z | http://arxiv.org/abs/2303.17355v3 | # Acoustic Soft Tactile Skin (AST Skin)
###### Abstract
Acoustic Soft Tactile (AST) skin is a novel soft, flexible, low-cost sensor that can measure static normal forces and their contact location. This letter presents the design, fabrication, and experimental evaluation of AST skin. The proposed AST skin has some Acoustic Channel(s) (ACs) arranged in parallel below the sensing surface. A reference acoustic wave from a speaker unit propagates through these ACs. The deformation of the ACs under the contact force modulates the acoustic waves, and the change in modulation recorded by a microphone is used to measure the force magnitude and the location at which it is applied. We used a static force calibration method to validate the performance of the AST skin. Our two best AST configurations are capable of (i) making more than 93% of their force measurements within \(\pm\) 1.5 N tolerances for a range of 0-30\({}^{+1}\) N and (ii) predicting contact locations with more than 96% accuracy. Furthermore, we conducted a robotic pushing experiment with the AST skin and an off-the-shelf Xela uSkin sensor, which showed that the AST skin outperformed the Xela sensor in measuring the interaction forces. With further developments, the proposed AST skin has the potential to be used for various robotic tasks such as object grasping and manipulation.
tactile sensing, soft-skin, acoustics, manipulation.
## I Introduction
Accurately measuring physical interactions is crucial for many physical robotic tasks, including human-robot interaction [1], object grasping, and manipulation [2]. Tactile sensing technologies have been developed and improved to enable robot agents to understand their environment better, leading to safer, more precise, and more efficient actions in a broader range of physical interaction tasks [3, 4, 5].
In the context of manipulation tasks, soft tactile sensors are gaining importance as retrofits to end effectors, reflecting the growing recognition of the need for soft object handling. These sensors feature a soft, flexible sensing surface whose deformation provides tactile information such as force, contact location, and contact surface maps. Soft tactile sensors utilise electronic transduction methods (e.g., resistive [6], capacitive [7], piezoelectric [8], magnetic [9, 10], impedance [11]), non-electronic transduction methods (e.g., camera-based [12, 13, 14, 15, 16, 17, 18], fluid-based [19], and acoustics [20, 21, 22, 23]), and their combinations [24] for converting the deformation into valuable tactile information.
Despite the progress in soft tactile sensing technology, several limitations persist: (i) many off-the-shelf soft tactile sensors come with a fixed topology and form factor, making integration/interfacing with existing hardware (e.g., end-effectors) challenging, (ii) embedded electronics within the tactile sensor are vulnerable to cross-talk noise [25], (iii) building compact electronic sensory pick-ups behind the sensing surface requires sophisticated manufacturing techniques, (iv) camera-based transduction methods necessitate large amounts of space, and their cumbersome structure makes integration into robotic end-effectors more challenging [14, 26], and (v) fluid-based transduction methods provide a delayed response to forces [25].
Among various acoustic-based soft tactile sensors, Zoller et al. [27] developed a tactile sensing mechanism for a soft pneumatic finger with minimal hardware and complexity. The finger houses an internal speaker and microphone that continuously monitor changes in sound modulation due to finger deformation caused by external physical interactions, enabling the calculation of tactile information. This tactile sensing method allows the finger to measure contact forces, contact location, and the nature of the material that comes in contact with it, with a certain precision [28]. This sensing modality can be exploited to overcome some critical shortcomings of the soft tactile sensors presented above.
In this letter, we introduce Acoustic Soft Tactile skin (**AST** skin) technology (Fig. 1) inspired by a pneumatic finger with a sense of touch [29]. The AST skin is a silicone-based tactile skin with hollow passages that act as Acoustic Channels (ACs) and deform in response to external physical interactions.
Fig. 1: AST Skin Overview: The force applied to the AST skin surface deforms the ACs beneath the sensing surface. These channels carry reference acoustic waves that travel from the speaker to the microphone. The acoustic wave amplitude is modulated in proportion to the deformation. We use the FFT to transform the modulated waves to the frequency domain and ML methods to find the correlation between the modulated signal and the force and contact location that caused the deformation.
AC deformation modulates the continuous acoustic waves passing through them, which are processed using Fast Fourier Transform and Machine Learning (ML) techniques to estimate the contact force and its location. Unlike [27, 28], the AST skin can act as a standalone tactile sensing skin that can be attached to flat and non-flat surfaces. Moreover, it can be easily fabricated into any shape and size as long as it is possible to embed acoustic channels below the sensing surface. The AST skin design allows the speaker and microphone to be far away from the sensing surface, which makes it easy to attach the skin to existing hardware, e.g., end effectors with space constraints. The contributions of this research include presenting the novel AST skin technology, comparing various ML methods for force and contact position estimation, and testing different AC topologies and speaker-microphone configurations to optimise tactile sensing behaviour. We demonstrate the effectiveness and superiority of the AST skin through a comparison study with the off-the-shelf Xela uSkin tactile sensor.
## II Related works
Acoustic methods have been used to measure tactile information, such as contact deformations, forces, and surface shape recognition. In this section, we explore research surrounding these tasks.
**Measuring contact deformations:** H. Shinoda et al. [30] created a silicone-based hemispherical fingertip that consists of an embedded ultrasound transmitter and receiver array. The fingertip measures deformations up to 10\(\upmu\)m and sensing surface inclination changes up to 0.001 radians. Building on this design, H. Shinoda et al. [21] introduced a hollow spherical cavity inside a flexible membrane. Equipped with an ultrasound transmitter and receiver, it detects the acoustic resonant frequency of the air inside to indicate the principal stress around the cavity. K. Teramoto et al. [23] proposed a flexible sensing membrane with acoustic transmitters and receivers on the substrate beneath, which can measure the curvature of objects encountering the membrane. Y. Tanaka et al. [22] developed an acoustics-based tactile sensor probe for real-time lump detection in laparoscopic surgery. The sensor probe is a silicone-based hollow tube. A speaker emits sound waves into the tube, and a microphone records the reflected waves from the end of the probe to detect wave deformation caused by contact with a lump.
**Measuring force and other contact surface features:** Chuang et al. [20] developed an ultrasonic tactile sensor that can measure normal static force in real-time (ranging from 1 to 6 N) and recognise shapes. The sensor uses ultrasonic sound pulses and measures variations in time of flight when the elastomeric sensing surface undergoes deformation. The sensor includes a Thin Film Transistor layer between piezoelectric PVDF transmitter and receiver layers with a soft polymer sensing surface. In addition to measuring standard tactile information such as deformations, contact forces, and contact surface shapes, V. Wall et al. [28] demonstrated that active acoustic methods could be used to characterise force contact location and the material of contact objects with a soft pneumatic finger containing a speaker and microphone enclosed in the finger cavity. Also, the same pneumatic finger, when housing a single microphone alone (a passive acoustic method), can measure contact forces, their location, and contact materials [29]. However, this measuring capability is limited to situations where the contact itself produces some sound that propagates to the microphone.
These reports showcase the potential of using acoustic techniques to extract tactile information from soft material deformation. To simplify skin design, it is advisable not to integrate complex circuitry into the skin, as demonstrated by the tactile sensing utilised in pneumatic actuators [29, 28]. It is also beneficial to relocate the sensory hardware away from the sensing surface to ensure sensor compactness and avoid requiring sophisticated manufacturing techniques. This approach enables shaping the skin into various form factors, as dictated by the application. The sensor we developed in this letter maintains these strengths and can be easily calibrated and customized for various use cases.
## III Acoustic Soft Tactile Skin
Figure 1 provides an overview of AST Skin technology. The deformable membrane of the skin is served by a speaker unit and a microphone arranged on opposite sides. The speaker unit generates continuous acoustic waves that travel through the ACs, and the microphone receives the acoustic waves. As the channels deform due to external physical interactions, the amplitude of the acoustic waves changes (see Fig 6), and we leverage ML models to capture the relationship between the amplitude changes and the tactile information.
### _Design_
To demonstrate this technology, we fabricated a flat, rectangular-shaped silicone skin measuring 35 mm x 60 mm. We investigate various configurations with single and dual ACs with simple geometrical shapes, such as cylindrical and conical, that run through the length of the skin. The diameter of the cylindrical AC is 5 mm, while the conical AC has diameters of 5 mm and 3 mm. The ACs connect the speaker-microphone arrangement of the skin. To ensure portability and ease of testing, we mount the skin inside a 3D-printed casing. However, we plan for skin designs without a hard case in our future work. The details of our prototype design and the different skin configurations tested are presented in the following sections.
**Acoustic Channel Design:** The impact of contact deformation on acoustic waves varies with the shape of the ACs, which we exploit to estimate force and contact location. We investigated the effect of different channel configurations on feature extraction for force and contact location (see Fig. 2 for AC designs).
We studied single-channel and dual-channel configurations to verify our hypothesis that ACs can be used to predict tactile information. The study on single-channel skin configuration (AST 1) verifies the usability of this tactile skin, which calls for a narrow sensing region. For a broader sensing surface, the skin requires multiple channels, which we explore with studies on dual ACs (AST 2a-b, AST 3a-b, AST 4a-d) as an initial
step. In future works, we will explore using multiple channels with different geometries spanning the entire skin area.
For single-channel skin (AST 1), we considered a cylindrical-shaped channel. For dual channel skin configurations, we have used combinations of (1) two cylindrical-shaped channels (AST 2a, AST 2b), (2) two conical-shaped channels (AST 3a, AST 3b), and (3) a conical-shaped channel with cylindrical-shaped one (AST 4a-4d).
By providing these channel configurations, we tested two primary design parameters: (1) can an AC with a non-varying cross-section (cylinder) distinguish forces acting at different points along its length? (2) for dual-channel skins, do ACs with different geometrical shapes (AST 4a-4d) capture the tactile information better than skins with identical AC geometries (AST 2a-2b, AST 3a-3b)?
We did not experiment with a single conical channel or a combination of two conical channels that are not inverted to each other. This is due to the assumption that the smaller diameter end of the channel will shut off earlier than the larger diameter end, resulting in a lower force measurement range. To mitigate this, we introduced conical-shaped channels which are inverted to each other (AST 3a, AST 3b).
**Speaker Configurations:** Speaker configurations are the second primary design feature we investigated in this work. We used a single speaker for AST 1, AST 2b, AST 3b, AST 4c, and AST 4d. For AST 2a, AST 3a, AST 4a-4b, we provided an individual speaker for each channel.
These speaker arrangements enable us to study the differences in skin performance (1) when all channels are provided with a single speaker versus each channel with an individual speaker and (2) when the speakers are arranged on the smaller and larger diameter ends of the conical channel (AST4a and 4b, AST4c and 4d). In this study, we used computer headphone speakers and a microphone, which will be replaced with a miniature type in further development studies.
### _Prototyping Process_
The prototyping process employed in this work involves using cost-effective materials and a streamlined production process to enable the agile development of skin prototypes. The skin casing, as well as the speaker-microphone housings and channel inserts, are 3D printed using PLA material on a Raisen Pro 3D printer. A Polycraft Silskin silicone rubber material, with a shore hardness value of A13 and a 1:1 catalyst ratio, is then poured into the 3D printed casing after positioning the channel inserts. After curing, the inserts are removed, leaving the desired channel cavities in the cured silicone rubber. Subsequently, the speaker(s) and microphone are mounted to their housing and then fastened to the casing following the configurations presented in Fig.2. The prototyping process is illustrated in Fig.3.
For the sake of simplicity, we will refer to the AST skin with the casing as AST skin in the subsequent sections of this letter.
### _Extracting Tactile Feature from Acoustic Signal_
Here, we outline the methodology for generating two tactile features: the normal force (in Newtons) and its contact location. We created a dataset using a robot arm and a load cell by applying force at a series of locations on the soft skin. These data are then processed, and ML methods are applied to predict the tactile features.
**Sensor Calibration:** The calibration setup consists of a 6 DOF robot arm1 with a calibrated high-precision load cell mounted on the robot flange. This in-line load cell measures up to 1 kN of force, has an inbuilt driver board for USB communication with the PC, and is mounted with a wedge-shaped 3D-printed peg as shown in Fig. 4a.
Footnote 1: The UFactory xArm by ufactory.cc
Here we used a robotic arm to apply a known force through the load cell and peg on specific locations on the AST skin. We initially considered three calibration points on the skin surface for all configurations, namely A(17,10), B(17,30), and C(17,50), as illustrated in Fig. 4b. These points are crucial, as point A is closer to the mic, point C is closer to the speaker, and point B is between the mic and speaker(s).
Before the calibration process, we placed the AST skin in a fixed position over the workbench, as shown in Fig. 4a, and played a continuous reference acoustic signal through the speaker(s).
Fig. 3: Prototyping process: (a) 3D printing the sensor casing, inserts, and speaker-mic housings (b) Arranging the inserts into sensor casing (c) Pouring the silicon rubber into the casing (d) Curing (e) Removing inserts from the sensor casing (f) Mounting the speakers and mic
Fig. 2: AST skin configurations
The details of this reference signal are provided in the forthcoming subsection. The robot arm is then driven to each calibration point one by one, and at each point the arm pushes the peg down in increments of 0.2 mm. As the central axis of the peg passes vertically through the designated points, the AC(s) are compressed uniformly. At each 0.2 mm increment, the corresponding load cell reading is recorded, and simultaneously the mic records 50 samples of the sound signal it receives. This process continues until the load cell reading reaches 30\({}^{+1}\) N. In our future studies, we will calibrate these skins at multiple calibration points spanning the whole length and width of the skin design, with different contact shapes.
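For concreteness, the acquisition procedure just described can be summarised as a short sketch. The `robot`, `load_cell`, and `mic` objects below are hypothetical placeholder drivers (the paper does not expose APIs for the xArm, load cell, or microphone), so this is only an illustration of the calibration loop, not the authors' code.

```python
def calibrate_point(robot, load_cell, mic, point_xy,
                    step_mm=0.2, max_force_n=30.0, n_audio_samples=50):
    """Collect (force, audio) pairs at one calibration point.

    The driver objects are placeholders: the paper uses a UFactory xArm,
    an in-line load cell, and a microphone, whose APIs are not shown here.
    """
    records = []
    robot.move_above(point_xy)              # align the wedge-shaped peg
    while True:
        robot.push_down(step_mm)            # advance in 0.2 mm increments
        force = load_cell.read_newtons()
        # 50 recordings of the modulated acoustic signal per increment
        audio = [mic.record_frame() for _ in range(n_audio_samples)]
        records.append({"point": point_xy, "force": force, "audio": audio})
        if force >= max_force_n:            # stop around the 30 N full scale
            break
    robot.retract()
    return records
```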
**Reference Acoustic Signal:** A test reference acoustic signal is generated using Audacity. It comprises four sine waves with frequencies of 300 Hz, 500 Hz, 700 Hz, and 900 Hz and individual amplitudes of 0.6 (on a 0 to 1 scale). During sensor operation, this reference signal is played through the speaker(s).
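The paper generates this tone in Audacity; an equivalent numpy sketch is shown below for reproducibility. The sample rate and the final normalisation step are our assumptions, not values given in the paper.

```python
import numpy as np

def reference_signal(duration_s=1.0, sample_rate=44_100):
    """Sum of four sine waves (300/500/700/900 Hz), each with amplitude 0.6."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    freqs_hz = [300, 500, 700, 900]
    signal = sum(0.6 * np.sin(2 * np.pi * f * t) for f in freqs_hz)
    return signal / np.max(np.abs(signal))  # normalise to [-1, 1] to avoid clipping
```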
**Data Processing:** The data processing pipeline used in the calibration process is shown in Fig. 5. For each AST skin configuration, the resulting dataset after data processing contains the load cell reading with the corresponding loading location (A, B, or C) and the amplitudes of the selected frequencies of the sound signal. A Fast Fourier Transform (FFT) determines the amplitudes of the acoustic signals received by the microphone (the FFT data). For each skin model, a total of 5100 data points are generated, with 1700 data points for each of the three locations, A, B, and C.
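As a sketch of this feature-extraction step, the amplitude at each reference frequency can be read off the FFT of a microphone frame as below; the sample rate is again an assumption, and the exact binning used by the authors is not specified.

```python
import numpy as np

REF_FREQS_HZ = [300, 500, 700, 900]

def fft_features(audio_frame, sample_rate=44_100):
    """Return the spectral amplitude at each reference tone for one recording."""
    spectrum = np.abs(np.fft.rfft(audio_frame))
    freqs = np.fft.rfftfreq(len(audio_frame), d=1.0 / sample_rate)
    # take the FFT bin closest to each reference tone
    return np.array([spectrum[np.argmin(np.abs(freqs - f))] for f in REF_FREQS_HZ])
```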
## IV Results and Discussions
We conducted a systematic study to find the best configurations of ACs and speaker-microphone placement, as well as the optimal ML models for tactile feature extraction. This section therefore answers the following key questions: (1) What is the impact of AC and speaker configurations on force and contact location prediction? (2) Which ML model is selected to develop the prediction model for each AST skin? (3) What is the performance of force and contact location prediction? (4) Can the AST skin measure static push force during an object-pushing test case, and how does it perform against a state-of-the-art Xela uSkin tactile sensor?
**Performance Metrics:** The following metrics are used to present the findings. (1) We use the validation error/accuracy obtained for the dataset corresponding to each AST skin when selecting the regression and classifier models. (2) The force prediction performance of each AST skin is presented as the percentage of predictions that fall within \(\pm\)0.50 N, \(\pm\)1 N, and \(\pm\)1.50 N tolerance of the actual force values. (3) The contact location prediction performance is presented as the total number of true predictions per test (each skin location is tested for 170 trials), and these are averaged to define the overall accuracy. (4) For the object-pushing test case, we compare the force-measuring performance of the AST skin and the Xela sensor in terms of their absolute error in the measured force.
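Metric (2) amounts to counting predictions inside a tolerance band; a minimal helper (not from the paper) makes the definition explicit:

```python
import numpy as np

def pct_within(y_true, y_pred, tol_n):
    """Percentage of force predictions within +/- tol_n Newtons of the load cell value."""
    err = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    return 100.0 * np.mean(err <= tol_n)

# e.g. pct_within(f_true, f_pred, 0.5), pct_within(f_true, f_pred, 1.0), pct_within(f_true, f_pred, 1.5)
```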
### _Selection of Machine Learning Model_
We compared various regression and classifier models based on their validation errors/accuracy for each AST skin dataset. The models are trained using a 90:10 dataset partition and 10-fold cross-validation. The regression model with the minimum validation error and the classifier with the maximum accuracy are selected for predicting force and location, respectively. The comparison results are presented in Tables I and II, and the selected models for each AST skin are highlighted in the tables. The details of force and location predictions using these models are presented below.
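The model names in Tables I and II (e.g., Fine Tree, Matern 5/2, bilayered neural network) suggest the MATLAB Regression/Classification Learner apps; the scikit-learn sketch below is only an analogous illustration of the selection procedure, with a reduced candidate set, and is not the authors' code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

def select_force_model(X, y):
    """10-fold CV over candidate regressors; the one with the lowest RMSE wins."""
    candidates = {
        "gaussian_process": GaussianProcessRegressor(),
        "svr_rbf": SVR(kernel="rbf"),
        "decision_tree": DecisionTreeRegressor(max_depth=8),
    }
    rmse = {}
    for name, model in candidates.items():
        mse = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_squared_error")
        rmse[name] = float(np.sqrt(mse).mean())
    best = min(rmse, key=rmse.get)
    return best, rmse
```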
### _Force Prediction_
The force prediction performance of the AST skins is presented in Fig. 8, showing the percentage of force predictions falling within \(\pm\)0.5 N, \(\pm\)1 N, and \(\pm\)1.5 N tolerances. The performance of each skin configuration is analysed with respect to the effect of speaker configuration and AC geometries.
**Effect of Speaker Configuration:** The results presented in Fig. 8 indicate that the dual-channel skin configurations with a single speaker (AST2b, AST3b, AST4c, AST4d) mostly outperformed or showed comparable performance to their individual-speaker counterparts (AST2a, AST3a, AST4a, AST4b). This suggests that a single speaker can serve multiple channels without compromising performance. This finding implies that the skin technology is potentially scalable, and a skin with multiple channels may not require an equal number of speakers. However, further investigation is needed to confirm this claim. We also tested the effect of positioning speakers on different ends of the channels, but the results did not show any significant impact on the sensor performance.
**Effect of Acoustic Channel Geometry:** The single-channel AST1 configuration demonstrated a force prediction accuracy of 82.74% within the \(\pm\)0.5 N tolerance range and 93.5% within the \(\pm\)1 N range, indicating that a single channel with a uniform geometrical shape can accurately infer contact forces acting at different points. Other skin configurations were also tested to investigate the feasibility of using an array of ACs to serve a skin that may require a broader sensing surface area. The results in Fig. 8 show that considerably better performance can be achieved when the channel geometries are non-identical and a single speaker unit serves all channels. This is evident when comparing the performance of skin configurations AST3b, AST4c, and AST4d with AST2b.
### _Contact Location Prediction_
Table III presents the accuracy of each AST skin configuration for predicting the contact locations. All skin configurations achieved a high prediction accuracy of more than 89%. The results indicate that the location prediction performance is improved when a single speaker serves the AC(s). Apart from this, no other significant factors were found that greatly influenced the location prediction capability of the skin.
After analysing the results of force and location predictions and their underlying factors, several conclusions can be drawn: (1) A smaller-width skin can utilise a single acoustic channel to measure contact force and its location accurately. Furthermore, the channel can have a simple geometry, such as a cylinder. Even though it has a uniform shape along its length, it can still distinguish forces and their locations applied at different points, (2) For skin with a broader sensing surface area, an array of ACs can be used, and individual speakers for each channel may not be necessary (3) The use of different geometrical shapes for each AC may lead to the improved performance of the skin.
### _Pushing Experiment with AST and Xela uSkin Sensor_
We conducted a static pushing experiment to evaluate the performance of our preliminary design of the acoustic tactile (AST) sensor. The test involves pushing a fixed box on the worktable (Fig. 9) with the AST skin mounted to an extension attached to the flange of the uFactory arm. We also performed the same pushing experiment using the Xela uSkin sensor2 for performance comparison. The Xela sensor has a 4 x 4 array of sensing units, called taxels, which act as force-pickup units.
Footnote 2: The uSkin sensor by xelarobotics.com
For this experiment, we selected AST 1 from the various AST skins presented, as it provided better accuracy in predicting forces during calibration. We recalibrated the skin before the experiment to avoid variation in measurement quality due to any unseen factors. The box used in this experiment was fitted with a load cell to measure the interaction force and compare it with the force values measured by the sensors. The load cell is mounted with the same wedge-shaped peg used during the AST skin calibration. Three static normal forces (6 N, 12 N, and 18 N) are applied to the box through the load cell by driving the robot arm. This force range
is selected because the measuring range of the Xela uSkin is 0 to 18 N. We conducted 30 pushes on the box with the above-mentioned normal static loads, with contact made at the middle of the sensory surface of both sensors (location B for the AST skin). The absolute error in the force measured by both sensors is presented in Fig. 10.
The results showed that the AST skin could predict the normal forces of 6 N, 12 N, and 18 N with a mean error of -0.01 N, 0.01 N, and -0.44 N, respectively, while the Xela sensor predicted them with a mean error of 2.03 N, 2.51 N, and 0.89 N, respectively.
In this pushing experiment, the Xela sensor showed a relatively higher error, possibly because the loading contact did not act exactly on the taxels. In contrast, the AST skin showed better performance in this loading situation because it does not rely on individual force-pickup units but rather on channel deformation to predict force.
Our study demonstrates the potential of AST skin technology to be used in real-time static normal force-measuring applications, as it showed comparable performance to an off-the-shelf soft tactile sensor.
Overall, the AST skin has the potential to become a valuable tool for tactile measurements. Our future work on the AST skin will include calibrating the sensor for dynamic normal and shear force measurement with continuous 2D sensing points on the skin surface. Moreover, AST skin technology will be tested for real-time applications, such as robotic manipulation, human-robot interaction, and medical diagnosis.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Regression models**} & \multicolumn{8}{c|}{**AST skin configuration**} \\ \cline{2-10} & **AST 1** & **AST 2a** & **AST 2b** & **AST 3a** & **AST 3b** & **AST 4a** & **AST 4b** & **AST 4c** & **AST 4d** \\ \hline
**Linear Regression** & 2.22 & 5.98 & 7.23 & 4.87 & 2.85 & 7.28 & 7.17 & 5.91 & 3.66 \\ Interactions Linear & 1.78 & 5.01 & 6.45 & 3.45 & 2.04 & 6.61 & 6.102 & 5.54 & 3.13 \\ Robust & 2.54 & 6.33 & 7.74 & 6.13 & 3.81 & 9.17 & 8.29 & 6.72 & 3.66 \\ Stepwise Linear & 1.78 & 5.01 & 6.46 & 3.46 & 2.04 & 6.61 & 6.11 & 5.54 & 3.14 \\ \hline
**Regression Trees** & & & & & & & & & \\ Fine Tree & 1.08 & 3.19 & 4.34 & 2.82 & 1.31 & 4.07 & 4.69 & 3.27 & 1.59 \\ Medium Tree & 1.18 & 3.02 & 4.26 & 2.62 & 1.24 & 3.86 & 4.5 & 3.1 & 1.625 \\ Coarse Tree & 1.42 & 3.5 & 4.52 & 2.58 & 1.31 & 4.2 & 4.73 & 3.49 & 1.73 \\ \hline
**Support Vector Machines** & & & & & & & & & \\ Linear & 2.4 & 6.1 & 7.52 & 5.39 & 2.94 & 8.09 & 7.41 & 6.18 & 3.74 \\ Quadratic & 1.93 & 5.98 & 6.96 & 3.62 & 2.09 & 7.98 & 5.82 & 5.63 & 2.95 \\ Cubic & 2.29 & 28.81 & 12.43 & 21.54 & 10.02 & 17.45 & 28.25 & 15.23 & 24.8 \\ Fine Gaussian & 1.04 & 2.42 & 4.34 & 2.319 & 1.3 & 3.31 & 3.7 & 2.95 & 1.33 \\ Medium Gaussian & 1.24 & 3.29 & 4.26 & 2.73 & 1.52 & 3.96 & 4.5 & 3.98 & 1.77 \\ Coarse Gaussian & 1.67 & 4.68 & 4.52 & 3.22 & 2 & 5.92 & 5.68 & 5.45 & 2.78 \\ \hline
**Gaussian Process** & & & & & & & & & \\ Rational Quadratic & 0.72 & **2.21** & 3.4 & 2.16 & 1.08 & 3.34 & 3.59 & 2.56 & 1.22 \\ Squared Exponential & 0.87 & 2.41 & 3.31 & 2.25 & 1.17 & 3.27 & 3.65 & 2.72 & 1.26 \\ Matern 5/2 & 0.8 & 2.27 & **3.24** & 2.2 & 1.14 & 3.29 & 3.6 & 2.61 & 1.2 \\ Exponential & **0.72** & 2.24 & 3.35 & **2.15** & **1.06** & **3.25** & **3.6** & **2.53** & **1.18** \\ \hline
**Ensemble of Trees** & & & & & & & & & \\ Boosted Trees & 1.67 & 3.6 & 4.6 & 2.57 & 1.56 & 3.88 & 4.59 & 3.65 & 1.8 \\ Bagged Trees & 0.94 & 2.63 & 3.67 & 2.29 & 1.08 & 3.41 & 3.86 & 2.62 & 1.3 \\ \hline
**Neural Networks** & & & & & & & & & & \\ Narrow Neural Network & 1.2 & 3.48 & 5.21 & 2.83 & 1.47 & 4.63 & 4.8 & 3.86 & 1.62 \\ Medium Neural Network & 1.15 & 2.88 & 4.94 & 2.53 & 1.49 & 3.74 & 4.27 & 3.305 & 1.51 \\ Wide Neural Network & 1.05 & 2.89 & 4.32 & 2.35 & 1.37 & 3.64 & 3.85 & 2.89 & 1.34 \\ Bilayered Neural Network & 1.07 & 2.68 & 5.27 & 2.36 & 1.29 & 3.63 & 4.33 & 5.13 & 1.41 \\ Trilayered Neural Network & 0.9 & 2.52 & 4.03 & 2.28 & 1.29 & 3.42 & 3.95 & 2.94 & 1.31 \\ \hline \end{tabular}
\end{table} TABLE I: Comparison of various regression models based on validation error for predicting contact force
Fig. 8: Percentage of force predicted with \(\pm\) 0.50 N, \(\pm\) 1 N, and \(\pm\) 1.50 N tolerance
## V Conclusions
We have presented the design, fabrication, and testing of a novel Acoustic Soft Tactile skin (AST skin). The AST skin uses Acoustic Channels beneath the sensing surface to measure the contact normal force and its location based on the principle of acoustic wave modulation. To validate the concept, we prototyped and studied nine AST skins with different configurations of Acoustic Channels and acoustic hardware (speaker-mic). During our study, two AST skin designs made more than 93% of static normal force predictions within a \(\pm\)1.5 N tolerance for a full-scale force range of 0-30\({}^{+1}\) N. They also predicted force contact locations with more than 96% accuracy. We have also demonstrated the superiority and effectiveness of the AST skin in measuring static normal forces in a pushing experiment against an off-the-shelf soft tactile sensor (Xela uSkin).
|
2304.03387 | From Social Engineering to Quantum Threats: Safeguarding User Wallets
with FailSafe | While cryptocurrencies have been rapidly gaining adoption, secure wallet
interactions are still elusive for many users, which frequently leads to loss
of funds. Here we propose an approach to securing interactions with
cryptocurrency wallets for end-users. The approach called FailSafe consists of
several defence-in-depth measures that can be applied near-term as well as a
tool called qMig for aiding eventual quantum migration. | Gennady Medvinsky, Ben Livshits | 2023-04-06T21:38:22Z | http://arxiv.org/abs/2304.03387v2 | # From Social Engineering to Quantum Threats: Safeguarding User Wallets with FailSafe
###### Abstract
While cryptocurrencies have been rapidly gaining adoption, secure wallet interactions are still elusive for many users, which frequently leads to loss of funds. Here we propose an approach to securing interactions with cryptocurrency wallets for end-users. The approach called FailSafe consists of several defence-in-depth measures that can be applied near-term as well as a tool called qMig for aiding eventual quantum migration.
keywords: blockchain, wallets, system security +
Footnote †: journal: Blockchain: Research and Applications
## 1 Introduction
In Web2, most users have a certain degree of familiarity with threats to their online accounts, be those of financial nature or pertaining to other services like social media. Frequently, the attacker focuses on user credentials and account takeover. Password compromises can be achieved through a dictionary attack, taking advantage of a poorly chosen secret, or through many forms of phishing and social engineering attacks, enticing the user to share their credentials. With the password compromised, the attacker can then extract whatever value is associated with the account, or use that account to perform some form of escalation to compromise further accounts. According to recent surveys [24], adhering to best online security practices remains a challenge for both end-users and IT professionals alike.
Usability issues around security have some parallels in Web3. For instance, phishing attacks range from tricking the user into disclosing their private key to obtaining a signature that grants permission for a potentially unlimited fund transfer. Flawed but popular open-source software can generate vanity blockchain addresses (e.g., Profanity [13]), but in the process makes it trivial for the attacker to compute the corresponding private key.
Most products in this space are built with relatively standalone threat models in mind. For example, hardware wallet solutions tend to mainly focus on having an air gap with less trusted software components. Threat intelligence products like Chainalysis provide an insight into the counter-party risk. Users are left to devise their own threat mitigation strategies through some combination of these products. This approach not only puts an undue burden on the user, but may leave users exposed to web3 threats in unforeseen ways.
### Defence-in-Depth
FailSafe is an anti-theft Web3 wallet companion system that is focused on protecting the end to end web3 transaction journey. FailSafe is built using the defence-in-depth principle: it offers a multilayered set of security mechanisms, with built-in redundancy, designed to minimise the loss of user assets even under the worst-case circumstances (disclosure of the user's private key, or a compromised insider within a trusted system).
FailSafe takes every opportunity to protect the user's assets across the lifecycle of a transaction: from the initial user engagement phase with the dApp to the point the transaction is committed on chain. At the outset, on enrollment, FailSafe helps the user reduce risk by moving the majority of assets to a user cold wallet address that does not partake in regular web3 transactions; this is not unlike what custody solutions do, but until now, this practice has been unavailable to retail users.
According to a recent study [26] of ERC-20 token usage patterns, 60% of all users grant unlimited transfer approvals to dApps, 22% of which are considered to be at high risk of their approved tokens being stolen. By moving the majority of assets to the user's cold wallet, these assets are no longer exposed to the above risk.
Figure 1: Timeline of a Transaction.
FailSafe automatically maintains the user's desired balance ratio between the hot and cold wallet addresses, preserving the de-risked security posture over time. Once the user engages with a dApp, the FailSafe Blockchain Reconnaissance (FBR) service is used to obtain a risk score for the counter-party's web3 address.
If FailSafe software is in the code path, fraudulent transactions are outright blocked. Otherwise, the next layer of protection is the FailSafe Interceptor Service (FIS), which monitors pending transactions submitted to the blockchain's memory pool. If the transaction counter-party has a high risk score (based on a call to FBR), FIS is capable of submitting another transaction that is executed ahead of the attacker's, moving the funds at risk into the user's cold storage address before the attacker's transaction is executed.
### Forward Security
The FailSafe defence-in-depth approach is forward-looking: it lays the groundwork for safeguarding the user's crypto against newly emerging threats.
Advances in quantum computing hardware have made significant strides, propelled by the nation-state quantum computing race with a number of different R&D centres, reaching significant computing benchmarks and milestones (see: Google's Quantum Supremacy [8] and IBM Quantum System One [7]).
When viewed through the lens of cryptography, this presents a unique problem. While Shor's algorithm, published in 1994 [25], could theoretically break certain algorithms used for digital signatures (i.e., ECDSA), it requires a sufficiently powerful quantum computer to do so. With recent advances, the time window to reach this milestone has been shrinking (see the Global Risk Institute's 2022 report [23]).
The situation is especially dire for the Ethereum ecosystem (this includes EVM-compatible networks, like Polygon, Binance Smart Chain, Avalanche, and many others). The current version of Ethereum lacks cryptographic agility.
Figure 2: Layers of protection in FailSafe.
Externally-owned addresses (user wallets) use ECDSA with no other option built in (see the quantum threats section for a more in-depth discussion). Furthermore, by design, externally owned addresses are commonly re-used, giving an attacker with future quantum hardware a longer time window to derive the private key via the earlier record of transaction signatures.
Once wallet signatures are no longer cryptographically trustworthy, the inability to establish rightful custody over web3 assets will pose a barrier to bridging assets to a quantum-safe network (e.g., QRL [6]) or future versions of quantum-safe, EVM-compatible blockchains.
As part of the FailSafe project, the Quantum Migration Tool (qMig) was developed to future-proof against this outcome. Prior to the quantum inflection point, qMig enables users to construct and record a future intent to transfer tokens, in case the inflection point occurs and an ECDSA signature by itself can no longer be trusted. The security of this intent is rooted in cryptography that is not susceptible to quantum attacks. The integration of FailSafe with qMig records the necessary proofs automatically, requiring no additional effort by the end user, as detailed in Section 3.3.
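The actual qMig construction is presented in Section 3.3 (see Figures 5 and 6); as a rough, purely conceptual illustration of why such an intent can outlive ECDSA, a hash-based commitment is sketched below. This is our simplification, not the qMig protocol: hash functions are not broken by Shor's algorithm, so a commitment recorded today remains binding after the inflection point.

```python
import hashlib
import os
from typing import Tuple

def record_intent(quantum_safe_target: bytes) -> Tuple[bytes, bytes]:
    """Commit today to a future transfer target without revealing it."""
    salt = os.urandom(32)
    commitment = hashlib.sha256(salt + quantum_safe_target).digest()
    return commitment, salt            # the commitment is what gets recorded now

def verify_intent(commitment: bytes, salt: bytes, revealed_target: bytes) -> bool:
    """Later, prove the original intent by revealing (salt, target)."""
    return hashlib.sha256(salt + revealed_target).digest() == commitment
```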
## 2 Web3 Threats to Your Crypto
The private key that corresponds to the user's wallet address controls the transfer of value on the public ledger, be it in the form of tokens or native cryptocurrency. To capture this key, a potential attacker has a range of possible options; the reader is referred to Mirza et al. [22] for more details:
* **Theft of private keys**: with the knowledge of the private key, the attacker can send a transaction for every token and native currency associated with the address, transferring the assets to the attacker's own address. Any staked tokens in third party systems can be withdrawn and transferred to the attacker's address. There are numerous examples of this in the wild: fraudsters often pose as customer support convincing users to install a fake wallet software that captures and shares the user's passphrase with the attacker. Some of the recent bridge hacks have the same culprit as well [28].
* **Obtaining user's authorization**: through social engineering and confusing Web3 wallet user experiences, the attacker convinces the user to sign a transaction that can be crafted to transfer funds out of the wallet [26].
Figure 3: FailSafe architecture.
* **Compromise of third-party smart contracts**: Exploit smart contract vulnerabilities and then drain user assets that temporarily reside under the contract address ownership (there are numerous examples of bridge hacks that fall into this category).
### Designing with Operator Error in Mind
From recent trends in Web3 attacks, it is clear that the human factor plays a central role. Users might be lured into violating one or more best security practices without knowing: in the case of the Profanity bug hack, vanity Web3 addresses were generated in a way that made it possible for attackers to derive the private key. And once the system is configured into a secure state, the security posture is likely to decay over time if it requires regular end-user effort to maintain.
In a recent $8M exploit, users were lured into installing an unofficial update of a popular web3 wallet. It is suspected that the fake wallet update involved users re-entering the seed phrase (giving the attacker full access to the victim's crypto assets). The FailSafe threat model is designed with these seemingly game-over scenarios in mind. In the later part of this section, we describe how the defence-in-depth principle is applied throughout the life-cycle of a transaction, and how the application of FailSafe's multi-layered defences minimises losses from the type of incidents noted above.
### Defence-in-depth and the Lifecycle of a Transaction
FailSafe is built on the defence-in-depth principle: a multilayered set of security mechanisms, with built in redundancy, designed to minimise loss of user assets even in the worst case circumstances (e.g., user is tricked into giving away the wallet's passphrase). A summary is shown in Figure 2.
To better understand how this works, let's take a closer look at the life-cycle of a transaction: from initial user engagement phase with the dApp, to the point it becomes part of a permanent record on a public ledger (as illustrated in Figure 1).
Each phase below presents both an opportunity for the attacker, as well as a chance to employ a countermeasure.
**Defence 1: de-risk Web3 Asset Positions:** Before engaging with the user, the attacker has an opportunity to learn a great deal from the public ledger, fine-tuning targets of interest, based on type and value of owned assets. From the public ledger, the attacker's bot can compile a list of addresses
and corresponding owned tokens on selected EVM blockchains, customising the attack as needed.
On the flip side, during this phase, the user has a chance to de-risk and move the majority of owned assets entirely beyond the attacker's reach. By enrolling in the FailSafe automated cold storage feature, the vast majority of assets are re-balanced to be owned by a user wallet address that does not partake in regular web3 transactions.
Just as importantly, FailSafe is designed to maintain this security posture over time. With little to no imposition on the user, FailSafe automatically maintains the asset balance ratio between the hot and cold wallet, subject to the user's high-level instructions. Access to cold storage is safeguarded via a multi-signature contract; the corresponding private keys are protected under a unique orchestration of Amazon's Nitro Enclaves and Google's Confidential Compute with cloud hardware security modules (HSM), designed to withstand insider threat/compromise (Figure 3 illustrates the overall architecture, which is described in more detail in a later section).
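The rebalancing logic itself is simple; a minimal sketch is given below, where the 10% hot-wallet target is an illustrative placeholder rather than a FailSafe default.

```python
def rebalance_amount(hot_balance: float, cold_balance: float,
                     target_hot_ratio: float = 0.10) -> float:
    """Amount to sweep from the hot wallet into cold storage to restore the
    user-chosen hot:total ratio; a positive value means move hot -> cold."""
    total = hot_balance + cold_balance
    desired_hot = total * target_hot_ratio
    return max(hot_balance - desired_hot, 0.0)
```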
**Defence 2: FailSafe Blockchain Reconnaissance: First contact with the attacker:** As noted earlier, the attacker's goal at this point is either to directly learn the user's private key, or to convince the user to sign a transaction of the attacker's choice. A myriad of tried and tested social engineering attacks are available, and just as in the Web2 world, even if they are rejected by 99% of users, the small fraction that falls for them is enough to make the attack worthwhile.
At this stage, a FailSafe user is protected by several countermeasures. When the user encounters the attacker's dApp, if the user is using a client that is directly integrated with the FailSafe Blockchain Reconnaissance (FBR) service (e.g., a FailSafe Chrome extension or a proxy RPC URL), the attacker's request is likely to be rejected outright. The FBR maintains an up-to-date database of blacklisted addresses; this includes sanctioned addresses and fraudulent/rug-pull contracts. Risk profiles are also constructed based on historical as well as real-time transaction-driven behaviour anomalies/patterns.
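Conceptually, the client-side check reduces to a policy gate applied before anything is signed. The toy sketch below uses a placeholder blocklist and threshold; a real FBR lookup would combine the blacklist with the behavioural risk scoring described above.

```python
BLOCKLIST = {"0x0000000000000000000000000000000000000bad"}  # placeholder entry only

def risk_score(address: str, blocklist=BLOCKLIST) -> float:
    """Toy stand-in for an FBR lookup: 1.0 if blocklisted, 0.0 otherwise."""
    return 1.0 if address and address.lower() in blocklist else 0.0

def should_block(counterparty: str, threshold: float = 0.8) -> bool:
    """Reject the dApp request before it ever reaches the wallet for signing."""
    return risk_score(counterparty) >= threshold
```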
**Defence 3: FailSafe Interceptor Service: Fortune is on the attacker's side thus far:**
The victim proceeded without leveraging FBR, being lured into sharing the seed phrase or signing a transaction of the attacker's choice. At this point, all that remains is to submit the transaction so that its effect will be reflected in the next block of the public ledger. The attacker may choose to submit it to any participating network node, to be queued up with other pending transactions in the public memory pool (the holding area used to prioritise and order proposed transactions for the next block on the ledger).
At this stage, the FailSafe Interceptor Service (FIS) is on standby. While monitoring a low-latency stream of ingress mempool transactions, FIS filters for transactions that FailSafe users partake in. Once detected, the counter-party address is passed to FBR. If a threat is detected, FIS attempts to make the attacker's transaction revert. It should be noted that FIS intervention may also be triggered by transaction policy limits that are part of the user's FailSafe configuration (e.g., the maximum value allowed to be transferred in a given time period, etc.).
To intercept, FIS submits a new transaction into the pool that transfers the assets in play to another address owned by the user (e.g., the cold wallet address). Most importantly, this new transaction carries a slightly higher gas price than that being paid by the attacker, which results in it being placed ahead of the attacker's transaction in the execution order. When it comes time to execute the attacker's transaction, it will revert and not become part of the ledger, as the user will have an insufficient balance at that point. The same principle applies to NFTs (ERC-721).
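A stripped-down version of this front-running loop is sketched below using web3.py; it is illustrative only. `is_risky` and `sign_intercept` are placeholders for the FBR lookup and the enclave-held intercept key, the sketch only sweeps native currency (not ERC-20/ERC-721 balances), and it assumes a node that exposes a pending-transaction filter.

```python
from web3 import Web3

def watch_and_intercept(w3: Web3, protected_addr: str, cold_vault: str,
                        is_risky, sign_intercept):
    """Watch the mempool and front-run risky outgoing transactions (sketch)."""
    pending = w3.eth.filter("pending")
    while True:
        for tx_hash in pending.get_new_entries():
            try:
                tx = w3.eth.get_transaction(tx_hash)
            except Exception:
                continue                      # tx may have been dropped already
            if tx["from"].lower() != protected_addr.lower():
                continue
            if not tx.get("to") or not is_risky(tx["to"]):
                continue
            # outbid the attacker so the sweep executes first
            gas_price = int((tx.get("gasPrice") or w3.eth.gas_price) * 1.2)
            balance = w3.eth.get_balance(protected_addr)
            sweep = {
                "to": cold_vault,
                "value": max(balance - 21_000 * gas_price, 0),
                "gas": 21_000,
                "gasPrice": gas_price,
                "nonce": w3.eth.get_transaction_count(protected_addr),
                "chainId": w3.eth.chain_id,
            }
            raw = sign_intercept(sweep)       # signed by the enclave-held key
            w3.eth.send_raw_transaction(raw)
```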
To improve the odds, the attacker may choose to pay a transaction fee premium and bypass the global memory pool altogether through so-called "private transactions." These are aggregated in bulk via intermediaries and included by miners in the next mined block on the blockchain.
To close down this avenue to fraudsters and apply the above techniques, we are exploring an "exceptions list" mechanism, where end-users will be able to add their own web3 address to this list. Mempool service providers such as BloxRoute would then filter out private transaction requests where the source address is a member of this list.
Additionally, throughout the life-cycle of the transaction, the user is alerted with real-time push notifications and given further advice on possible remediation steps.
**Summary:** Defences 1-3 provide the elements of defence-in-depth that the FailSafe system applies across the lifecycle of a transaction.
## 3 FailSafe Architecture
Figure 3 presents a high-level view of the overall FailSafe system architecture. Starting at the top, as part of the enrollment process, a factory contract is used to deploy a dedicated user instance of the multi-signature FailSafe contract. As illustrated in the figure, each operation supported by the contract may require one or more independently controlled keys for authorisation before it is executed. For instance, in the example illustrated in the figure, the withdrawal operation (from the contract to the enrolled hot wallet key) requires three independent signatures.
Figure 4: Timeline and approach of qMig.
To call the FailSafe contract's intercept and rebalance methods, both the FailSafe Interceptor Service and the Assets Balancer Service use the dedicated enclave for signing, while access to the private keys is subject to the above restrictions. In the Multi-Sig FailSafe contract, these operations are configured to require a single signature, since the assets are being shifted between addresses under the user's control, based on a configuration the user previously consented to. In contrast, an administrative operation (e.g., an updated ratio of hot/cold asset balances) or an asset withdrawal requires multiple independently controlled key signatures. With FailSafe, a user enrolls in the AWS Cognito identity management system, which supports multi-factor authentication (MFA).
The user's Cognito account ID is mapped to the user's FailSafe account. Similar to the intercept and rebalance private keys, the private key representing a given user's authentication factor has a corresponding key protected by Nitro Enclave/KMS (this is supported via the Cognito identity pool/Lambda function authentication flow). To mitigate worst-case insider attacks, the user has another authentication factor enrolled and protected by a secondary, independent cloud (GCP), which offers a similar key isolation and protection environment via Confidential Compute + KMS with cryptographic attestation proofs.
Figure 5: Generating incognito transfer intent.
Next, let's consider how the keys for these operations are managed and protected. As an example, take the intercept key which, as described in the previous section, is required for moving funds from the user's hot wallet address to the user's FailSafe contract in the event that a pending fraudulent transaction is detected in the memory pool. As illustrated in Figure 3, at rest in the cloud, the intercept key is preserved only in encrypted form. It is encrypted under a data encryption key that resides in the AWS KMS/Cloud HSM (hardware security module).
The FailSafe system leverages the AWS Nitro Enclave + KMS security architecture. The request to decrypt the intercept key is fulfilled by KMS if 1) the request comes from the expected IAM identity; 2) the request is made from the target Nitro enclave. AWS Nitro Enclave is a hardened, isolated execution environment; cryptographic attestation is used to prove to KMS the enclave's identity and the code running in the enclave.
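The shape of that envelope-encryption flow is sketched below with boto3; the Nitro Enclave attestation document that gates the kms.decrypt call in FailSafe is omitted here for brevity, and the key ID and ciphertexts are placeholders.

```python
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_intercept_key(encrypted_data_key: bytes, nonce: bytes,
                          wrapped_intercept_key: bytes, kms_key_id: str) -> bytes:
    """Recover the intercept key inside the enclave (envelope-decryption sketch)."""
    kms = boto3.client("kms")
    # KMS releases the data key only to the expected IAM identity / attested enclave
    data_key = kms.decrypt(CiphertextBlob=encrypted_data_key,
                           KeyId=kms_key_id)["Plaintext"]
    # the intercept key itself is stored at rest only as AES-GCM ciphertext
    return AESGCM(data_key).decrypt(nonce, wrapped_intercept_key, None)
```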
### Forward Security in FailSafe
As part of the FailSafe project, the Quantum Migration Tool was developed to address future platform-level threats to the user's web3 assets. This section examines the threats stemming from quantum computing to EVM-based blockchains. We then present the design of the quantum migration tool and its role in the overall architecture of the FailSafe system.
Figure 6: Verifying incognito transfer intent.
### Quantum Threats to EVM-based Blockchains
Shor's Algorithm [25] makes it possible for a sufficiently powerful quantum computer to break the ECDSA algorithm. That is, starting from a transaction signed with an ECDSA private key, one can extract the public key and then derive the private key. This is the ultimate game over condition, as the attacker can then transfer any balance associated with the external owned account (EOA) at will.
In contrast, quantum computers pose no such (known) threat to hashing algorithms. Grover's algorithm [14] (aka the quantum search algorithm) reduces the search for collisions in Keccak-256 (Ethereum's hash algorithm) from \(2^{256}\) to \(2^{128}\), which is less efficient than some generic collision search algorithms. (A quick peek ahead: this hashing resilience to quantum attacks will play a key role in our approach.)
For the underlying cryptography, the National Institute of Standards and Technology (NIST) initiated a standardisation effort for quantum-resilient signature schemes and is currently evaluating a number of candidate schemes. All of these come with their own set of trade-offs, particularly with respect to key size, speed, and the re-use of the same key pair by the EVM family of blockchains [27].
In terms of the threat timeline (i.e., how long until quantum hardware is capable of breaking ECDSA), estimates vary between experts. Many believe the threat is still in the distant future (e.g., Vitalik was famously quoted comparing quantum computing advances to going from hydrogen bombs to harnessing nuclear fusion).
For a more systematic view, the Global Risk Institute conducts an annual survey of leading subject-matter experts on the threat timeline. According to its 2022 report, the likelihood estimates have been trending upwards since the initial surveys: nearly 25% of respondents estimated a 50% chance of the threat materialising within a 10-year window, in light of recent advances (e.g., Google's quantum supremacy demonstration and IBM Quantum System One) and the nation-state competition (the "quantum race") with its high levels of funding. Much like climate change, the inevitable question is not one of "if" but of "when".
#### 3.2.1 On ECDSA Key Re-use
Networks where the common pattern is to reuse the same key pair across different transactions (like the EVM family of blockchains) face a greater risk once quantum attacks become feasible: the attacker has a longer time window to derive the private key from the earlier record of transaction signatures. However, using a new ECDSA key pair per transaction may only offer temporary relief; once quantum attacks become sufficiently fast, an attacker could derive the private key and front-run a targeted transaction.
#### 3.2.2 Account Abstraction as a Path to Sunsetting ECDSA on Ethereum?
A future version of Ethereum is expected to support account abstraction, a unified representation of an account (rather than the two types that exist today: a smart contract account, and an externally owned account (EOA) with a corresponding ECDSA private key). The most recent account abstraction proposal under consideration is EIP-4337 [4]. Among its features is a representation of an account as a smart contract wallet with cryptographic agility for submitting requests (referred to as UserOperations) to the wallet. UserOperations can be signed using a quantum-safe signature scheme. Under this proposal, after a network upgrade, the current user base with quantum-vulnerable EOAs "can individually upgrade their wallets to quantum-safe ones," as noted by Vitalik.
In the event of a quantum attack breakthrough, the user-dependent upgrade strategy might leave a large number of unconverted accounts. Addresses with prior transaction history would be at highest risk: on Ethereum, by design, externally owned addresses are commonly re-used, and for addresses with a prior transaction history the ECDSA public key can be readily retrieved.
If the quantum attack breakthrough occurs while any significant portion of EOAs have not yet been upgraded, any subsequent transaction signed with ECDSA (including the upgrade to a quantum-resilient wallet) would be suspect: is it the attacker or the key's rightful owner performing the operation? By comparison, this dilemma is more severe than the Ethereum rollback debate after the 2016 DAO exploit [5] (which resulted in a hard fork, and two chains going forward, Ethereum and Ethereum Classic).
To address this problem, a path rooted in cryptographic trust is needed even when the algorithm authorising the majority of today's transactions is compromised (i.e., in a scenario where users need to migrate their web3 assets to a forked version of the chain, where UserOperations only use quantum-resilient algorithms).
### Quantum Migration Tool (qMig)
#### 3.3.1 Assumptions and Goals
* The quantum inflection point (quantum attack breakthrough) may occur when the large majority of web3 assets and transactions are done on blockchains reliant on ECDSA.
* Web3 tokens (ERC-20, ERC-721, etc.) may be bridged (e.g., via LayerZero) to a quantum-resilient network if and only if these assets have not been hijacked via ECDSA compromise. That is, this transfer must be based on a cryptographic scheme that remains secure even after the above quantum inflection point occurs.
* While users could reduce their own exposure to the attack by not reusing the same address across transactions, it is not a prerequisite for using qMig. qMig should prevent movement of stolen funds, while supporting EOAs with prior transaction history.
#### 3.3.2 The Workings of qMig
**Step 1:** The qMig approach is illustrated with the timeline shown in Figure 4. The blue phase represents the period when quantum attacks on ECDSA are still not feasible. In this period, the qMig contract enables users to construct and record a future intent to transfer tokens, in case the quantum inflection point occurs and an ECDSA signature by itself can no longer be trusted. The security of this intent is rooted in cryptography that is not susceptible to quantum attacks. Figure 5 illustrates how this works. To call qMig.registerTransferIntent(), the client creates a transfer intent source structure that includes the source EOA (from field), the chain ID for this EOA, and its future destination counterpart. So, for example, the source address 0x1739...3c75 may own USDC (an ERC-20 token) on chain ID 1 (Ethereum mainnet). An updated bridge contract (e.g., LayerZero) will be able to use this information to verify that the destination address was authorised by 0x1739...3c75 prior to the inflection point.
As shown in Figure 5, the client must sign the transfer intent source with the ECDSA private key that corresponds to the source EOA address in the structure (this linkage is verified at a later stage, if a transfer is initiated in step 3).
The signed output is fed into the Keccak-256 hash function; the resulting digest is referred to as the incognito transfer intent in Figure 5. This digest is stored in a hash table by the registerTransferIntent() method along with the corresponding block height (e.g., 16316192), preserving an on-chain record of when the intent was registered. Please note (a minimal contract sketch follows these notes):
* The hash of the signature, rather than the signature itself, is persisted so that the public key of the source EOA cannot be extracted. Moreover, the wallet address used to sign the transaction calling registerTransferIntent() should be different from the source EOA, to avoid leaving a record of the source's ECDSA public key.
* The Keccak-256 function is assumed to remain resistant to preimage and collision attacks even against quantum hardware.
* After the inflection point, the recorded block height together with the hash over the signature will serve as proof that the source EOA authorised the transfer to the target destination address, and not to an attacker's address. The ledger record shows that this was captured prior to the quantum breakthrough, i.e., before an attacker could derive the ECDSA private key.
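To make the recording step concrete, the following is a minimal Solidity sketch of a qMig-style registry. The function name registerTransferIntent() comes from the description above; the parameter shape, mapping layout, and event are assumptions made purely for illustration and need not match the deployed contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative sketch of a qMig-style intent registry (not the deployed FailSafe code).
/// The client signs the transfer intent source off-chain, hashes the signature with
/// Keccak-256, and submits only the resulting 32-byte digest, so no ECDSA public key
/// material ever appears on-chain.
contract QMigRegistrySketch {
    /// incognito transfer intent digest => block height at which it was registered
    mapping(bytes32 => uint256) public intentBlockHeight;

    event TransferIntentRegistered(bytes32 indexed incognitoIntent, uint256 blockHeight);

    function registerTransferIntent(bytes32 incognitoIntent) external {
        require(intentBlockHeight[incognitoIntent] == 0, "Intent already registered");
        intentBlockHeight[incognitoIntent] = block.number; // e.g. 16316192
        emit TransferIntentRegistered(incognitoIntent, block.number);
    }
}
```

Because only the digest is submitted, neither the raw signature nor the source EOA's public key is exposed in calldata or storage.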
**Step 2:** ECDSA is compromised (the quantum inflection point in Figure 4). For every token contract prepared for this scenario, its token administrator sets the corresponding block height in the bridge contract, marking the point from which quantum attacks on ECDSA are considered effective. Once this value is set, a new authorisation policy takes effect for all further transfers of assets over the bridge to a quantum-safe network; it is described in step 3. Note that the bridge contract's administrative functionality should itself be protected by a secondary, quantum-resilient signature (e.g., CRYSTALS-Dilithium [9]).
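Step 2 amounts to a one-time configuration write on the bridge. A hedged sketch of what that administrative setter could look like is shown below; the function and variable names are assumptions, and the quantum-resilient authorisation of the admin call (e.g., Dilithium verification) is deliberately omitted, since how it is performed is not specified here.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative bridge-side administration of the quantum inflection point.
/// Real deployments would gate this call behind a secondary, quantum-resilient
/// signature check (e.g., CRYSTALS-Dilithium), which is not shown in this sketch.
contract BridgeAdminSketch {
    address public immutable tokenAdmin;
    uint256 public inflectionPointBlockHeight; // 0 means "not set": pre-quantum policy applies

    constructor(address admin) {
        tokenAdmin = admin;
    }

    function setInflectionPoint(uint256 blockHeight) external {
        require(msg.sender == tokenAdmin, "Not the token administrator");
        require(inflectionPointBlockHeight == 0, "Inflection point already set");
        inflectionPointBlockHeight = blockHeight;
    }
}
```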
**Step 3:** Represented as the red phase in Figure 4. In this phase, users can transfer tokens to a quantum-resilient blockchain, subject to the following restrictions that prevent stolen funds from being transferred:
* The destination address was authorised by the source address in step 1. That is, at some point prior to the inflection point, the user must have called registerTransferIntent(). The bridge contract can verify this by calling the verifyTransferIntent() method exposed by the qMig contract. To do so, the client builds and signs the transfer intent source (shown in Figure 6) and passes the raw signature (transfer intent sig), together with the transfer request, to the bridge contract. The bridge contract then calls verifyTransferIntent(), passing the source and destination info, the raw signature (transfer intent sig), and the block height value (the inflection point set in step 2). verifyTransferIntent() performs verification through the process shown in Figure 6 (a Solidity sketch follows this list). It uses the public key recovered from the transfer intent sig to cryptographically verify the signed data and confirm that the signer's address matches the source address in the transfer intent source structure, thus confirming that it was the source that authorised the destination address. Next comes the lookup to check that the qMig contract has a record of the intent. Just as in step 1, the transfer intent sig is passed to Keccak-256 to compute the incognito transfer intent digest. This value is then looked up in the hash table of all recorded transfer intents. If found, the returned block height must be less than the block height passed in by the bridge contract: require(intentBlockHeight < inflectionPointBlockHeight, "Intent to transfer registered after the quantum inflection point!").
* The next key requirement is that this mechanism must prevent any potentially stolen assets (due to ECDSA compromise) from being bridged to the quantum-safe blockchain. The source address in the transfer intent could hold funds obtained illicitly from others via ECDSA compromise sometime after the quantum inflection point. To address this, the amount a given source address is permitted to transfer is its balance at the quantum inflection point (the block height set in step 2) minus the sum total of all withdrawals since that block number (up to the most recent block). This can be augmented to also count transfers received by the source address from other addresses that explicitly authorised it, via the same intent mechanism, prior to the inflection point.
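The Solidity sketch below pulls together the checks from the two bullets above. The struct layout, the use of abi.encode for the signed message, and the signature handling are assumptions made for illustration (in particular, real wallets typically sign with an EIP-191/EIP-712 prefix, which is glossed over here); only the overall flow, namely recovering the signer, hashing the signature, looking up the intent, and comparing block heights, is taken from the description above.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Illustrative sketch of verifyTransferIntent(); not the deployed qMig contract.
contract QMigVerifySketch {
    struct TransferIntentSource {
        address from;        // source EOA
        uint256 srcChainId;  // chain ID where the tokens live today
        uint256 dstChainId;  // quantum-safe destination chain
        address destination; // destination address authorised by the source
    }

    /// Populated by registerTransferIntent() during the blue phase (see step 1).
    mapping(bytes32 => uint256) public intentBlockHeight;

    function verifyTransferIntent(
        TransferIntentSource calldata intent,
        bytes calldata transferIntentSig,
        uint256 inflectionPointBlockHeight
    ) external view returns (bool) {
        // 1. Recover the signer and confirm it is the source EOA, proving that the
        //    source authorised this destination (EIP-191/712 prefixing omitted).
        bytes32 digest = keccak256(abi.encode(intent));
        require(recoverSigner(digest, transferIntentSig) == intent.from,
            "Signer does not match the source EOA");

        // 2. Recompute the incognito transfer intent (Keccak-256 of the signature)
        //    and look it up among the recorded intents.
        bytes32 incognitoIntent = keccak256(transferIntentSig);
        uint256 registeredAt = intentBlockHeight[incognitoIntent];
        require(registeredAt != 0, "No transfer intent on record");

        // 3. The intent must have been registered before the quantum inflection point.
        require(registeredAt < inflectionPointBlockHeight,
            "Intent to transfer registered after the quantum inflection point!");

        // The bridge contract separately caps the bridgeable amount at the source's
        // balance at the inflection point minus all withdrawals since (not shown here).
        return true;
    }

    /// Standard 65-byte (r, s, v) ECDSA recovery helper.
    function recoverSigner(bytes32 digest, bytes memory sig) internal pure returns (address) {
        require(sig.length == 65, "Bad signature length");
        bytes32 r;
        bytes32 s;
        uint8 v;
        assembly {
            r := mload(add(sig, 0x20))
            s := mload(add(sig, 0x40))
            v := byte(0, mload(add(sig, 0x60)))
        }
        return ecrecover(digest, v, r, s);
    }
}
```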
### Combining FailSafe and qMig
The qMig approach is a low-friction means for users to prepare for accelerated breakthroughs in quantum hardware-based attacks. The user can continue to conduct business on today's networks while setting up a path to migrate assets to a quantum-safe network if needed. The intent to transfer can be implemented and recorded in a qMig contract on today's chains. It is also feasible in the near term to build out bridging support to an existing quantum-ready network such as QRL. The same infrastructure is then applicable to quantum-safe versions of EVM networks once they become available.
Users enrolled into the FailSafe system are afforded the quantum threat protections developed as part of the qMig tool. This is facilitated in two steps. First, as part of wallet address enrollment in the user's FailSafe contract shown in Figure 3, a separate call is made by the client to register an incognito transfer intent with the quantum migration (qMig) contract (as described in the previous section). Secondly, as part of the enrollment transaction, the user's FailSafe contract calls the qMig contract, registering an intent to transfer funds from the FailSafe contract to one or more user wallet addresses registered with the contract.
After the quantum inflection point, during the bridging of assets to a quantum-resilient network, the qMig contract can then take into account any transfers from the contract back to the wallet. To prevent fraudulent transfers due to ECDSA compromise, recall that the amount a given source address is permitted to transfer is its balance at the quantum inflection point, minus the sum total of all withdrawals since that block number (up to the most recent block), adjusted for any other transfer intents to the source address that were registered prior to the inflection point.
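As a rough illustration of the two enrollment calls described above, the sketch below shows a FailSafe-style account contract that, during enrollment, records its own intent to return funds to the enrolled user wallet. The interface and function names are assumptions based on this section; the client's separate wallet-to-destination intent registration (the first call) happens off this contract and is only noted in a comment.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IQMig {
    function registerTransferIntent(bytes32 incognitoIntent) external;
}

/// Illustrative FailSafe-side enrollment hook (not the deployed contract).
contract FailSafeAccountSketch {
    IQMig public immutable qMig;
    address public immutable owner;
    mapping(address => bool) public enrolledWallets;

    constructor(IQMig _qMig, address _owner) {
        qMig = _qMig;
        owner = _owner;
    }

    /// Called as part of wallet-address enrollment (Figure 3). The client has
    /// already registered its own incognito intent (user wallet -> destination)
    /// in a separate transaction; here the FailSafe contract records the reverse
    /// intent, FailSafe contract -> enrolled user wallet.
    function enrollWallet(address userWallet, bytes32 contractToWalletIntent) external {
        require(msg.sender == owner, "Not the account owner");
        enrolledWallets[userWallet] = true;
        qMig.registerTransferIntent(contractToWalletIntent);
    }
}
```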
## 4 Related Work
### Wallet Security
Below we describe some of the recent academic work focusing on wallet security, although the reader is referred to industrial reports1,2,3.
Footnote 1: [https://www.continuumloop.com/wp-content/uploads/2022/02/The-Current-and-Future-State-of-Digital-Wallets-v1.0-FINAL.pdf](https://www.continuumloop.com/wp-content/uploads/2022/02/The-Current-and-Future-State-of-Digital-Wallets-v1.0-FINAL.pdf)
Footnote 2: [https://www.mdpi.com/2076-3417/12/21/11180/pdf](https://www.mdpi.com/2076-3417/12/21/11180/pdf)
Footnote 3: [https://go.sensortower.com/rs/351-RWH-315/images/state-of-crypto-apps-in-europe-2022.pdf](https://go.sensortower.com/rs/351-RWH-315/images/state-of-crypto-apps-in-europe-2022.pdf)
Karantias [17] provides the first definition of a cryptocurrency wallet, which they model in a client-server paradigm. The authors categorize wallets based on whether they work for transparent or private cryptocurrencies,
what trust assumptions they require, their performance, and their communication overhead. For each type of wallet, the authors provide a description of its client and server protocols. They explore superlight wallets and describe how they differ from the superlight clients that have appeared in recent literature. The paper demonstrates how new wallet protocols can be produced by combining concepts from existing protocols. Finally, the paper evaluates the performance and security characteristics of all wallet protocols and compares them.
By analyzing source code, bytecode, and execution traces, di Angelo et al. (2019) derive usage scenarios and patterns. The authors discuss methods for identifying wallet contracts in a semi-automatic manner by looking at the deployed bytecodes and the on-chain interaction patterns. The authors extract blueprints for wallets and compile a ground truth. Furthermore, the paper differentiates characteristics of wallets in use and groups them into several types, and it provides numbers and temporal perspectives regarding the creation and use of wallets. For the 40 identified wallet blueprints, the authors compile detailed profiles. The authors analyze the data of the Ethereum main chain up to block 11,500,000, mined on December 22, 2020.
### Usable Security for Blockchain Users
Issues related to usable security in the cryptocurrency space are not particularly well understood, although some studies are starting to appear.
Specifically, Froehlich et al. (2019) provide a literature survey focusing on usable privacy and security for cryptocurrency, although much of the focus has been on users' perceptions of privacy in light of deanonymization that can be accomplished through address clustering, or more subtle attacks around mixers and privacy-friendly chains like Monero. We expect that new reports will start to appear in the near future focusing on end-user interactions, for instance (Froehlich et al., 2020).
Mangipudi et al. (2020) look at mental models around wallet usage. This work presents a data-driven investigation into the perceptions of cryptocurrency users towards multi-device wallets, using a survey of 255 crypto-wallet users. Their results revealed two significant groups of participants: Newbies and Non-newbies. These two groups differ statistically significantly in their usage of crypto-wallets. However, both groups were concerned about the possibility of their keys being compromised and yet were unfamiliar with the guarantees offered by multi-device wallets. After educating the participants about the more secure multi-device wallets, around 70% of the participants preferred them; however, almost one-third of participants were still not comfortable using them. The qualitative analysis revealed a gap between the actual security guarantees and the mental models of these participants: they were afraid that using multi-device wallets would result in losing control over keys (and, in effect, funds) due to the distribution of key shares. Moreover, considerations about the threat model further affected their preferences, signifying a need for contextualizing default settings.
Min et al. [21] attempt to portray DApp users through large-scale Ethereum data, seeking to present an understanding of the data aspects of users in the blockchain scenario beyond surveys and interviews. They built a series of datasets labeled with DApp names and extracted information about each address to enrich the data dimensions. They then visualized and analyzed the user profile dataset from several aspects and explored methodologies to divide user groups. The authors propose using the number of interactions per DApp category as the cluster input and classify the addresses with a SOM (self-organizing map) network. After that, they separated active addresses into four groups according to the purpose and frequency of use and discuss the differences between them and their sensitivity to the market. In addition, the paper gives examples of how to use transactional data. The authors combine the results of their analysis with previous research studies to summarize the profile of DApp users and examine user motivations, values, and the current DApp market.
Ghesmati et al. [12] focus on how well end-users understand privacy guarantees in the blockchain space. They conducted a study on user perception of and preference for Bitcoin privacy, investigating different add-on privacy techniques in Bitcoin as well as their implementation in practice. The authors showed the difference between users' preferences and the implementation of privacy techniques in practice. Most users preferred privacy coins rather than add-on techniques in Bitcoin. The results show that participants are more likely to accept delays rather than extra fees to achieve anonymity in Bitcoin. The participants also preferred indistinguishable privacy techniques rather than being flagged by monitoring tools. This raises important questions, as current privacy wallets offer CoinJoin transactions with equal-sized outputs that are distinguishable on the blockchain. They show that users who prefer better privacy are not likely to use Bitcoin, and they favor embedding built-in privacy features in Bitcoin.
Korir et al. [18] focus on the prevalence of decentralized identity approaches. There is a growing expectation that political and technical
initiatives towards digital identity will gather pace in the foreseeable future. However, user perspectives have not been a driving force in shaping those ongoing initiatives. The findings of this study point to the dominance of paper/card-based identity methods for online identity verification and a large gap between identity verification today and what it might be in the foreseeable future. The results suggest that technical narratives might not be a compelling driving force for future uptake and that, as previous work in identity management has highlighted, the user proposition should receive further thought. What seems most salient to drive adoption is the existence of supporting (infra)structures, the appeal of the list of available verifiers, and the low complexity of using a new identity wallet tool.
Mai et al. [19] explore user perceptions and misconceptions of cryptocurrency users (\(N=29\)), enriched with drawing and card assignment tasks. Although the study focused on Bitcoin and Ethereum, the findings can also be useful for improving the security and privacy of a large body of (existing or future) altcoins that likewise build on blockchain technology. The paper points out that flaws and inconsistencies in user mental models of cryptocurrency systems expose users to security and privacy risks when using current cryptocurrency tools. These risks include money loss, fraud, or deanonymization. Most importantly, the paper revealed major misconceptions related to the functionality and management of cryptographic keys which are not compensated for by the cryptocurrency tools. The findings explain why cryptocurrency users fail to manage their private keys securely and, as a result, frequently fall victim to money loss and fraud. Furthermore, users think that the blockchain is encrypted or oblivious, which prevents them from taking measures to safeguard their privacy. Another interesting result was that many participants were not aware that the amount of mining fees can be actively selected to influence the transaction speed. The authors propose several concrete enhancements to state-of-the-art cryptocurrency tools (e.g., wallets or exchanges) with the purpose of protecting users with misconceptions from security and privacy threats. Among others, the paper suggests automating key generation, management, and back-up as much as possible. With this work, the authors create a foundation for improving the usability of state-of-the-art cryptocurrency management tools to prevent security and privacy breaches.
A recent report by Wang et al. [26] highlights some of the common usage practices around Web3 wallets and provides compelling motivation for FailSafe. In this paper, the authors present the first in-depth study of
quantifying the risk of unlimited approval of ERC20 tokens on Ethereum. The study proposes a fully-automatic approach to detect the approval transactions, and reveals the high prevalence (60%) of unlimited approval in the ecosystem. The authors conduct an investigation to reveal the security issues involved in interacting with 31 UIs (22 DApps and 9 wallets) to send approval transactions.
The result shows that only a few UIs provide explanatory information (10%) and flexibility (16%) for users to mitigate the risk of unlimited approval. Furthermore, the paper performs a user behavior analysis to characterize five modes of user behavior and formalize good practice for using approved tokens. The result reveals that users (0.2% of user behaviors) barely follow good practice towards mitigating the risks of unlimited approval. Finally, the paper discusses two existing solutions attempting to address the trade-off between convenience and security of unlimited approval, and provides possible suggestions.
### Quantum-resistant Blockchains
Post-quantum cryptography (required for quantum-resilient blockchains) is being standardized by the National Institute of Standards and Technology (NIST). NIST's standardization effort for quantum-resilient signature schemes is currently evaluating a number of candidate schemes (CRYSTALS-Dilithium, FALCON, and SPHINCS+). All of these come with their own trade-offs, particularly with respect to key size, speed, and the re-use of the same key pair that is common across the EVM family of blockchains [27].
A future version of Ethereum is expected to support account abstraction, a unified representation of an account (rather than the two types that exist today: a smart contract account, and an externally owned account (EOA) with a corresponding ECDSA private key). The account abstraction model also aims to provide cryptographic agility, which would allow for post-quantum-safe signature algorithms.
* The account abstraction architecture has undergone several iterations. The first proposal dates back to 2016 with EIP-86 [1]. It was primarily focused on converting all accounts into contracts, with a more flexible security model than the current version of Ethereum (i.e., support for multisig and a framework to specify alternative signature algorithms beyond ECDSA).
* The next version of account abstraction was proposed in EIP-2938 [2], which was in part motivated by gas efficiency and introduced new transaction types and corresponding new opcodes. Next, EIP-3074 [3] proposed a mechanism by which an externally owned account can delegate control to a smart contract, enabling delegation of fees for blockchain transactions.
* The most recent account abstraction proposal under consideration is EIP-4337 [4]. Among its features is a representation of an account as a smart contract wallet with cryptographic agility for submitting requests (referred to as UserOperations) to the wallet. UserOperations can be signed using a quantum safe signature scheme. Under this proposal, after a network upgrade, the current user base with quantum vulnerable EOAs "can individually upgrade their wallets to quantum-safe ones," as noted by Vitalik.
A notable effort to build an entirely new quantum resilient blockchain is Quantum Resistant Ledger (QRL) [6], which uses XMSS [15] signatures. An affiliated project, EnQlave, is a wallet implemented as an Ethereum smart contract that uses a secondary signature (XMSS), resilient to quantum attacks; the scheme relies on a Merkle tree of cryptographic hashes. The idea is to transfer your tokens into this quantum resilient vault as a means to mitigate future unexpected events (i.e., a successful quantum attack).
While this mechanism is useful for protecting individual EOAs, it does not deal with the resulting fallout of such an attack: a loss of confidence in a system rooted in cryptographic trust. Any further transaction signed with an
ECDSA key would be suspect. Business as usual on a network in this state would likely cease. In case of such an event, there is a need for a mechanism that can securely move assets that belong to their rightful owner (rather than the attacker) to a quantum-resilient network.
## 5 Conclusions
FailSafe provides a set of safety and protection mechanisms built for a user base that does not necessarily follow best security practices at all times. FailSafe avoids over-reliance on any single defence mechanism: if one is bypassed, the next is in line to help minimise losses. This approach spans the entire lifecycle of a Web3 transaction, from de-risking Web3 asset positions (via auto-rebalancing assets to cold storage) to intercepting the attacker's transaction via the blockchain mempool if all other defences have failed.
Similarly, with FailSafe's forward security, the risk to the user's Web3 assets is reduced even if the underlying cryptography-based trust is compromised.